This post is part of a blog series on Machine Learning with PeopleTools 8.58. In it, I will cover the final step of the Machine Learning (ML) lifecycle: Model Deployment.
The earlier blog posts can be found here –
In the earlier posts, I presented the different steps involved in creating and saving an ML Model using Oracle Cloud Infrastructure (OCI) Data Science Service.
In this post, I will walk through deploying the ML Model as a REST API and then consuming that REST API in PeopleSoft to display runtime inferences. Once the ML Model is built and saved in OCI Data Science, it is available as an artifact for download and deployment.
The diagram below shows the high-level flow of deploying the ML Model and consuming it in PeopleSoft.
Aside from the OCI Data Science service used for creating and saving the ML Model, we will use three other OCI services for ML Model deployment:
There are three steps in deploying and consuming an ML Model:
Use the OCI Cloud Shell to deploy an OCI Function
Before you start using OCI Cloud Shell, the proper OCI Identity and Access Management (IAM) policies need to be created for using the Cloud Shell; information on these IAM policies can be found here. Once OCI Cloud Shell is accessible, you need to configure your tenancy to use OCI Functions. This involves creating IAM policies that give users access to repositories in OCI Registry, to function-related resources, and to network resources. You also need IAM policies that allow the OCI Functions service itself to access network resources and repositories in OCI Registry. The creation of these policies is described in detail in the OCI Functions documentation.
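As a rough sketch, these policy statements typically take the following form (the group name and tenancy-wide scope are placeholders; in practice you would scope them to your own groups and compartments as described in the documentation):

```
Allow group faas-developers to manage repos in tenancy
Allow group faas-developers to manage functions-family in tenancy
Allow group faas-developers to use virtual-network-family in tenancy
Allow service faas to use virtual-network-family in tenancy
Allow service faas to read repos in tenancy
```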
Now you are ready to create and deploy the OCI Function using OCI Cloud Shell. OCI Cloud Shell comes with 5 GB of persistent storage; download the saved ML Model artifact into this storage.
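For example, the artifact can be downloaded with the OCI CLI that comes pre-installed in Cloud Shell (the model OCID below is a placeholder for your own saved model's OCID):

```shell
# Download the saved model artifact (a zip file) from the model catalog
oci data-science model get-artifact-content \
    --model-id ocid1.datasciencemodel.oc1..<your-model-ocid> \
    --file model_artifact.zip

# Unpack it for packaging into the function
unzip model_artifact.zip -d model_artifact
```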
The detailed steps required to deploy the OCI Function are described in an example notebook provided by the OCI Data Science service. You can find it when you open an OCI Data Science notebook session, at the following location - “ads-examples/ADS_Model_Deployment.ipynb”. Please go through all the steps to deploy the OCI Function successfully. Once the Function has been deployed, it can be tested to check that it returns a proper prediction response.
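The core of those steps can be sketched with the Fn Project CLI available in Cloud Shell (the application name, function name, subnet OCID, and payload shape here are hypothetical; the notebook gives the exact values for your deployment):

```shell
# Create an application to hold the function (subnet OCID is a placeholder)
fn create app ml-model-app \
    --annotation oracle.com/oci/subnetIds='["ocid1.subnet.oc1..<your-subnet-ocid>"]'

# Build and deploy the function from the directory containing func.yaml
fn -v deploy --app ml-model-app

# Smoke-test the deployed function with a sample payload
echo '{"input": "<row-of-features>"}' | fn invoke ml-model-app model-predict
```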
Create a REST API using OCI API Gateway
Once the OCI Function is deployed successfully, the next step is to create a REST API with the OCI Function as the backend. For this, we use the OCI API Gateway service. Before using it, the tenancy needs to be configured for the OCI API Gateway service, and API Gateway also needs access to the functions defined in the OCI Functions service. The IAM policies required for both are described in the API Gateway documentation.
Once OCI API Gateway is configured, we can proceed to create the REST API. The same example notebook used to deploy the Function also describes in detail the steps needed to create and deploy the REST API. Please follow these steps to deploy the REST API successfully. Once the REST API is up and running, we can test the prediction output using different request payloads.
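One quick way to test it is with curl; the hostname, path, and payload fields below are placeholders you would take from your own API Gateway deployment details:

```shell
# Send a sample row to the gateway endpoint and inspect the prediction
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"input": "<row-of-features>"}' \
     "https://<gateway-hostname>/<deployment-prefix>/predict"
```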
You can also control who can access the REST API that has been created. The API Gateway service allows you to do this in two ways –
Consume the REST API in PeopleSoft for runtime inference
In this step, we consume the REST API created by the OCI API Gateway service in PeopleSoft pages to show the prediction results. For consuming the REST API, we use the standard mechanism for consuming external services in PeopleSoft which is Integration Broker (IB).
Below, I am showing a small example of consuming the REST API using IB.
To call the REST API, we have to pass the row of data for which a prediction is required; for example, the attributes of the particular employee whose attrition risk is being predicted. The REST API accepts data in JSON format, so the row of data in PeopleSoft needs to be converted into a JSON string. We will use the built-in PeopleCode JSON classes and methods to do this. A sample code block is shown below:
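A minimal sketch of this conversion using the PeopleCode JsonObject class (the record buffer and field names here are illustrative placeholders, not the actual model features):

```
/* Build a JSON payload from a row of employee data.              */
/* &rec and its field names are hypothetical examples.            */
Local JsonObject &jsonRow = CreateJsonObject();
&jsonRow.AddProperty("Age", &rec.AGE.Value);
&jsonRow.AddProperty("MonthlyIncome", &rec.MONTHLY_INCOME.Value);
&jsonRow.AddProperty("JobSatisfaction", &rec.JOB_SATISFACTION.Value);

/* Serialize the object into the JSON request string */
Local string &payload = &jsonRow.ToString();
```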
Once we have the request payload as JSON, we can send the REST API request using PeopleSoft IB. We use the IB ‘ConnectorRequest’ function, which lets you build a Message object, populate its connector values, and perform a POST or GET using any target connector. We also use the PeopleSoft-delivered IB_GENERIC message; response messages are returned unstructured in IB_GENERIC. Sample code using ‘ConnectorRequest’ and ‘IB_GENERIC’ is shown below –
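A sketch of this call, assuming &payload holds the JSON string built earlier and &endpointURL holds the API Gateway endpoint (error handling omitted for brevity):

```
/* Send the JSON payload to the REST API through Integration Broker */
Local Message &request, &response;
&request = CreateMessage(Message.IB_GENERIC);

/* Point the message at the HTTP target connector */
&request.IBInfo.IBConnectorInfo.ConnectorName = "HTTPTARGET";
&request.IBInfo.IBConnectorInfo.ConnectorClassName = "HttpTargetConnector";
&request.IBInfo.IBConnectorInfo.AddConnectorProperties("URL", &endpointURL, %Property);
&request.IBInfo.IBConnectorInfo.AddConnectorProperties("Method", "POST", %HttpProperty);
&request.IBInfo.IBConnectorInfo.AddConnectorProperties("Content-Type", "application/json", %HttpProperty);

/* Attach the JSON request body and invoke the connector */
&request.SetContentString(&payload);
&response = %IntBroker.ConnectorRequest(&request);

/* The prediction comes back unstructured in the IB_GENERIC response */
Local string &result = &response.GetContentString();
```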
The PeopleCode above can be wrapped in PeopleCode functions: one function to convert a row of data into a JSON string, and another to call the REST API through IB with that JSON string as input. The prediction value returned by the REST API, which is numeric, can then be converted to a display string.
The example we have been looking at relates to predicting employee attrition in an organization. Considering various HR attributes, the ML Model predicts whether an employee is an attrition risk. In this sample use case, the ML Model returns a value of 0 or 1; we can translate 0 into “Low Attrition Risk” and 1 into “High Attrition Risk”.
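That translation can be as simple as the following sketch, where &prediction holds the string extracted from the REST API response:

```
/* Map the numeric prediction to a user-friendly label */
Local string &riskLabel;
Evaluate &prediction
When = "0"
   &riskLabel = "Low Attrition Risk";
   Break;
When = "1"
   &riskLabel = "High Attrition Risk";
   Break;
When-Other
   &riskLabel = "Unknown";
End-Evaluate;
```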
There are a couple of ways in which the Function can be called –
The sample below is the HCM Manage Succession page, where the prediction function is called for each row and the Attrition Risk for an employee is shown in one of the grid columns.
These Model deployment and consumption steps complete one round of the ML Lifecycle.
Note to customers: If you are interested in participating in a Proof of Concept/Early Adopter program on AI/ML with PeopleSoft, contact us at firstname.lastname@example.org