
Machine Learning with PeopleTools 8.58 - Model Deployment

Rahul Mahashabde
Architect

In this post, I will talk about the final step in the Machine Learning (ML) lifecycle: Model Deployment. This post is part of a blog series on Machine Learning with PeopleTools 8.58.

The earlier blog posts can be found here –

  1. Introduction to Machine Learning with PeopleTools 8.58
  2. Machine Learning with PeopleTools 8.58 - Data Acquisition
  3. Machine Learning with PeopleTools 8.58 - Data Modeling

In the earlier posts, I presented the different steps involved in creating and saving an ML Model using Oracle Cloud Infrastructure (OCI) Data Science Service.

In this post, I will talk about deploying the ML Model as a REST API and then consuming that REST API in PeopleSoft to show runtime inferences. This is a continuation of the series on Machine Learning in PeopleSoft. Once the ML Model is built and saved in OCI Data Science, it is available as an artifact for download and deployment.

The diagram below shows the high-level flow of deploying the ML Model and consuming it in PeopleSoft.

https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/488dc4d9-642a-492b-a7f4-e2801e060fdf/Image/e7101ebf628c96ff3e6be5798ed505e3/picture4.png

Aside from the OCI Data Science Service used for creating and saving the ML Model, we will use three other OCI services for ML Model deployment:

  1. OCI Cloud Shell - Oracle Cloud Infrastructure (OCI) Cloud Shell is a web browser-based terminal accessible from the Oracle Cloud Console. Cloud Shell is free to use (within monthly tenancy limits) and provides access to a Linux shell with a pre-authenticated Oracle Cloud Infrastructure CLI, a pre-authenticated Ansible installation, and other useful tools for following Oracle Cloud Infrastructure service tutorials and labs. More details about OCI Cloud Shell can be found here.
  2. OCI Functions - Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, and serverless Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the “Fn Project” open-source engine. More details about OCI Functions can be found here.
  3. OCI API Gateway - The API Gateway service enables you to publish APIs with private endpoints that are accessible from within your network, and which you can expose with public IP addresses if you want them to accept internet traffic. The endpoints support API validation, request and response transformation, CORS, authentication and authorization, and request limiting. More details about OCI API Gateway can be found here.

There are three steps in deploying and consuming an ML Model:

  1. Use the OCI Cloud Shell to deploy an OCI Function
  2. Create a REST API using OCI API Gateway
  3. Consume the REST API in PeopleSoft for runtime inference

 

Use the OCI Cloud Shell to deploy an OCI Function

Before you start using OCI Cloud Shell, the proper OCI Identity and Access Management (IAM) policies need to be created for using the Cloud Shell. Information on the IAM policies can be found here. Once the OCI Cloud Shell is accessible, you need to configure your tenancy to use OCI Functions. This involves creating IAM policies to give users access to repositories in OCI Registry, function-related resources, and network resources. You also need to create IAM policies that allow the OCI Functions service to access network resources and repositories in OCI Registry. The creation of these policies is described in detail in the OCI Functions documentation.
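As a rough illustration, the IAM policy statements involved typically take a form like the following (the group and compartment names here are placeholders; refer to the OCI Functions documentation for the exact statements your tenancy requires):

    Allow group ml-function-developers to manage repos in tenancy
    Allow group ml-function-developers to manage functions-family in compartment ml-compartment
    Allow group ml-function-developers to use virtual-network-family in compartment ml-compartment
    Allow service FaaS to use virtual-network-family in compartment ml-compartment
    Allow service FaaS to read repos in tenancy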

Now you are ready to create and deploy the OCI Function using OCI Cloud Shell. The OCI Cloud Shell comes with 5 GB of persistent storage. Download the saved ML Model artifact into this storage using the OCI Cloud Shell.
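One way to do this is with the pre-authenticated OCI CLI available in Cloud Shell. A minimal sketch, assuming you know the model's OCID (the OCID and output file name below are placeholders; verify the exact command against the OCI CLI reference or the steps in the example notebook):

    # Download the saved model artifact into Cloud Shell's persistent storage
    oci data-science model get-artifact-content \
        --model-id ocid1.datasciencemodel.oc1..<unique-id> \
        --file attrition_model_artifact.zip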

The detailed steps required to deploy the OCI Function are described in an example notebook provided by the OCI Data Science Service. You can find the example notebook when you open the OCI Data Science Notebook session; it is present in the following location - “ads-examples/ADS_Model_Deployment.ipynb”. Please go through all the steps to successfully deploy the OCI Function. Once the Function has been deployed, it can be tested to check whether it returns a proper prediction response.
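For example, a deployed function can be invoked directly from Cloud Shell with the Fn CLI by piping a sample JSON payload to it. A minimal sketch, in which the application name, function name, and payload attributes are placeholders for your own values:

    # Invoke the deployed function with a sample payload and inspect the prediction
    echo '{"Age": 35, "MonthlyIncome": 5000, "OverTime": "Yes"}' | fn invoke ml-deployment-app attrition-predict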

 

Create a REST API using OCI API Gateway

Once the OCI Function is deployed successfully, the next step is to create a REST API with the OCI Function as the backend. For this, we use the OCI API Gateway service. Before using the OCI API Gateway service, the tenancy needs to be configured to use it. In addition, OCI API Gateway users also need access to the functions defined in the OCI Functions service. The IAM policies required for both are described in the API Gateway documentation.

Once the OCI API Gateway is configured, we can proceed to create the REST API. The same Python notebook that was used to deploy the Function also describes in detail the steps needed to create and deploy the REST API. Please follow these steps to deploy the REST API successfully. Once the REST API is up and running, we can test the prediction output using different request payloads.
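For example, a quick test from Cloud Shell or any terminal might look like the following (the gateway hostname, deployment path, and payload attributes are placeholders for your own values):

    # Send a sample request to the API Gateway endpoint and check the prediction
    curl -X POST "https://<gateway-hostname>/<deployment-prefix>/predict" \
         -H "Content-Type: application/json" \
         -d '{"Age": 35, "MonthlyIncome": 5000, "OverTime": "Yes"}'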

You can also specify who can access the REST API which has been created. The API Gateway service allows you to do this in two ways –

  1. Using an ‘Authorizer’ function – This process involves creating an OCI Function which performs the task of Authentication and Authorization. This function verifies the identity of the caller and then determines the list of operations the user is allowed to perform on the REST API. The steps for doing this are described here - Using Authorizer Functions to Add Authentication and Authorization to API Deployments
  2. Using a JSON Web Token (JWT) - A JWT is a JSON-based access token sent in an HTTP request from a caller to a resource to authenticate the caller. JWTs are issued by identity providers (for example, Oracle Identity Cloud Service (IDCS), Auth0, Okta). When a caller attempts to access a resource and passes a JWT to authenticate itself, the resource validates the JWT with an authorization server using a corresponding public verification key. The steps for creating JWT-based Authentication and Authorization for the REST API are described here - Using JSON Web Tokens (JWTs) to Add Authentication and Authorization to API Deployments

 

Consume the REST API in PeopleSoft for runtime inference

In this step, we consume the REST API created by the OCI API Gateway service in PeopleSoft pages to show the prediction results. For consuming the REST API, we use the standard mechanism for consuming external services in PeopleSoft which is Integration Broker (IB).

Below, I am showing a small example of consuming the REST API using IB.

  • Step 1 – Convert the row of input into a JSON string

For calling the REST API, we have to pass a row of data for which a prediction is required; for example, we need to pass the attributes of a particular employee for whom the attrition risk is being predicted. The REST API itself accepts data in JSON format. Hence, the row of data in PeopleSoft needs to be converted into a JSON string. We will utilize the PeopleCode built-in JSON methods and objects to do this. A sample code block is shown below:

https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/488dc4d9-642a-492b-a7f4-e2801e060fdf/Image/6ad71b85d96d9b15cb416c45668cbad4/picture6.png
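A minimal PeopleCode sketch of this conversion, using the built-in JsonObject class, might look like the following (the record name HR_ATTRITION_DATA and its fields are hypothetical; substitute the attributes your model expects):

    /* Convert one row of employee data into a JSON request string            */
    /* (record and field names are hypothetical placeholders)                 */
    Function RowToJsonString(&empRow As Record) Returns string
       Local JsonObject &json = CreateJsonObject();
       &json.AddProperty("Age", &empRow.GetField(Field.AGE).Value);
       &json.AddProperty("MonthlyIncome", &empRow.GetField(Field.MONTHLY_INCOME).Value);
       &json.AddProperty("OverTime", &empRow.GetField(Field.OVERTIME).Value);
       Return &json.ToString();
    End-Function;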

  • Step 2 – Use IB to send the Request

Once we have the request payload in the form of JSON, we can send the REST API request. For doing this, we use PeopleSoft IB. We use the IB ‘ConnectorRequest’ function, which enables you to build a message object and perform a POST or GET using any target connector. With this function, you use the Message object to populate connector values. We also use the PeopleSoft-delivered IB_GENERIC message; response messages are returned unstructured in the IB_GENERIC message. A sample code using ‘ConnectorRequest’ and ‘IB_GENERIC’ is shown below –

https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/488dc4d9-642a-492b-a7f4-e2801e060fdf/Image/6ad71b85d96d9b15cb416c45668cbad4/picture8.png
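Along the same lines, a minimal sketch of the IB call, assuming the API Gateway endpoint accepts a POST with a JSON body (the endpoint URL is a placeholder, and &payload is the JSON string produced in Step 1):

    /* Send the JSON payload to the REST API through Integration Broker       */
    /* (the URL below is a placeholder for your API Gateway endpoint)         */
    Local Message &request, &response;
    Local string &result;

    &request = CreateMessage(Operation.IB_GENERIC, %IntBroker_Request);
    &request.IBInfo.IBConnectorInfo.ConnectorName = "HTTPTARGET";
    &request.IBInfo.IBConnectorInfo.ConnectorClassName = "HttpTargetConnector";
    &request.IBInfo.IBConnectorInfo.AddConnectorProperties("URL", "https://<gateway-hostname>/<deployment-prefix>/predict", %Property);
    &request.IBInfo.IBConnectorInfo.AddConnectorProperties("Method", "POST", %Property);
    &request.IBInfo.IBConnectorInfo.AddConnectorProperties("Content-Type", "application/json", %Header);
    &request.SetContentString(&payload);

    &response = %IntBroker.ConnectorRequest(&request);
    If &response.ResponseStatus = %IB_Status_Success Then
       &result = &response.GetContentString();   /* unstructured JSON response from the model */
    End-If;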

  • Step 3 – Consume this REST API on a PeopleSoft page

The PeopleCode above can be wrapped in various PeopleCode functions. We can create one function for converting a row of data into a JSON string and another one for calling the REST API using IB with the JSON string as the input. The prediction value returned by the REST API, which will be in a numerical format, can be converted to a string.

The example that we have been looking at relates to predicting employee attrition in an organization. Considering various HR attributes, the ML Model will predict whether the employee is an attrition risk or not. In this sample use case, our ML Model will return a value of 0 or 1. We can translate the value of 0 into “Low Attrition Risk” and the value of 1 into “High Attrition Risk”.
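Putting the pieces together, a hedged sketch of such a wrapper might look like the following (RowToJsonString and CallPredictionService are hypothetical helper names standing in for the Step 1 and Step 2 logic shown above):

    /* Hypothetical wrapper: RowToJsonString and CallPredictionService        */
    /* correspond to the Step 1 and Step 2 code sketched earlier              */
    Function GetAttritionRiskLabel(&empRow As Record) Returns string
       Local string &payload = RowToJsonString(&empRow);
       Local string &prediction = CallPredictionService(&payload);

       /* The sample model returns 0 or 1; translate to a display label */
       If &prediction = "1" Then
          Return "High Attrition Risk";
       End-If;
       Return "Low Attrition Risk";
    End-Function;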

There are a couple of ways in which the Function can be called –

    • In a batch program where prediction can be done for a batch of rows and then the result is shown in some PeopleSoft page
    • Online mode, where the function is called for each row of data when the PeopleSoft page is loading.

The sample below is the HCM Manage Succession page where the prediction function is called for each row and the Attrition Risk for an employee is shown in one of the grid columns.

 

These model deployment and consumption steps complete one round of the ML lifecycle.

 

Note to customers:  If you are interested in participating in a Proof of Concept/Early Adopter program on AI/ML with PeopleSoft, contact us at psoft-infodev_us@oracle.com

 
