Phil Whitwell: Thanks very much to Pieter for putting together this excellent, trail-blazing introduction to using the Generic Integration Protocol without the need for Oracle Integration Cloud.
Pieter ‘t Hoen: I was a consultant at Oracle for nearly six years, working with OPA. I am now CTO at Concordia Legal, where with OPA, now known as Intelligent Advisor, we automate complex legal decisions. Please see www.concordialegal.nl (in Dutch) or contact me at email@example.com for more information.
In this blog we share a sample implementation of the Generic Integration Protocol to help get you started. The Generic Integration Protocol REST API is the underlying API that Oracle Integration Cloud (OIC) uses to connect to an Intelligent Advisor interview, and it is an alternative to the SOAP-based Connection Framework. It provides a different approach for custom integrations where more than one application is to be integrated and OIC is not in the mix.
We show the necessary steps for loading and saving data for an interview from a connected system. As an example, we use an interview where an employee's work history determines whether he or she is eligible for a guaranteed number of working hours each month (this is a real use case from WAB/Wet Koolmees in the Netherlands). We integrate the interview with a WordPress site, which in this case acts as the middleware, but the steps are a useful guide on how to start using the new integration protocol with your own middleware environment.
The example gives the minimal required steps, and we only touch on the more complex issues (security, complex data models, multiple query parameters, etc.); covering these topics in detail is worthy of separate blogs and discussions. Please also see the relevant links to the Intelligent Advisor documentation at the end of the blog for the more in-depth reading that is required before embarking on your full implementation.
As an interview, we use our application for the Dutch legislation WAB/Wet Koolmees, where an employee can, based on his or her work history, claim a minimum number of hours to work each month. The full calculation is complex and requires the actual hours worked for the known contracts, but here we focus on a simple example that at the start of the interview loads just the name of the employer and returns the fixed number of hours the employee can claim. The full integration of the model requires mapping in and out entities and attachments. This first example is to get you started, and it uses the same JSON format as the Batch REST API, so you can extend the model later.
Besides the loaded and saved attributes, we also use a query parameter caseID for the interview in order to retrieve the specific case. Typically the interview query parameters would be encrypted, and there would be several of them, including authentication parameters (the logged-in user, a nonce for WordPress, etc.), but this is skipped for the sake of brevity.
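As a small sketch of what such a start link looks like in code, the snippet below builds the interview start URL with its query parameter, using the local hub host and deployment name from this example (adapt both to your environment; extra parameters such as authentication tokens would simply be added to the dictionary):

```python
from urllib.parse import urlencode

def start_session_url(hub_base, deployment, params):
    """Build an Intelligent Advisor interview start URL with query parameters."""
    return f"{hub_base}/web-determinations/startsession/{deployment}?{urlencode(params)}"

# Our local hub and the WABBlog deployment, with the single caseID parameter:
print(start_session_url("http://localhost:7001/cllocalhub", "WABBlog", {"caseID": 3}))
# → http://localhost:7001/cllocalhub/web-determinations/startsession/WABBlog?caseID=3
```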
Below we show a snippet from a web page where several cases have been prepared, with the employer names pre-filled to load at the start of the interview; one interview has already concluded and saved the number of hours the employee is eligible for. The table shows the case number, the employer name stored in the database for the case, a link to the Intelligent Advisor interview if the case still needs to be processed, and the resulting offer in the fixed number of hours that must be offered for this specific case. Employer Pack and Ship in this example needs two separate calculations for two employees who, based on their individual cases, will have individual outcomes.
If you want to create your own first integration after this walkthrough you will need the following:
The integration will roughly take the following 4 steps:
Prepare the hub by defining the Generic Connector. If you have only one Generic Connection, then this will automatically be used for your deployment. Otherwise you will have to manually select it for the deployed interview in step 2.
Choose generic integration as the type, supply the name of your connector (localWPGeneriek in our case), choose your workspace for the deployments, and enter the URL of the site with the endpoints (in our case our local WordPress installation) and the login information needed for your REST endpoints when you configure them. The status should be green/online before you proceed.
For the interview, define the needed inputs and outputs and supply the names the Generic Integration Protocol will use. Make sure your interview has a submit. Also make sure your interview has at least one input or output defined, as otherwise the hub will consider the use of the connector irrelevant and treat it as a regular interview.
In the interview's data mapping options, choose to use the Generic Protocol. Note that you state only that the Generic Integration Protocol will be used, not the name of a specific connection. This is different for those used to setting up a connection with the Connection Framework or with Service Cloud, where you pick a specific connection; here, the choice of the specific connection is made in the hub for the deployment.
Map all the inputs by giving them a name for the Generic Protocol and setting them to load at start. Here we only load the name of the employer:
Map the output attributes. Here we only map the number of fixed hours that are calculated:
Deploy the interview and you will receive a notification from the Intelligent Advisor that the interview cannot be deployed as the endpoints of the interview still need to be registered.
Inspection on the hub of the deployed interview (in our case WABBlog) shows the following error, as the endpoints are not yet coupled. These will be set in step 4, after which the stop signs will turn ok/green.
Inspection of the hub through the REST API already shows the connections and the operations automatically associated with the interview for the Generic Integration Protocol. The REST endpoints still have to be created (step 3) and connected to the interview (step 4).
Using Postman we inspect the current operations on the hub. With URL <your hub>/api/auth we first retrieve a bearer token (OAuth 2.0 authentication), and with a GET on <your hub>/api/latest/operations we inspect the current state of the operations for our interview, in this case for the LOAD:
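What Postman does for us here can also be sketched in code. The snippet below only builds the two requests; the client id and secret are placeholders, and the token exchange shown follows the standard OAuth 2.0 client-credentials flow, so verify the exact auth details against your hub's REST API documentation before relying on them:

```python
import base64
import urllib.request

def auth_request(hub, client_id, client_secret):
    """POST <hub>/api/auth: exchange client credentials for a bearer token.
    Standard OAuth 2.0 client-credentials flow -- an assumption to verify
    against your hub's documentation."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return urllib.request.Request(
        f"{hub}/api/auth",
        data=b"grant_type=client_credentials",
        headers={"Authorization": f"Basic {creds}",
                 "Content-Type": "application/x-www-form-urlencoded"},
        method="POST")

def operations_request(hub, token):
    """GET <hub>/api/latest/operations using the retrieved bearer token."""
    return urllib.request.Request(
        f"{hub}/api/latest/operations",
        headers={"Authorization": f"Bearer {token}"},
        method="GET")
```

Sending these requests (with `urllib.request.urlopen` or any HTTP client) returns the same JSON operation listings we inspect with Postman below.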
If all went well, we see in the list of operations our interview with its LOAD and SAVE operations generated by the deployment. The link to the interfaces gives the input and output parameters for LOAD and SAVE respectively; these determine the format of the response from those operations.
The details for the status operation endpoint, the actual endpoint for the load, and the query parameters, however, still need to be set (step 4). After step 4 is completed successfully, retrieving the listed operations again will show the LOAD above with the status and load endpoints added in.
Note the %20 in the interfaces url used to represent the spaces in the name. We will use this convention also in step 4 to ensure the spaces are set properly when setting the further endpoints.
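This percent-encoding can also be produced programmatically rather than by hand; for example (the interface name below is illustrative):

```python
from urllib.parse import quote

# quote() encodes spaces as %20, matching the convention used in the
# hub's interface URLs for operation names containing spaces.
print(quote("WABBlog LOAD v1"))
# → WABBlog%20LOAD%20v1
```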
The connected application will need to supply a LOAD, a LOADStatus, a SAVE, and a SAVEStatus endpoint for the interview. The status operations are simple and can simply return OK, but the LOAD and SAVE operations must be able to handle the specific input and output attributes as JSON, using the same convention as the Batch REST API, as well as process the query parameters.
The endpoint code is very specific to your connected system. Some generic patterns apply, however: the status endpoints and the LOAD operation are GET operations, while the SAVE operation is a POST. You can call the endpoints whatever you like, but for the sake of maintainability and your sanity it is wise to name each endpoint after the corresponding hub endpoint name and version, as in our PHP plugin example:
Deploying the plugin gives us the 4 needed endpoints that we will configure on the hub in Step 4:
The body of the operations is again built in PHP and tested (with Postman) before being mapped to the interview, to ensure correct operation; small steps ensure that we know where any error is coming from.
The load and save status endpoints in their most basic form can be a simple return of no value; they just need to be there. For more realistic scenarios these methods can do much more work, for example checking the availability of critical resources such as external systems and return an error in their absence.
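Our plugin implements these endpoints in PHP; as a language-neutral illustration, a status endpoint in its most basic form amounts to nothing more than the following WSGI-style sketch:

```python
def load_status(environ, start_response):
    """Minimal status endpoint: a GET that returns an empty 200 response.
    A fuller implementation could first check critical resources here
    (database, external systems) and return an error status in their absence."""
    start_response("200 OK", [("Content-Type", "application/json")])
    return [b""]
```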
For the LOAD and SAVE operations in the backend, our WordPress site, we need to store the incoming data for the interviews and also store the result. We do this in a simple table where the cases are stored for use in the LOAD and SAVE operations.
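A minimal sketch of such a cases table (the table and column names are our own illustration; adapt them to your schema conventions):

```sql
-- Cases table in the WordPress database: employer is read by the LOAD
-- operation, fixed_hours is written by the SAVE operation after submit.
CREATE TABLE wp_wab_cases (
    caseID      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    employer    VARCHAR(255) NOT NULL,
    fixed_hours DECIMAL(6,2) NULL  -- NULL until the interview has been submitted
);
```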
Our LOAD operation uses the query parameter caseID to retrieve the name of the employer for the case we are processing. What is important is that we return the result in the same convention as used for the Batch REST API, and that we supply an id in the body of the response.
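As an illustration of such a response body, the sketch below builds one in code. The mapping name employer_name is an assumption for this example; the exact field names and nesting are defined by the interface generated for your deployment (step 2), so inspect that interface rather than copying this shape verbatim:

```python
import json

def load_response(case_id, employer):
    """Illustrative LOAD response: an id plus the mapped input attribute as a
    name/value pair, in the style of the Batch REST API convention."""
    # "employer_name" is the mapping name we assume was set in step 2.
    return {"id": case_id, "employer_name": employer}

print(json.dumps(load_response(3, "Pack and Ship")))
```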
With the load operation in place, we test the response:
Our LOAD for the full implementation uses multiple encrypted query parameters, authentication, error handling, a complex data model, encoded attachments, etc., but run your first trial with a compact example to get a good feel for what goes where and how to resolve the errors that come up.
Our SAVE operation is very straightforward and updates the number of eligible hours for the correct caseID. After the interview has been submitted, the WordPress page reflects the saved number of hours in the snippet example table, and the Intelligent Advisor link is suppressed as the calculation has been completed.
As the last step, the hub deployment needs to have its LOAD and SAVE operations directed at our specific endpoints. This is done with the PUT method.
We set the LOAD with the newly built status and load endpoints. The query parameters are a comma-separated list of parameter names; in our case this list consists of only caseID.
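For orientation, the PUT body for the LOAD operation will look roughly like the fragment below. Beware: the property names and endpoint paths here are illustrative assumptions only; take the exact property names from the GET response for the operation on your own hub (step 2), and the paths from your own plugin:

```json
{
  "endpoint": "https://<your site>/wp-json/cl/v1/WABBlogLOADv1",
  "statusEndpoint": "https://<your site>/wp-json/cl/v1/WABBlogLOADStatusv1",
  "queryParameters": "caseID"
}
```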
And we also set the SAVE operation. Note that you can have significant fun debugging if you accidentally switch the status and save endpoints and then wonder why it does not work.
If all went well, then we can revisit our deployment of the interview and after a refresh/update we see that we have a green result for LOAD and SAVE.
In our case, http://localhost:7001/cllocalhub/web-determinations/startsession/WABBlog?caseID=3 will now start the Intelligent Advisor interview for case 3, load the name of the specific employer, and save the eligible number of hours to the case.
Note that updating either the inputs or outputs of the interview will, for the deployment, automatically update the names, interfaces, and versions of the REST endpoints used by the interview; these will need to be checked and remapped (step 4), and the code for LOAD and SAVE will need to be adjusted accordingly to handle the new attributes. So, if you add an extra input attribute after our initial deploy, a WABBlog LOAD v2 will be created, and with the appropriate GET (inspect) and PUT (update) methods you will need to adjust the deployment to use the correct (modified) endpoints.
Thank you for reading this walkthrough of the Generic Integration Protocol. I hope it helps you in building your first integrations. We touched on many topics that we did not explore here and are continuing to develop. I am very interested in your experiences: what worked for you and what issues you had to tackle.
Lastly, a large thank you to Ben O’Keeffe and Phil Whitwell from Oracle for help in digging through the material presented here.
Please visit https://www.oracle.com/applications/customer-experience/service/intelligent-advisor/policy-automation.html for the latest Intelligent Advisor documentation.
Please visit the link on administration of the hub using the REST API to learn how to inspect and configure the details of the protocol.
Title image credit: Ben White via Unsplash