
The Integration blog covers the latest in product updates, best practices, customer stories, and more.

Recent Posts

Recover unsaved changes for an Integration edit session

This blog introduces a new feature coming in the Oracle Integration Cloud (OIC) November release. Integration developers can lose their work while editing integrations due to many unforeseen circumstances: a browser crash, lost network connectivity, a server going down, and so on. Depending on the state of the edited integration, the loss can range from minimal changes to complicated mappings and transformations between applications. This blog describes how the integration developer can recover those lost changes.

What is recovered?

- Changes made by the integration developer after fully completing an action. For example:
  - Adding an Invoke action is considered complete only after the developer completes the configuration, clicks the Done button on the Summary page, and returns to the canvas.
  - Adding a Logger action is considered complete only after the developer clicks the Close button on the Configure Logger page and returns to the canvas.
  - Deleting an action is considered complete only after the developer clicks the Delete button, confirms the deletion, and returns to the canvas.
- Changes made in the Mapping editor after the developer clicks Validate. If the developer performs a few mappings and clicks Validate, and a crash then occurs, any changes made before clicking Validate are recovered.

What is not recovered?

- Layout changes made by the integration developer
- Map My Data integration changes
- Changes made while configuring an action. For example:
  - If the developer is configuring a map in the mapping editor and the browser crashes, changes made since the last click on Validate are lost.
  - If the developer is configuring an adapter and a crash occurs before returning to the canvas, those changes are lost.
- Changes made in the Mapping editor when it is launched from an assignment variable

Who can recover unsaved changes?

Only the integration developer who was making the changes can recover the unsaved data. This restriction is in place because only that developer has the context to decide whether or not to recover the unsaved changes.

Details

The following section describes in detail how recovery works. Assume that an integration developer is editing an integration. The following screenshot shows that the developer has just created a scheduled integration.

Now the user adds an Invoke with a Map and a Note.

Let's assume that the browser crashes or is accidentally closed. When the same user logs back in and navigates to the Integration Assets page, the integration that was being edited displays an UNSAVED CHANGES badge and is in a locked state.

It also displays the Edit icon. This is a deviation from the past; only the last user editing the integration sees this option.

It also displays a Resume Edit option in the menu. This too is a deviation from the past; only the last user editing the integration sees this option.

Now the user clicks the Name, the Resume Edit menu option, or the Edit icon. This takes them to the integration canvas with the integration recovered, as shown below. The user also sees the message "Unsaved changes for integration INTEGRATION1 (1.0) have been recovered and are displayed in the canvas, but not saved yet."

If the user clicks Close, they get a prompt, as shown below, to either save or discard the changes, because the recovered changes have not been saved yet.

If the user clicks Save, the unsaved changes are saved. If the user clicks Discard, the changes are lost, as shown below.

Since users are accustomed to clicking Unlock to restart editing an integration, clicking Unlock now shows the following message: "Integration was locked by you 5 minutes ago. You can edit and recover your changes or you can unlock and discard unsaved changes."

Clicking Unlock and Discard unlocks the integration, discards the unsaved changes, and keeps the user on the Assets page.

Clicking Edit & Recover recovers the unsaved changes and takes the user to the canvas.

If an Administrator logs in instead of the user who was editing the integration, they will not see the UNSAVED CHANGES badge for the integration.

The Administrator will not see the Edit icon either; only the View and Menu icons are displayed.

The Administrator will also not see the Resume Edit option in the menu, but rather the Edit option.

However, if the Administrator clicks the Unlock option, they get a message saying "Integration was locked by icsdeveloper on Fri, Sep 25th, 2020 04:32:47 PM MDT. Only icsdeveloper can recover the changes. Please contact icsdeveloper if you want to recover unsaved changes or select unlock to discard unsaved changes." So the Administrator cannot recover the changes, only unlock and discard them.

If the crash occurs while the user is in the Mapping editor, they will be able to recover changes up to the point at which they clicked the Validate button.

Additional things to note

Recovery is also supported under the following conditions:

- When the user logs out, intentionally or unintentionally, during an edit session.
- When the user leaves the integration edit session idle and a session timeout occurs.
- When a managed server restart happens.



Oracle Integration November 2020 Update for Technical Adapters

Overview

Oracle Integration has rich capabilities for connecting with diverse technical endpoints, whether service/API based like SOAP and REST, messaging based like Kafka and JMS, or database management systems like ADW and ATP. OIC supports a wide array of technical adapters, enabling customers to connect, integrate, and reap the advantages of core digital technologies in their business processes. Oracle Integration continues to improve the technical adapter capabilities, further empowering customers in their digital transformation journey. In the November 2020 update, we are pleased to announce the following updates to the technology adapters.

REST Adapter Improvements

The Oracle Integration REST adapter is a versatile adapter that helps customers communicate with a wide array of products and services, such as Oracle Functions, OCI Object Storage, and AWS S3, making it one of the cornerstone adapters for customers' modern integration solutions. To further empower customers, the REST adapter has been enhanced with the following capabilities:

- Generation of a sample cURL command
- Standard OAuth policy improvements
- Connectivity properties improvements

Let us now dive deeper into each enhancement.

Generation of a sample cURL command

Oracle Integration customers use the REST adapter to integrate with a wide array of products and services, configuring a variety of options such as the security policy, headers, and parameters. Because these options are configured at different stages, the integration developer can lose visibility into the overall configuration. To address this, the REST adapter wizard's configuration summary page now provides an option to generate a sample cURL command. A new link has been added to the summary page; clicking it generates the sample cURL command. Please note that this command cannot be executed on the command line as-is, since some information is missing: you need to fill in parameter values, security configuration parameters, and so on before executing it.

Let us take a quick look at the configuration summary screen. As you can see below, the integration developer now has an option on the summary page to generate a sample cURL command. Clicking the "Generate a sample cURL" link generates the command. This neatly summarizes the configuration performed by the integration developer in a format they are well accustomed to, and provides a handy tool to validate the configuration of a specific invoke beforehand, i.e., before executing it as part of the flow.

Standard OAuth Policy Improvements

OAuth is a widely accepted open standard for exchanging security tokens, simplifying access to APIs, resources, and services in the digital world. One challenge, though, is that the OAuth specification defines only a high-level exchange of information between the participating parties, leaving the implementation details to the implementor. The Oracle Integration REST adapter supports a variety of OAuth security policies out of the box: OAuth Client Credentials, OAuth Resource Owner Password Credentials, and OAuth Authorization Code Credentials. This variation has created a unique challenge for Oracle Integration in seamlessly connecting to different products and services using a single adapter, as different vendors require different mechanisms for exchanging the required security token.

To make it simple for customers to connect and communicate with all their digital REST services using the supported OAuth security policies, the REST adapter now supports an additional configuration parameter, "Client Authentication", on the connection page. This parameter lets the integration architect configure how Oracle Integration exchanges the security token over the wire; for example, whether the client id and secret are sent as part of a basic auth header or in the request body. This enables Oracle Integration customers to seamlessly connect and communicate with a variety of digital products and services without requiring a custom OAuth policy to work around differences in vendor implementations. Looking at the new parameter option on the REST adapter connection page below, the integration architect can configure how to send the security token to the desired service simply by selecting the highlighted option.
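To make the Client Authentication option concrete, here is a minimal Python sketch of the two common ways a client-credentials token request is sent over the wire: credentials in a Basic auth header versus credentials in the form body. This is only an illustration of what the adapter handles for you declaratively, not OIC code; the token URL and credentials are placeholders.

```python
import requests

TOKEN_URL = "https://example.com/oauth2/token"  # placeholder endpoint
CLIENT_ID = "my-client-id"                      # placeholder credentials
CLIENT_SECRET = "my-client-secret"

# Style 1: client id/secret sent as an HTTP Basic auth header.
resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
)

# Style 2: client id/secret sent in the form body instead.
resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
)

print(resp.json().get("access_token"))
```

Some token endpoints accept only one of these styles, which is exactly the difference the new Client Authentication parameter lets you configure without a custom policy.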
Connectivity Properties Improvements

The Oracle Integration REST adapter is used in a wide variety of use cases and requires additional flexibility, such as querying and overriding the base URI, relative URI, or even the absolute endpoint URI for scenarios where you want to conditionally change the endpoint based on the received payload. The connectivity properties give the integration architect great flexibility to query and update the endpoint through the mapper, based on real-time information received in the flow.

The connectivity properties have been further enhanced to support trigger requests as well; that is, with the November release you will be able to access connectivity properties under the trigger request node. This is really helpful for building contextual endpoints based on the current trigger configuration. Consider a scenario where you would like to pass the location of the next integration flow in the response of the current trigger flow. The ability to consume the trigger REST base URI along with an XSL function allows you to do exactly that: in the example shown, you can see how we can build the next flow's access URL by accessing the current trigger's connectivity properties.

The connectivity properties for the invoke endpoint have also been enhanced to support skipping control characters in the response. This is helpful when a third-party application's response includes control characters that you would like to exclude while parsing the response.

Kafka Adapter Improvements

Apache Kafka is one of the modern pillars of customers' big data and digital ecosystems, where high volumes of messages are ingested and consumed to publish the state of the business at a rapid pace. The Oracle Integration Kafka adapter enables customers to decouple large components of their business applications by broadcasting and consuming business events on Kafka without writing any code. The Kafka adapter already supports a wide array of use cases, such as publishing messages to a specific topic and partition, consuming messages from a specific consumer group, and reading from the beginning or the latest offset of a topic.

With the November 2020 update, the Kafka adapter now supports configuring a Kafka connection as a trigger for an integration flow. Please note that this is only supported for Kafka hosted behind a corporate firewall or in a private network configuration, using the connectivity agent. This enables customers to consume high volumes of messages from a Kafka topic and process them in the application flow. As an example, a fully automated robotic manufacturing factory placing real-time manufacturing updates on Kafka topics can now have those updates consumed in applications through Oracle Integration.

Let us now take a look at the configuration of Kafka as a trigger. The key aspect is the polling interval; the rest of the configuration is very similar to consuming messages from a topic using the invoke pattern. Once you have given the endpoint a name, the next screen takes the configuration of the topic and consumer group along with the polling interval, which is used by the Oracle Integration connectivity agent to poll for new messages on the configured Kafka instance. As you can see in the configuration shown, all the integration architect needs to configure is the consumption of messages from the Kafka topic (topic, partition, consumer group, etc.) along with the polling frequency. Once configured, messages are consumed and made available to downstream processing.

One more key piece of information to take note of is the Maximum Number of Records to be fetched field. The integration architect can choose the maximum number of records to fetch per poll, say 1000, based on the customer's business need. The Kafka adapter honours the 10 MB payload limit by fetching either the maximum number of records specified by the integration architect or as many records as can be packed into a 10 MB payload, whichever is smaller. This optimizes the number of records read per poll within OIC's technical limits, while letting the integration architect set a limit based on the customer's business requirements.
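If you want a mental model of what the adapter and connectivity agent do on each polling interval, here is a hedged sketch using the kafka-python package. The broker address, topic, and consumer group are placeholders; the adapter itself requires no code, and this only mirrors the polling-interval and maximum-records concepts discussed above.

```python
from kafka import KafkaConsumer

# Placeholders: in OIC these values come from the Kafka connection
# and the trigger configuration wizard.
consumer = KafkaConsumer(
    bootstrap_servers="kafka.internal.example.com:9092",
    group_id="oic-demo-consumer-group",
    auto_offset_reset="earliest",  # analogous to "read from beginning"
)
consumer.subscribe(["manufacturing-updates"])

# Each poll fetches at most max_records messages; the OIC adapter
# similarly caps a poll at the configured Maximum Number of Records
# or the 10 MB payload limit, whichever is hit first.
batch = consumer.poll(timeout_ms=5000, max_records=1000)
for partition, records in batch.items():
    for record in records:
        print(partition.topic, record.offset, record.value)
```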
The next key enhancement is that the Kafka adapter now supports integration with the Confluent Kafka platform for producing and consuming messages. Confluent is one of the prominent Kafka providers in the market, offering Kafka features and capabilities in a pay-as-you-go cloud service model that meets customers' digital-scale needs.

Summary

This concludes the enhancements made to the technical adapters, viz. REST and Kafka, in the November 2020 update of Oracle Integration. Oracle Integration continues to invest significantly in the technical adapters, helping customers in their digital transformation journey by meeting their modern use cases. I hope you enjoyed the blog and are eager to consume these enhancements and reap the advantages in your business processes.


Track scheduled instances from submission

"Just clicked on Submit Now action, and when I navigate to Tracking there's no instance! Kind of lost, I click Refresh multiple times to know whether my integration actually triggered." Or "Started a schedule, but on Tracking there's nothing to indicate I have started something until the instance actually starts executing!" Sounds familiar? We have an enhancement to help you with a more streamlined monitoring experience for scheduled integrations. Synopsis: As soon as you trigger a Submit Now or start a schedule, an integration instance gets created immediately. This also shows up on Tracking and confirms not only the fact that your action is successful but also allows you to track the execution no sooner than you submit. Pre-requisite: You should be on Oracle Integration Cloud version 20.37960 or later. Details: Till now when you triggered a Submit Now, a banner slid up with a confirmation message that included the Request ID of the submission. This changes now. With this enhancement, the message will now include the Instance ID. Clicking on it will directly take you to the Tracking page and display the triggered instance. On the Tracking page, the triggered instance no longer shows the Request ID. It is not required for tracking the scheduled instance anymore. However, you can still directly navigate to the Schedule Overview page from the Tracking page using the menu which appears on hovering over an instance. Clicking on the Eye icon launches the Activity Stream. And you can observe some interesting new data there. Schedule request submitted:  This denotes the point at which the request to trigger the scheduled integration was submitted Schedule request started running:  This shows the time when the submitted request started executing Schedule paused:  This shows the time when a scheduled was paused Schedule resumed:  Similarly, this indicates the time when the schedule was resumed (from paused state) You will also notice that two new states have been introduced for an instance. Waiting: This shows the scheduled integration instance is waiting to execute. Typically this is the state of an instance when it is scheduled to execute sometime in future Paused:  This indicates the schedule has been paused, so the instance is also paused. In both the above instance states (Waiting and Paused), users can choose to Abort the instance using the Abort button that shows up on the row on hover There is also a new menu option ("Schedule") which allows you to directly navigate to the Future Runs page for the integration. Clicking "Back" button on the Future Runs page brings you back to Tracking.   What happens when you abort an instance in Tracking? Suppose you have defined a schedule to execute every 24 hours (daily 9:00AM) starting tomorrow (Tuesday). As soon as you start the schedule, an instance gets created and moves to Waiting state. It will trigger the integration on Tuesday at 9:00AM as scheduled. Till then it remains in Waiting state. For some reason you want to abort the schedule as you do not want it to execute on Tuesday. Using the new Abort button, you can accomplish this. Aborting the instance will move the instance state to Aborted, and it will reflect on the Activity Stream as such. Also immediately, another instance gets created which again moves to Waiting state. Don't get confused by this. This is the instance for the next day (Wednesday) which is now created and is waiting to execute. To explain this new behavior, in the below figure InstanceId 10000015 was aborted. 
As soon as you refresh the page, you will find the state shows Aborted; and a new InstanceId 10400001 gets created. And this instance is in Waiting state. This new instance is for the next run that gets submitted automatically on aborting the previous instance. Please note: The same behavior will also be observed when you do not abort the instance and allow it to run. It starts running at 9:00AM Tuesday and soon the next day's instance also gets created and moves to Waiting state.   Here are the main highlights of this change in behavior: Executing Submit Now shows instanceId instead of requestId Previous behavior New behavior   Clicking on the banner link immediately shows the instanceId in Tracking Previous behavior New behavior Activity Stream shows new milestones Previous behavior New behavior Pausing a schedule shows the instance as Paused Previous behavior There was no change observed in Tracking as the instance waiting to execute was never displayed New behavior Resuming the schedule moves the instance back to Waiting state (waiting for execution) Previous behavior There was no change observed in Tracking as the instance waiting to execute was never displayed New behavior Scheduled instance in Waiting or Paused states can be Aborted in Tracking Previous behavior Runs could only be aborted from Future Runs New behavior New menu item to navigate to Future Runs Previous behavior You had to click on "Request Id" link to go to Future Runs page New behavior New Schedule menu item to navigate to Future Runs page

"Just clicked on Submit Now action, and when I navigate to Tracking there's no instance! Kind of lost, I click Refresh multiple times to know whether my integration actually triggered." Or "Started a...

Oracle Integration Announcements On Home Page

What are OCI Console Announcements?

OCI Console Announcements are messages sent to tenant administrators' email to notify them about service status information that will impact their tenancy. Tenant administrators can view the most recent announcements, as well as past announcements, by clicking the bell icon on the Home Page of the OCI Console. Learn more about OCI Console announcements here.

Clicking the bell icon in the OCI Console navigates users to the OCI Announcements page, which displays announcements related to all OCI cloud services, as shown in the picture below.

Announcements on the Oracle Integration Home Page

Since OCI announcements are sent only to the email of tenant administrators, Oracle Integration administrators don't receive these emails and tend to miss key updates. From the November 2020 quarterly release onwards, users can view all announcements related to the Oracle Integration service on the Oracle Integration Home Page. They can also continue to view these announcements on the OCI Console announcements page as before. Since most Oracle Integration users spend their time within the Oracle Integration product, the ability to view all announcements in the Oracle Integration console will be of immense help to them.

Please note: this is one of the upcoming features that will be available only on Oracle Integration Generation 2 instances, from the November 2020 release onwards.

The Oracle Integration team typically sends out two different types of announcements:

- Maintenance: typically sent to inform Oracle Integration administrators about an upcoming release that requires patching.
- Action Required: sent when Oracle Integration administrators are expected to take an action and perform a step within a certain timeframe.

Enabling Oracle Integration Announcements in the OCI Console

You will have to define two policies in the tenancy by logging into the OCI Console. Once these policies are applied in the OCI Console, Oracle Integration users will be able to view announcements related to the Oracle Integration service within the Oracle Integration console.

View Oracle Integration Announcements on the Home Page

Urgent announcements related to Oracle Integration appear in a banner, as shown in the picture below.

You can click the bell icon to launch a dialog that provides a tabular view of all the announcements.

Upcoming announcements are represented with a blue circle icon, and users can click an announcement to view its details. You can go through the list of announcements and mark them as read or unread. You can click any announcement in the list to view its details, as shown in the picture below.

Summary

From the November 2020 release onwards, users can view all announcements related to Oracle Integration on the Home Page of the Oracle Integration console. This functionality is available only on Oracle Integration Generation 2 instances.



How to Invoke an OCI Function from the Oracle Integration Cloud

Oracle Cloud Infrastructure (OCI) offers a great set of services that can be very useful in combination with Oracle Integration Cloud (OIC) for a wide variety of use cases. Services like Object Storage, the Oracle Streaming Service, and Functions can easily be accessed from OIC. The OCI ecosystem has a rich set of APIs that can be used for that purpose: https://docs.cloud.oracle.com/en-us/iaas/api/#/

In this post I am going to show how to deploy a Function and how to invoke it from OIC.

What are Functions?

"Oracle Functions is based on Fn Project. Fn Project is an open source, container native, serverless platform that can be run anywhere. It's easy to use, supports every programming language, and is extensible and performant."

How to Start

There is extensive documentation, plus plenty of blogs and tutorials that provide a comprehensive deep dive into OCI, Docker images, Functions, and so on. This being an Integration blog, we will focus on the OIC side of things while pointing to some tutorials on how to set up an OCI Function. If you already have a deployed Function, you can skip the next section.

Pushing an Image to Oracle Cloud Infrastructure Registry (OCIR)

A Function is stored as a Docker image in OCIR, so we need to push our image before invoking a Function. If you are new to this, please follow this TUTORIAL! It is worth mentioning that the tutorial is easier if you use the Oracle Cloud Shell; that way there is no need to install anything. At the end of the tutorial you will have pulled the helloworld image from DockerHub, tagged it, and pushed it to Oracle Cloud Infrastructure Registry. Below is my OCIR list of repositories.

Documentation available here!

Create & Deploy a Function

To create and deploy a Function, please follow this TUTORIAL! At the end you will have a deployed Function that can be invoked via several different methods; we are interested in HTTP requests.

Invoke the Function (REST)

How can we test the REST API for the Function? Normally I use POSTMAN to test all my REST requests before jumping into OIC. This time, however, I took a different approach and followed this excellent guide that uses cURL: https://www.ateam-oracle.com/oracle-cloud-infrastructure-oci-rest-call-walkthrough-with-curl It provides a bash script for which you only need to configure the security parameters and the REST endpoint. The Function returns the expected concatenation: Hello + "my input message".

How to Invoke the Function from OIC

Create a REST Connection

You will need to collect some information in order to fill in the required parameters:

- Connection Type: REST API Base URL
- Connection URL: https://xxxxxxxx.eu-frankfurt-1.functions.oci.oraclecloud.com, where xxxxxxxx is the Function's unique ID. The easy way is to get this from the console, in the Function details. Adjust the region accordingly.
- Security: OCI Signature Version 1
- Tenancy OCID: Open the navigation menu; under Governance and Administration, go to Administration and click Tenancy Details. The tenancy OCID is shown under Tenancy Information. Click Copy to copy it to your clipboard.
- User OCID: Open the Profile menu and click User Settings. The user OCID is shown under User Information. Click Copy to copy it to your clipboard.
- Private Key: https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#two
- Fingerprint: https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#three
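If you would like to sanity-check the endpoint and credentials outside OIC before creating the connection, the OCI Python SDK's request signer can be combined with plain requests, as an alternative to the cURL script above. This is a sketch under the assumption that you have the OCI SDK installed; the OCIDs, fingerprint, key path, region, and function OCID are placeholders you must replace.

```python
import requests
from oci.signer import Signer

# Placeholders: the same values you collect for the OIC REST connection.
signer = Signer(
    tenancy="ocid1.tenancy.oc1..aaaa...",
    user="ocid1.user.oc1..aaaa...",
    fingerprint="20:3b:97:13:55:1c:...",
    private_key_file_location="~/.oci/oci_api_key.pem",
)

# Base URL from the function details page, plus the invoke URI
# from the OCI Functions API reference.
base_url = "https://xxxxxxxx.eu-frankfurt-1.functions.oci.oraclecloud.com"
function_ocid = "ocid1.fnfunc.oc1.eu-frankfurt-1.aaaa..."  # the function OCID
url = f"{base_url}/20181201/functions/{function_ocid}/actions/invoke"

# The Hello-World function expects a plain-text body.
resp = requests.post(
    url,
    data="my input message",
    headers={"Content-Type": "text/plain"},
    auth=signer,  # the signer signs the request per OCI Signature Version 1
)
print(resp.status_code, resp.text)  # expect: Hello, my input message!
```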
Create an Integration

For this demo use case I will create an App Driven Orchestration with:

- a REST Trigger
- a REST InvokeFunction
- a return of the Function response

This integration starts with a REST Trigger that takes one parameter as input. The Hello-World Function takes one parameter as input and returns the concatenation Hello, <input message>! We then map the Function response to the output. I will focus on the REST InvokeFunction activity.

REST Function Invoke

First we look at the Oracle OCI API reference, where we can see the URI and the required parameters. The functionId in the POST URI is the Function OCID and can be found in the Oracle Cloud Console. This is not the same as the Function unique identifier from the REST connection properties; here we need the OCID.

POST /20181201/functions/{functionId}/actions/invoke

In a real implementation I would obviously not hardcode the URI, but use a variable for it instead. This is the lazy approach 😊

We check the boxes for Request Payload and Receive Response. At this point you need to tell the REST adapter what kind of payload and media type your Function expects. The Hello-World Function expects plain text as input. The REST adapter lets you easily choose XML or JSON payload formats, but for plain text we need to choose Binary as the payload format and then set the media type to "text/plain". We do exactly the same thing for the response.

Since the Binary payload format expects a stream reference type, we need an additional step in order to map this properly. This article provides a very clear step-by-step guide: https://www.ateam-oracle.com/invoke-rest-endpoint-with-plain-text-payload Basically, we need to perform some encoding/decoding on the input string before passing it along to the REST Function invoke:

decodeBase64ToReference( encodeBase64( input ) )

Putting this to the test is quite straightforward; the embedded REST testing capabilities in OIC really make life easier. Please note that the first time the Function runs, the image is pulled from the registry and executed as a container. Subsequent requests are sent to the same container. After some idle time, the container is removed. More details on the invocation here. And we can verify the successful invocation of the Function in the OCI Console logs.

Conclusion

Invoking a Function is just one of the many benefits of working alongside the OCI services. It allows OIC to invoke custom code deployed in this serverless framework, thus expanding its range of capabilities.



Oracle Integration - Adapter Enhancements to Non Oracle Applications

Overview

Oracle Integration (OIC) has a rich set of security capabilities to enable our customers to connect applications and technologies in a secure manner. We continue to enhance Oracle Integration with additional features on the existing application adapters. In the November 2020 release, Oracle Integration added the following capabilities to third-party application adapters:

- Salesforce Adapter: enablement for Salesforce Government Cloud customers
- ServiceNow Adapter: graceful downgrade of user experience
- PayPal Adapter: inbound support and new modules
- Shopify Adapter: new modules

Salesforce Adapter: Enablement for Salesforce Government Cloud Customers

In the recent past, the Salesforce Adapter was enhanced to eliminate the need to upload the enterprise WSDL on the Connections page. With that enhancement, customers using Salesforce Government Cloud were unable to create new connections due to login considerations, as mentioned in this help document. With the November release, the Salesforce Adapter supports creating a Salesforce.com connection that can integrate with Salesforce Government Cloud. You simply need to select Government as the target Salesforce instance type and provide your custom domain name and API version on the Connections page.

So let's start with a quick tour of the new field on the Connection page:

Custom domain: this custom domain name is required only for users who use Government Cloud or a custom domain to log in to their Salesforce account. For non-Government Salesforce instance types (that is, Production or Sandbox environments), the custom domain field is optional; this maintains backward compatibility, and existing connections continue to work unchanged. You can find your custom domain name by following the instructions in the Salesforce developer guide.

ServiceNow Adapter: Graceful Downgrade of User Experience

The ServiceNow Adapter enables you to create an integration with ServiceNow in Oracle Integration. It has been enhanced to run with as few privileges as possible. With the November release update, the integration user needs only minimal access to the tables in order to configure the ServiceNow Adapter as a trigger or an invoke connection in an integration. A ServiceNow Adapter connection can be created with access to the following tables.

For trigger connections:
- sys_db_object
- sys_soap_message
- sys_soap_message_function
- sys_script

For invoke connections:
- sys_db_object

However, with these permissions only the modules (not the applications) are displayed by the adapter in the wizard interface. So let's start with a quick tour of the new wizard interface.

Trigger Application page:

Invoke Operations page:

PayPal Adapter Inbound Support and New Modules

PayPal is a global payment provider that enables vendors to receive payments digitally from their customers and make payments to their suppliers. The PayPal Adapter enables you to create an integration with a PayPal application in Oracle Integration. With the November release, the PayPal Adapter provides inbound support; that is, it can be configured as a trigger connection in an integration for events related to the Billing, Invoicing, Payment, Checkout, and Catalog modules.

So let's start with a quick tour of the wizard interface for a trigger connection:

Basic Info page

Trigger Operations page

Summary page

Furthermore, in this release the PayPal Adapter has been enhanced with support for the Transaction Search, Subscriptions, Add Tracking, and Invoicing modules.

Invoke Operations page

Shopify Adapter New Modules

The Shopify Adapter enables you to create an integration with a Shopify application in Oracle Integration. With the November release update, the Shopify Adapter extends its capabilities to provide invoke (target) connection support for performing various types of operations against objects from the Plus (Gift Cards) and Shopify Payments modules, and trigger (source) connection support for performing various types of actions against events from the FulfillmentEvents and Fulfillments modules.

Summary

The features described here extend Oracle Integration by adding capabilities to the Salesforce, ServiceNow, PayPal, and Shopify adapters. Oracle Integration continues to invest heavily in adapter security features, to give our customers the ability to connect applications and technologies in the most secure manner. We hope you will be able to take advantage of these new features.


Security Improvements for Database & FTP Adapters

Overview

Oracle Integration (OIC) has a rich set of security capabilities to enable our customers to connect applications and technologies in a secure manner. We continue to enhance Oracle Integration with additional security settings and functionality. In the November 2020 release, Oracle Integration offers new security-related functionality for the Database and FTP adapters. The features discussed here include:

- Integration with ATP Serverless configured with a Private Endpoint
- Support for wallet-based authentication with privately hosted databases
- Automatic database wallet and password refresh
- Message payload security capabilities with privately hosted SFTP servers

Two of these features involve use of the Oracle Integration Connectivity Agent. Using the connectivity agent, you can create hybrid integrations and exchange messages between applications in private or on-premises networks and Oracle Integration.

1. Integration with ATP Serverless configured with a Private Endpoint

Autonomous Database (ATP) is becoming more widely adopted, including its use within integration flows in OIC. When configuring your Autonomous Database, you can specify that it use a private endpoint within a VCN in your tenancy. This allows you to keep all traffic to and from your Autonomous Database off the public internet. When using the ATP adapter in Oracle Integration to connect to an ATP instance using a private endpoint, you need to set up the connectivity agent.

In the connection details for the ATP adapter, there are two options for security: JDBC Basic Authentication and JDBC over SSL. When selecting JDBC over SSL, you are prompted to enter the wallet and wallet password. Prior releases of Oracle Integration did not allow you to use the JDBC over SSL (wallet) option with the connectivity agent. In addition, the username-token policy is not supported by ATP Serverless, which meant there was no option for integrating OIC with ATP-S through the connectivity agent. This enhancement now offers support for connecting to ATP-S configured with a Private Endpoint.

The figure shows the Connections page for ATP. The wallet and password are specified along with the DB service username and password. A connectivity agent group is specified for connecting to ATP-S with a Private Endpoint. In addition, the connectivity agent has been downloaded and deployed in the network that has access to the ATP-S instance.

2. Support for wallet-based authentication with privately hosted databases

In prior releases, when connecting through the connectivity agent to a privately hosted database using one of:

- Autonomous Database - Dedicated (Oracle Autonomous Transaction Processing - Dedicated, Oracle Autonomous Data Warehouse - Dedicated)
- Oracle Database Cloud Service

the only supported security option was JDBC Basic Authentication. In the November release, you can also specify JDBC over SSL. This allows you to connect to a privately hosted cloud database and leverage wallet-based authentication. Note that this is similar to #1 above for ATP Serverless; the difference is that JDBC Basic Authentication is not supported by the ATP Serverless database, whereas Basic Authentication is supported for ATP Dedicated and Database Cloud Service.

3. Automatic database wallet and password refresh

Oracle Wallet can be used to securely store your database credentials. Wallet rotation provides the ability to create a new wallet and invalidate the existing one. You may want to rotate wallets for the following reasons:

- Your organization's policies require regular client certification key rotation.
- A client certification key, or a set of keys, is suspected to be compromised.

When the wallet is rotated or expires (or, if using Basic Auth, when the database password is changed), a corresponding change is required in the Oracle Integration connection for that database. Previously, once the connection was modified in Oracle Integration, you then had to deactivate and reactivate the integrations that use that connection. This could mean hundreds of integrations needing reactivation, which is very impractical. With our November 2020 release, it is no longer necessary to deactivate and reactivate the integrations. Instead, when the integration detects at runtime that the credentials have been updated, a session refresh occurs and fetches the new credentials. You simply update the credentials in your OIC connection, and there is no longer a need to reactivate the integrations that use it.

Note that it is advisable to update the OIC connection with the new credentials soon after updating the credentials in the database, and to make these changes during a period when you don't expect integrations to run. This minimizes the possibility of integrations running after the credentials have been changed in the database but before they are applied to your connection in OIC.

4. Message payload security capabilities with privately hosted SFTP servers

When integrating with an FTP server that is hosted on-premises behind a firewall, you configure a connectivity agent in Oracle Integration in order to establish connectivity. In prior releases, certain security-related options were not available when using the FTP adapter with the connectivity agent, in particular:

- Encrypting the message payload
- Decrypting the message payload
- Signing the message payload
- Verifying a signed message payload

The screenshot here shows the settings for encrypting and decrypting the message payload using PGP; there are similar settings for signing and verification. These settings can now be used with the connectivity agent for connecting to privately hosted SFTP servers.
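As a conceptual illustration of what the new PGP options do (the FTP adapter performs all of this declaratively; no code is needed in OIC), here is a hedged Python sketch using the python-gnupg package. The GnuPG home, recipient, and passphrase are placeholders, and signing/verification would use the analogous sign and verify calls.

```python
import gnupg

# Placeholder GnuPG home containing the recipient's public key and
# our own private key for decryption.
gpg = gnupg.GPG(gnupghome="/home/demo/.gnupg")

payload = "order-id,amount\n1001,49.90\n"

# Encrypt the payload for the SFTP consumer, as the FTP adapter's
# encrypt option does before the file is written to the server.
encrypted = gpg.encrypt(payload, recipients=["partner@example.com"])
assert encrypted.ok, encrypted.status

# On the receiving side, decrypt the file after download, as the
# adapter's decrypt option does.
decrypted = gpg.decrypt(str(encrypted), passphrase="demo-passphrase")
assert decrypted.ok, decrypted.status
print(decrypted.data.decode())
```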
Summary

The features described here extend Oracle Integration's security capabilities by providing additional options in the Database (ATP, ADW, Database Cloud Service) and FTP adapters. Oracle Integration continues to invest heavily in adapter security features, to give our customers the ability to connect applications and technologies in the most secure manner. We hope you will be able to take advantage of these new features for your database and FTP server connectivity.



November 2020 Update

It is time for the November quarterly update to Oracle Integration. Lilly, the Oracle Integration mascot, is looking forward to it. We have lots of exciting new features and improvements to share with you. Note that testing is still underway for these features and, although unlikely, it is possible that some will not meet our quality standard and will be deferred to a later release.

Announcements & Update Windows

Currently, tenant administrators are notified of OIC Gen 2 updates via notifications in the OCI console. Unfortunately, most OIC users are not tenant admins; they go straight to their OIC instance and never see the notifications. To make sure users of OIC Gen 2 know when updates are coming, we are adding update notifications to the OIC console. We have also added the ability for customers to assign their Gen 2 instances to one of two update windows. You can read about it in Choosing Your Update Window.

Developer Productivity Enhancements

We are making the following improvements for developers:

- Local Invoke Enhancements
- Configurator Enhancements
- Recover Unsaved Changes from Edit
- Improved Notification Activity Setup
- Improved Scheduled Flow Diagnostics

Prebuilt Connectivity Enhancements

We have a lot of enhancements, both to existing adapters and new adapters:

- Security Improvements for Database & FTP Adapters
- Oracle Application Adapters Improvements
- File Server Improvements
- Improvements for 3rd Party Application Adapters (PayPal, Shopify, Salesforce, ServiceNow)
- Improvements in Technical Adapters (REST, Kafka)

Real Time Visibility Enhancements

We are making it easier for you to track what is happening at the business and the technical level:

- Embed Insight Dashboards into Applications
- More Insight Milestone Mapping Options
- Allow an Insight Model to Use Different Versions of Mapped Integrations
- Consumption Figures Now Include Visual Builder

Operations, Reliability & Scalability Enhancements

To make OIC run faster and support the most demanding environments, we have made these improvements:

- Faster Activation of Integrations
- Business Rules Support for Visual Builder
- Customer Defined Hostname
- Instructions for Customer Managed DR (Shortly After November Release)

Summary

There are a lot of new features coming in this quarterly release. If you want to flag instances for upgrade in the first wave, you need to do so by October 26. Keep an eye out for the update notice in the OCI console; in the future you will also be able to view it from the OIC console. Enjoy the future!



Oracle Integration November 2020 update for Oracle Applications Adapters

Overview

Oracle Integration is one of the most robust integration platforms for integrating with Oracle Applications, whether Fusion Applications such as ERP Cloud and Engagement Cloud, or NetSuite and Service Cloud. The Oracle Integration November 2020 update continues to build on this momentum by providing deep functional, simplified, and differentiating features, enabling customers to consume business value from Oracle Applications at a rapid pace, in step with the Oracle Applications updates. In the November 2020 update, we are pleased to announce the following updates to the Oracle Application adapters.

Fusion Application Adapters Improvements

The Fusion application adapters are among the key strategic adapters for the Oracle Integration platform, and Oracle Integration continues to update them with features that meet customer needs and improve the overall experience. One such improvement in the November 2020 update is cherry-picking child REST resources. This enables the integration architect to pick and choose the child resources needed for their business. Without this feature, the integration architect would end up with all the child resources in the integration artifacts, resulting in an unmanageable number of elements to traverse in the mapper as well as unnecessary load at runtime. Let's now take a closer look at this new feature.

Cherry-picking child REST resources

This feature allows the integration architect to pick and choose the child resources to include in the integration flow based on their business needs. Consider an example: you want to query supplier information and fetch the contacts and sites information alone. The supplier REST resource includes a wide variety of child resources, from addresses, contacts, and currency lookups to products and services information; to be precise, there are 15 child resources for the supplier resource.

Following the example, say you are configuring the Suppliers business resource in the ERP Cloud Adapter. When you select the Suppliers business resource, the wizard now prompts you to cherry-pick the desired child resources. To select a child resource, simply select it in the left-hand list box and click the highlighted button to move it to the selected child resources list. Please note that you can select a maximum of 10 child resources for a particular invoke.

Once you select the desired child resources, the ERP Cloud adapter includes only those child resources in the response of the business resource. This not only improves the experience of mapping the response in further processing of the integration flow, but also reduces the memory footprint at runtime and the payload exchanged over the wire, improving both the design-time and runtime experience. To give you a sense of the impact, see the difference in the mapper below: the left-hand mapper shows the current functionality, and the right-hand mapper shows the November release with cherry-picked child resources. As you can see, the highlighted fields are eliminated in the right-hand mapper, improving the mapping experience, reducing the payload fetched at runtime, and improving the overall performance of the integration flow.
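For context on why this reduces payloads: Oracle Fusion Applications REST APIs let a caller control which child resources are expanded via the expand query parameter, and the adapter's cherry-picking achieves a similar effect declaratively. The sketch below is illustrative only; the host, REST version, credentials, and child-resource names are placeholders, and exactly which parameters the adapter uses internally is an assumption.

```python
import requests

# Placeholders: Fusion Applications host, REST version, and credentials.
BASE = "https://fa-host.example.com/fscmRestApi/resources/11.13.18.05"
AUTH = ("integration.user", "secret")

# Expanding every child resource of a supplier: a large payload and a
# large mapper tree (conceptually, the pre-November behaviour).
everything = requests.get(
    f"{BASE}/suppliers", params={"expand": "all"}, auth=AUTH
)

# Expanding only the two child resources we actually need, analogous
# to cherry-picking contacts and sites in the adapter wizard.
just_two = requests.get(
    f"{BASE}/suppliers", params={"expand": "contacts,sites"}, auth=AUTH
)

print(len(everything.content), len(just_two.content))
```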
NetSuite Application Adapter Improvements

Oracle Integration continues to invest in the NetSuite Adapter, further spearheading its robust capabilities for integrating with NetSuite applications, meeting customer needs, simplifying access, and thereby addressing wider use cases. In the November update, the NetSuite adapter gets an enhanced concurrency limit, which means customers get a higher concurrency limit when they integrate with NetSuite using Oracle Integration. Currently, customers have a limit on invoking integration APIs concurrently based on the license they have purchased. This enhancement raises the concurrency limit to 100, irrespective of the license they have subscribed to for the NetSuite applications.

The NetSuite adapter also has a couple of functional enhancements in the November update: 1) support for the Initialize/InitializeList operations in the adapter, and 2) improved support for item and transaction records in NetSuite applications. The Initialize/InitializeList operations give the integration architect the ability to pre-populate a specific record with values from a related reference record, bringing efficiency in building the flow and consistency in execution.

Enhanced NetSuite concurrency limit

This is of huge value to our Oracle customers and demonstrates how Oracle rewards loyal customers with enhanced value for their investment and trust in choosing multiple Oracle services. The enhanced concurrency limit applies only to invokes performed through connections created using the TBA Authorization flow; please note that the TBA Authorization flow is a new security policy introduced in the NetSuite adapter. The TBA Authorization policy brings multiple benefits to customers: 1) it is more secure, as the user does not have to enter credentials in Oracle Integration; 2) customers get the enhanced concurrency limit whenever invokes are performed using such connections; and 3) it is really simple to configure the connection, as all the user has to do is click Provide Consent on the connection page and follow the prompts.

Please note: the NetSuite Adapter TBA Authorization flow security policy is not supported with the new customer-defined hostname feature for the Oracle Integration service instance.

Let's now see in detail how to configure the TBA Authorization security policy in the NetSuite Adapter:

Step 1. On the NetSuite Adapter connection page, you will find the new security policy "TBA Authorization Flow", which requires the user to click the Provide Consent button, as highlighted below.

Step 2. When the user clicks the Provide Consent button, Oracle Integration opens a new tab, where the user enters the Oracle Integration credentials and clicks Log In.

Step 3. Oracle Integration navigates to the NetSuite page, where the user enters the NetSuite credentials.

Step 4. Select the desired role, answer any additional security question, and click Submit or Allow as prompted by the NetSuite application.

Step 5. Once you select the role and click Allow, NetSuite exchanges a token with Oracle Integration, and Oracle Integration reports the success of the authorization as shown below.

Step 6. You can now test the connection on the connection page; it should successfully connect to the NetSuite application using the token obtained via the TBA Authorization flow.

Support for Initialize / InitializeList operations

This is a really powerful feature for integration architects, as they do not need to populate records from scratch; instead, they can pre-populate records from reference objects. As an example, suppose you are designing a flow to initiate a cash refund. A cash refund is issued against a cash sale, and the NetSuite adapter allows you to initialize the cash refund record from the cash sale, where the cash sale is a reference object whose details, such as createdfrom, line items, and amount, are pre-populated. Once the target record is initialized from the reference, the integration architect only needs to update the required attributes and perform the transaction.

Let's now look at the experience of configuring the Initialize / InitializeList operation in the NetSuite Adapter:

Step 1. The Initialize operation has been added under the "Miscellaneous" operation type.

Step 2. Select the desired transaction type; this is the record type you want to initialize. As discussed above, I am selecting CashRefund from the list.

Step 3. The NetSuite adapter populates the initialize reference objects based on the selected transaction type; in our example it shows CashSale and ReturnAuthorization. The integration architect can also turn on the checkbox for the system to list all the reference types, which is helpful for scenarios where the listed reference types aren't sufficient. If you choose this option, please make sure to select compatible transaction and reference types; selecting incompatible types will lead to runtime exceptions.

Step 4. Now that you have defined the Initialize CashRefund invoke action, you need to map the CashSale internal ID so that the flow can fetch the cash sale record for initializing the cash refund record. Here is an example of the mapping to the Initialize invoke that initializes the cash refund record.

Step 5. Now that you have initialized the CashRefund record, you can use it to perform the desired CRUD operations. Here is an example of the mapping between the initialized record and the basic CashRefund record.

Summary

This concludes the summary of enhancements made to the Oracle Application adapters for the November 2020 release. Oracle Integration continues to invest heavily in the Oracle application adapters and is committed to providing the best possible experience to Oracle application customers. I hope you are as excited as I am to see these features live in your Oracle Integration instance and to use them to solve your business needs.



How to use the OCI Object Storage from the Oracle Integration Cloud

Oracle Cloud Infrastructure (OCI) offers a great set of services that can be very useful in combination with the Oracle Integration Cloud (OIC) for a wide variety of use cases. Services like Object Storage, the Oracle Streaming Service, and Functions can easily be accessed from OIC. The OCI ecosystem has a rich set of APIs that can be used for that purpose: https://docs.cloud.oracle.com/en-us/iaas/api/#/

In this post I am going to show how to create a file in Oracle Object Storage.

What is the Object Storage?

"The Oracle Cloud Infrastructure Object Storage service is an internet-scale, high-performance storage platform that offers reliable and cost-efficient data durability. The Object Storage service can store an unlimited amount of unstructured data of any content type, including analytic data and rich content, like images and videos."

Traditional integration use cases rely heavily on file servers and SFTP access as a harbour for files. Object Storage and similar services are quickly replacing them; we see more and more customers leveraging the benefits of these services for handling all the different types of content that need to be stored and moved.

How to Start

Before jumping to the Integration Cloud, we need to create an Object Storage bucket, and we need API signing keys.

Create an Object Storage Bucket

Some important definitions:

- Bucket: a logical container for storing objects.
- Namespace: a logical entity that serves as a top-level container for all buckets and objects, allowing you to control bucket naming within your tenancy. Each Oracle Cloud Infrastructure tenant is assigned one unique and uneditable Object Storage namespace that spans all compartments within a region.
- Object: any type of data, regardless of content type, is stored as an object. The object is composed of the object itself and metadata about the object. Each object is stored in a bucket.

In the Oracle Console, go to Object Storage -> Create Bucket. Provide a name; you can leave everything else as default. As simple as it gets, you now have a bucket. You can also see the Namespace; please make a note of it, as it will be required in the next steps.

Create a Set of API Signing Keys

In order to use any of OCI's APIs, we need an API signing key: https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm In the Oracle Console, go to Identity -> Users -> User Details -> API Keys. You need to generate a pair of keys (see here for instructions for Linux/Mac and Windows), then upload your public key. And now you are ready to jump into OIC!

Create a REST Connection

You will need to collect some information in order to fill in the required parameters:

- Connection Type: REST API Base URL
- Connection URL: https://objectstorage.<region>.oraclecloud.com
- Security: OCI Signature Version 1
- Tenancy OCID: https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five
- User OCID: https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#five
- Private Key: the private key from the previous step
- Fingerprint: https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm#three
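Before building the integration, it can be handy to verify the namespace, bucket, and signing key with a quick script outside OIC. This is a hedged sketch using the OCI Python SDK's request signer with plain requests; the region, namespace, bucket, and key details are placeholders you must replace.

```python
import requests
from oci.signer import Signer

# Placeholders: the same values used for the OIC REST connection.
signer = Signer(
    tenancy="ocid1.tenancy.oc1..aaaa...",
    user="ocid1.user.oc1..aaaa...",
    fingerprint="20:3b:97:13:55:1c:...",
    private_key_file_location="~/.oci/oci_api_key.pem",
)

region = "eu-frankfurt-1"
namespace = "mynamespace"  # shown on the bucket details page
bucket = "MyBucket"
object_name = "hello.txt"

# PutObject URI from the Object Storage API reference:
#   PUT /n/{namespaceName}/b/{bucketName}/o/{objectName}
url = (
    f"https://objectstorage.{region}.oraclecloud.com"
    f"/n/{namespace}/b/{bucket}/o/{object_name}"
)

resp = requests.put(
    url,
    data=b"hello from outside OIC",
    headers={"Content-Type": "application/octet-stream"},
    auth=signer,  # signs the request per OCI Signature Version 1
)
print(resp.status_code)  # 200 means the object was created
```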
The response requires a payload format - JSON Sample - with a simple response definition using Result as the parameter to be returned.

Then we drag in the FTP connection with the Read File operation, where we pass the input directory and the file name. I am using the OIC FTP Server - for more information please check these two posts:

https://blogs.oracle.com/integration/embedded-file-server-sftp-in-oracle-integration
https://blogs.oracle.com/integration/leveraging-oracle-integration-file-server-for-file-based-integrations-v2

Finally, the interesting part: we drag the REST OCI connection (created previously) onto the canvas. How do we know the URI for this? Well, we need to read the OCI API documentation. The method I will use is PutObject, which has the following URI (a standalone sketch of the same call appears at the end of this post):

/n/{namespaceName}/b/{bucketName}/o/{objectName}

In a real-life scenario I would not pass any of these values directly as I am doing here; they would come from an input parameter or a lookup instead. The other relevant part is the configuration of the request payload. We select "Send Attachments in request" with a binary format and application/octet-stream as the media type.

The last piece of this puzzle is mapping the file reference from the FTP response to the input of the request that creates a file in the Object Storage. We could potentially read the contents of the file and perform some transformation, but here we simply pass it as a reference (much easier). The last mapping required is the one for the trigger's REST response. The integration looks like this:

After activating the integration and using the embedded test functionality (for REST triggers), we can easily execute it and verify each step with the Activity Stream. And that was it - the file was created in the Object Storage! Easy, yet powerful!
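For reference, the same PutObject call can also be made outside OIC with the OCI Python SDK. The following is a minimal sketch, assuming a configured ~/.oci/config profile; the bucket name and file name are placeholders for the values used above:

# A minimal sketch of the PutObject call used above, via the OCI Python SDK.
# Assumes a valid ~/.oci/config profile; bucket and object names are placeholders.
import oci

config = oci.config.from_file()  # reads tenancy/user OCIDs, key file, fingerprint
client = oci.object_storage.ObjectStorageClient(config)

namespace = client.get_namespace().data          # the tenancy's Object Storage namespace
with open("invoice.csv", "rb") as f:
    client.put_object(
        namespace_name=namespace,
        bucket_name="my-bucket",                 # the bucket created earlier
        object_name="invoice.csv",               # maps to {objectName} in the URI
        put_object_body=f,
        content_type="application/octet-stream",
    )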


Announcing Oracle SOA Suite on Containers & Kubernetes for Production Workloads

The Oracle SOA Suite Team is thrilled to announce the general availability of Oracle SOA Suite on Containers and Kubernetes for production environments. This is the culmination of a tremendous effort backed by customer feedback during the early access phase.

Scope

As part of the first release, version 20.3.3, we are making the following available:

Container images for Oracle SOA Suite 12.2.1.4, including Oracle Service Bus, on MOS patch 31966923 (Production) and container-registry.oracle.com (Dev & Test)
Sample scripts on GitHub
Documentation covering:
- Installation and configuration of a new SOA instance
- Monitoring and management of the instance, configuration of load balancers, SSL certificates, etc.
- Patching and upgrading the instance
- Steps for configuring a development environment to work with the instance

The initial release certifies the domain types of the following components:

Oracle SOA
Oracle Service Bus
Oracle Enterprise Scheduler

Additional component certification is on the roadmap.

Objective

With the growing adoption of containers and Kubernetes in datacenters, this effort targets:

Supporting Oracle SOA Suite and Oracle Service Bus containers in production environments
Enabling datacenter consolidation and modernization efforts
Enabling SOA Suite's co-existence with cloud-native applications
Container-based on-premise deployments

Features

The salient features of this release are:

Oracle SOA Suite 12.2.1.4 based container images
WebLogic Kubernetes Operator 3.0.1 for deployment on a Kubernetes cluster
Certification on the following:
- Oracle Linux 7 (UL6+) and Red Hat Enterprise Linux 7 (UL3+)*
- Kubernetes 1.14.8+, 1.15.7+, 1.16.0+, 1.17.0+, and 1.18.0+
- Docker 18.09.1ce+, 19.03.1+ or CRI-O 1.14.7
- Flannel networking v0.9.1-amd64 or later
- Helm 3.1.3+
- Oracle Database 12c or above for RCU
Publishing of WebLogic operator logs and WebLogic Server logs to Elasticsearch, with interaction through Kibana
Support for monitoring the Oracle SOA Suite instance using Prometheus and Grafana
Ability to scale Oracle SOA Suite domains by starting and stopping Managed Servers on demand, or by integrating with a REST API
Support for multiple load balancers, such as Traefik, Voyager, Apache, and NGINX
Patching framework leveraging the Oracle WebLogic Image Tool

Additional information is available here: Supported Virtualization and Partitioning Technologies for Oracle Fusion Middleware

* Standalone Kubernetes support only.

As part of the release, we are making deployment scripts and supporting files available on GitHub. They can be accessed here:

Oracle SOA Suite on Kubernetes documentation - https://oracle.github.io/fmw-kubernetes/soa-domains/
Oracle SOA Suite and Oracle Service Bus Quick Start Guide - https://oracle.github.io/fmw-kubernetes/soa-domains/appendix/quickstart-deployment-on-prem/
WebLogic Kubernetes Operator documentation - https://oracle.github.io/weblogic-kubernetes-operator/
WebLogic Image Tool - https://github.com/oracle/weblogic-image-tool

Feedback

We continue to solicit customer feedback. Please provide feedback using our Slack channel: Host - oracle-weblogic.slack.com; Channel - #soa-k8s


A Simple Guide to Oracle HCM Data Loader (HDL) Job Support in Oracle HCM Cloud Adapter

Oracle Integration continues to simplify integrations with Oracle HCM Cloud by adding native support for more and more Oracle HCM integration touch points. Oracle Integration now supports Oracle HCM Data Loader (HDL) jobs, a powerful tool for bulk loading data through integrations. Using Oracle HDL, you can load business objects for most Oracle HCM Cloud products into Oracle HCM Cloud. For example, you can load new hires from Oracle Talent Acquisition Cloud (Taleo EE) as workers into Oracle HCM Cloud using an Oracle HDL job. To learn more about Oracle HDL jobs, refer to this blog.

The Oracle HCM Cloud Adapter in Oracle Integration simplifies the way an integration specialist invokes an Oracle HDL job process and monitors the status of the job. The Oracle HDL job loads data into business objects from delimited data (.dat) files. An integration architect can generate the delimited data files in Oracle Integration using business object template files provided by Oracle HCM Cloud. Because the business object template file contains every attribute, including flex fields, it can be further simplified and personalized by removing the excess attributes. The business object template file must be associated with a stage file action in Oracle Integration to generate the delimited data file. This greatly simplifies generation of delimited data files for Oracle HCM business objects through Oracle Integration. To learn more about how to obtain the business object template file from Oracle HCM Cloud and use it for delimited data file generation, refer to this post on the Oracle Cloud Customer Connect portal.

At a high level, an Oracle HDL job pattern can be implemented in three steps:

1) Generate the HCM HDL-compliant delimited data (.dat) file.
2) Submit the Oracle HDL job.
3) Monitor the job status until completion.

The Oracle HDL job is a bulk data load process that runs in batch mode. To support this, the Oracle HCM Cloud Adapter supports two Oracle HDL operations:

Submit the Oracle HDL job: The Oracle HCM Cloud Adapter uploads a ZIP file containing a .dat file to Oracle Universal Content Management (UCM) and invokes the Oracle HDL importAndLoad operation. This operation returns the Oracle HDL process ID. Note that the ZIP file can contain multiple business object .dat files, as supported by the Oracle HDL job.
Get Oracle HDL process status: The Oracle HCM Cloud Adapter invokes the HDL getDataSetStatus operation to get the status of the specific Oracle HDL process.

Design-Time Flow for the Oracle HDL Process

This part consists of the following steps:

Create an integration that reads source files staged on an FTP server, containing information that needs to be loaded into Oracle HCM Cloud using an HDL job. The business object we are using here is Worker. We generate the worker delimited data file using the business object template file obtained from Oracle HCM Cloud.

Create a stage file action using the Write File operation; specify the file name as "Worker.dat" and the output directory as "WorkerOutput". Click Next.

Select the XML Schema (XSD) document option in the wizard, as we will be providing the business object template files. Click Browse and upload the business object template file obtained from Oracle HCM Cloud. Select the "WorkerFileData" schema element; this is the main element for the delimited data file.

Please refer to the Oracle Cloud Customer Connect post mentioned above for steps on retrieving the Oracle HCM Cloud business objects in the respective schema.
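To make the target format concrete before wiring up the mapping, here is a purely illustrative (and heavily trimmed) Worker.dat fragment. HDL files pair a METADATA line naming the columns with MERGE lines carrying the data; the exact attribute set depends on the business object template file downloaded from your own Oracle HCM Cloud instance, so treat every column and value below as a placeholder:

METADATA|Worker|EffectiveStartDate|EffectiveEndDate|PersonNumber|ActionCode
MERGE|Worker|2020/01/01|4712/12/31|300000123456789|HIRE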
Click Done to save the stage file invoke action. Now let's map the elements from the source CSV file to the business object template file. Use the mapper to map the source file elements to the target elements depicted in the schema. The left-hand Sources panel shows all of the available values and fields that can be used in this mapping. The Target panel on the right illustrates the Write hierarchy; this is a representation of the basic structure of a Worker.dat file.

For the label fields on the Target panel, enter the corresponding header title. For example, under the WorkerLabel parent, EffectiveStartDateLabel will have the value EffectiveStartDate. These values correspond to the header columns within the final .dat file being generated. For the data sections of the Target panel, map the values from the Sources panel or enter default values. Additionally, it is necessary to map the repeating element recordName value to the repeating element of the parent-specific section values. In our example it is NewHires that needs to be mapped to parent data sections such as "Worker", "PersonLegislativeData", "PersonName", and so on.

Create a stage file action using the ZIP File operation to generate the ZIP file to send to the Submit an HCM Data Loader job operation.

Configure the Oracle HCM Cloud Adapter in the Adapter Endpoint Configuration Wizard to use the Submit an HCM Data Loader job operation:

Select the Import Bulk Data using HCM Data Loader (HDL) action.
Select the Submit an HCM Data Loader job operation.
Select the security group and doc account configuration parameters for submitting the Oracle HDL job.
Map the OIC file reference. If you want to send additional parameters to the importAndLoad operation, they can be sent through the Parameters element.

Configure a second Oracle HCM Cloud Adapter in the Adapter Endpoint Configuration Wizard to use the Query the status of HCM Data Loader Job (HDL) operation. This action can run inside a loop until the HDL job status is identified as Started or any other status that you want:

Select Query the status of HCM Data Loader Job (HDL).
Map Process ID to getStatus Request (Oracle HCM Cloud) > Process ID.
Map the response from the Query the status of HCM Data Loader Job (HDL) operation.

Oracle Integration provides extensive visibility into the job status by providing status information at various states, namely the job, load, and import states. Compare the status of the job as per your business requirement. The overall status can be one of the following values:

NOT STARTED - The process has not started yet; it is waiting or ready. If this value is returned, poll again after a short wait.
DATA_SET_UNPROCESSED - The process is running, but the data set has not been processed yet.
IN_PROGRESS - The process is running.
COMPLETED - The data set completed successfully. The job is complete and you can fetch the output.
CANCELLED - Either the data set load or the data set import was cancelled. The job is cancelled.
ERROR - The data set or process is in error. The job ended in error.

This concludes the blog. You can see how Oracle Integration streamlines the process of submitting an HDL job by natively supporting generation of the HDL .dat file, submitting the HDL job, and querying the HDL job status. To learn more about the feature, please refer to the Oracle HCM Adapter documentation here.



Choosing Your Update Window

Starting in the November quarterly release we will allow customers to choose between two update windows for their OIC Generation 2 instances.

What is an Update Window?

We currently provide OIC functional updates every quarter. For OIC Generation 2 instances we do this in two windows, usually two weeks apart. Starting with the November release we will allow customers to select the window in which they wish to be updated. We recommend that non-production instances are updated in the first window and production instances in the second window. This allows customers to sanity-check the update before it is applied to production instances.

How Do I Select a Window?

We use the OCI tagging mechanism to identify the window in which an OIC instance should be updated. We currently look at one tag: OIC_UPDATE_WINDOW1. If that tag is set then we update the instance in window 1; otherwise we update it in window 2. The rules for the update window are outlined in the flow chart below. Note that if the tag is not set then update window 2 is selected. In other words, window 1 is only selected if the OIC_UPDATE_WINDOW1 tag is attached to the instance.

How Do I Tag an Instance?

Tagging an instance can be accomplished from the OCI console (it can also be scripted; see the sketch at the end of this post). Navigate to the OIC instance you want to tag. Press the Add Tags button. Enter the tag as the tag key, for example OIC_UPDATE_WINDOW1. Do not select a namespace. There is no need to enter a value for the tag. After adding a tag you can verify it is set by selecting the Tags tab and choosing the Free Form Tags section to view the tags applied.

How Do I Change the Update Window?

You change the update window by deleting the tag and creating a new tag. If you need to delete the tag, you can click the pencil next to the tag; you then have the option to remove it.

When Do I Need to Set My Tag?

The cutoff for tagging instances for the November window 1 update is October 26. We will send a notification two weeks before the update to confirm which instances are being updated. Cutoff dates for future update window changes will be announced closer to their release dates.

Can I Use Other Tags?

For the purposes of the update window we only look for the OIC_UPDATE_WINDOW1 tag. You are free to use other tags. For example, you may want to tag instances targeted for window 2 with an OIC_UPDATE_WINDOW2 or OIC_UPDATE_DEFAULT_WINDOW tag to make them easy to identify, but such a tag will have no impact on the actual window being used. If an instance has the OIC_UPDATE_WINDOW1 tag attached to it then it will be updated in window 1 regardless of the existence of other tags.

Summary

Starting with the next functional update in November you can choose one of two windows for your OIC instances to be updated. You select the window by using OCI tagging to identify the window the instance should be updated in. We hope you like the additional control this gives you over the update process.
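As referenced above, the tagging can also be scripted. The following is a minimal sketch using the OCI Python SDK; the instance OCID is a placeholder, and note that the update call replaces the freeform tag map wholesale, so any existing tags are merged in first:

# A sketch of tagging an OIC instance for update window 1 via the OCI Python SDK.
# The instance OCID is a placeholder; freeform_tags is replaced wholesale,
# so existing tags are merged in before the update.
import oci

config = oci.config.from_file()
client = oci.integration.IntegrationInstanceClient(config)

instance_id = "ocid1.integrationinstance.oc1..example"   # placeholder OCID
instance = client.get_integration_instance(instance_id).data

tags = dict(instance.freeform_tags or {})
tags["OIC_UPDATE_WINDOW1"] = ""                          # presence matters, not the value

client.update_integration_instance(
    instance_id,
    oci.integration.models.UpdateIntegrationInstanceDetails(freeform_tags=tags),
)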



Inbound EDI message to Oracle Integration for B2B World

In this blog, let's create a basic inbound integration flow that receives an EDI document through a REST request, parses and validates the EDI, converts it to XML, and returns the XML in the response. We shall use the REST adapter to keep the blog simple, but you can replace it as per your business requirements with the FTP adapter, SOAP, or any other technology or application-specific adapter.

Prerequisites for implementing this use case: prior implementation experience with Oracle Integration, or basic knowledge of using Oracle Integration; also refer to my previous blogs on B2B.

Let us look at the implementation. In the navigation pane of Oracle Integration, click Integrations. On the Integrations page, click Create. Select App Driven Orchestration as the style to use. The Create New Integration dialog is displayed. In the What do you want to call your integration? field, enter Inbound EDI via REST, then click Create.

Configure the REST Adapter Trigger Connection

1. On the integration canvas, click the start node and select Sample REST Endpoint Interface as the trigger connection. The Adapter Endpoint Configuration Wizard opens. On the Basic Info page, enter the following details:

In the What do you want to call your endpoint? field, enter Receive-EDI.
Enter /inbound_edi as the endpoint's relative resource URI.
Select POST as the action to perform on the endpoint.
Select to configure both request and response for this endpoint, and click Next.

2. On the Request page:

Select Binary in the Select the request payload format field. Note: Most EDI documents contain only textual data, but they may also include line breaks and special characters, which are used as delimiters. A few EDI documents contain raw binary data, such as images, along with text. Therefore, select Binary as the request payload format.
Select Other Media Type as the media type you want the endpoint to receive. In the Media Type field, enter application/EDI-X12.

3. On the Response page:

Select JSON Sample in the Select the response payload format field. Click the inline link next to enter sample JSON. In the resulting page, enter the following sample JSON and click OK.

{
   "translate_result": "xxxx",
   "hasError": false,
   "validationErrors": "xxxx",
   "translated_payload": "xxxx"
}

Notice that JSON is automatically selected as the response media type. Click Next, and on the Summary page, click Done to complete the REST Adapter configuration. The integration flow is now represented in the canvas; click Save to save your integration flow.

Configure the EDI Translate Action

Add an EDI translate action to the flow to translate an EDI document into an Oracle Integration XML message.

1. On the right side of the canvas, click Actions, drag EDI Translate, and drop it after the first Receive-EDI element. The Configure EDI Translate Action wizard opens.
2. On the Basic Info page, enter EDI-Translate as the name for the action, and click Next.
3. On the Select Data Formats page, enter the following details:

Leave the Inbound EDI message to Oracle Integration message radio button selected.
Select the document version as 8010 (or 4010, etc.).
Select the document type as 850 (Purchase Order).
Select the document definition as the one you have created, or Standard.
Select the EDI character encoding as UTF8.
Select Yes in the Perform validations on input data? field.
Click Next.

4. On the Summary page, click Done to complete the configuration. Change the layout to Horizontal and click Save to save your integration flow.
Note that the corresponding mapping element is automatically added to the integration flow.

Configure Mapping Actions

Configure data mappings for the EDI-Translate action and the Receive-EDI action in order to successfully parse the incoming EDI message and translate it to an XML message.

Configure the Map to EDI-Translate Action

1. Click the Map to EDI-Translate action and select Edit.
2. Map streamReference on the left to edi-payload on the right, within TranslateInput.
3. Click Switch to Developer View and edit the expression for the mapping. Prefix the function call encodeReferenceToBase64 to the existing edi-payload expression as follows: oraext:encodeReferenceToBase64(/nssrcmpr:execute/ns20:streamReference).
4. Click Save.
5. Click Validate and then Close.

Configure the Map to Receive-EDI Action

1. Click the Map to Receive-EDI action and select Edit.
2. Click Switch to Developer View.
3. Map edi-xml-document (present on the left within $EDI-Translate, under TranslateOutput) to translated_payload, within the response-wrapper on the right. Note: To achieve this mapping, drag and drop the get-content-as-string function onto translated_payload, then drag and drop edi-xml-document within the parentheses.
4. Click Save.
5. Map translation-status within the same TranslateOutput element to translate_result on the right.
6. Map validation-errors-present to hasError.
7. Map validation-error-report (TranslateOutput > validation-errors) to validationErrors.
8. Click Validate and then Close.

Activate the Integration

Check for errors, save, and activate the integration flow. You'll notice an error notification on the canvas. To resolve it, click the Actions menu in the top-right corner of the canvas, and select Tracking. In the resulting dialog, select streamReference on the left and move it to the table on the right. Click Save. Save the integration and click Close. On the Integrations page, click the toggle button against your integration to activate it. Click Activate in the Activate Integration dialog.

Parse Your First EDI Document

To execute your sample integration, send a request from a REST client tool such as Postman, or use the Oracle Integration console to test (a scripted alternative is also sketched at the end of this post). If you use the console, you can ignore steps 1-3 below.

1. Create a request definition in the REST client tool that sends an HTTP POST to the URL: http://host:port/ic/api/integration/v1/flows/rest/INBOUND_EDI_VIA_REST/1.0/inbound_edi
2. Configure authorization for the request. Select HTTP Basic Auth and provide the username and password for your Oracle Integration instance.
3. Add an HTTP header with Content-Type as the key and application/EDI-X12 as the value. Use the sample EDI X12 PO.
4. Send the request from the REST client.
5. Verify the response from the integration flow. Check that the response status is 200 OK and the JSON is displayed as follows:

{
  "translate_result": "Success",
  "hasError": false,
  "validationErrors": "",
  "translated_payload": "<edi-xml-document...>"
}

The translate_result parameter with the value Success indicates that the EDI X12 document was parsed and translated successfully. For more information, please visit the Oracle docs and refer to my other blogs.
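As an alternative to Postman, the same test call can be sketched in Python with the requests library; the host, credentials, and sample EDI file below are placeholders for your own instance and your sample 850 document:

# A sketch of invoking the activated integration with Python's requests library.
# Host, credentials, and the EDI payload file are placeholders for your environment.
import requests

url = ("https://myhost.integration.ocp.oraclecloud.com/ic/api/integration"
       "/v1/flows/rest/INBOUND_EDI_VIA_REST/1.0/inbound_edi")

with open("sample_850.edi", "rb") as f:       # a sample EDI X12 purchase order
    resp = requests.post(
        url,
        data=f.read(),
        headers={"Content-Type": "application/EDI-X12"},
        auth=("oic.user@example.com", "password"),   # HTTP Basic Auth
    )

resp.raise_for_status()
print(resp.json()["translate_result"])        # "Success" on a clean parse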



How to create an XSLT map that reads many correlated payloads

Summary

In this post we will see a way of creating XSLT maps that need to loop through different sources (aka input payloads) whose instances are correlated by key fields, with examples for 1:0..n and 1:1 relationships between sources.

I will use the classic Business Units and Employees example: each Business Unit can have 0..n Employees (a 1:0..n relationship). There is also a G/L Accounts source with a 1:1 correlation with Business Units. I want to create an XSLT map that puts them together. Here are the sources (aka input payloads):

$BusinessUnits

<company>
  <bu>
    <id>SD</id> <name>Software Development</name> <accountid>i9</accountid>
  </bu>
  <bu>
    <id>BS</id> <name>Sales</name> <accountid>i1</accountid>
  </bu>
  <bu>
    <id>MD</id> <name>Marketing</name> <accountid>i2</accountid>
  </bu>
</company>

$Employees

<people>
  <emp> <buid>SD</buid> <name>Jorge Herreria</name> </emp>
  <emp> <buid>SD</buid> <name>Steve Jobs</name> </emp>
  <emp> <buid>BS</buid> <name>Larry the Cable Guy</name> </emp>
</people>

$GLAccounts

<gl>
  <account> <id>i1</id> <number>001.345</number> </account>
  <account> <id>i2</id> <number>001.477</number> </account>
  <account> <id>i9</id> <number>001.223</number> </account>
</gl>

As you may have derived, the link between the Business Units and Employees is the business unit ID: the "header" is $BusinessUnits and the "detail" is $Employees. The link between GL Accounts and Business Units is the account ID.

Here's the output needed:

<xxx>
  <yyy>
    <BU id='SD'>Software Development</BU>
    <empName>Jorge Herreria</empName>
    <accNumber>001.223</accNumber>
  </yyy>
  <yyy>
    <BU id='SD'>Software Development</BU>
    <empName>Steve Jobs</empName>
    <accNumber>001.223</accNumber>
  </yyy>
  <yyy>
    <BU id='BS'>Sales</BU>
    <empName>Larry the Cable Guy</empName>
    <accNumber>001.345</accNumber>
  </yyy>
</xxx>

A Solution

When the instances (records) of the sources have a 1:1 correlation, using a predicate performs well. When the instances have a 1:0..n correlation, using xsl:for-each-group performs better than using predicates because it avoids over-parsing the source.
Here's the XSLT (note that the for-each-group selects the emp elements from the $Employees payload shown above):

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsl:stylesheet version="2.0"
                xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:fn="http://www.w3.org/2005/xpath-functions"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="BusinessUnits"/>
  <xsl:param name="Employees"/>
  <xsl:param name="GLAccounts"/>
  <xsl:template match="/">
    <xxx>
      <xsl:for-each-group select="$Employees/people/emp" group-by="buid">
        <!-- This section is executed only once per 'buid'. -->
        <!-- Store the Business Unit record in a variable. -->
        <xsl:variable name="BURecord">
          <xsl:copy-of select="$BusinessUnits/company/bu[id = fn:current-grouping-key()]"/>
        </xsl:variable>
        <!-- Store the GL Account record in a variable. -->
        <xsl:variable name="GLAccountRecord">
          <xsl:copy-of select="$GLAccounts/gl/account[id = $BURecord/bu/accountid]"/>
        </xsl:variable>
        <!-- end: executed only once per 'buid' -->
        <xsl:for-each select="current-group()">
          <!-- Iterates over the employees within the current 'buid'. -->
          <yyy>
            <BU id="{./buid}">
              <xsl:value-of select="$BURecord/bu/name"/>
            </BU>
            <empName>
              <xsl:value-of select="./name"/>
            </empName>
            <accNumber>
              <xsl:value-of select="$GLAccountRecord/account/number"/>
            </accNumber>
          </yyy>
        </xsl:for-each>
      </xsl:for-each-group>
    </xxx>
  </xsl:template>
</xsl:stylesheet>

Take away

When there is a 1:1 relationship, using predicates instead of <xsl:for-each-group> is faster, because the XSLT engine does not need to sort the data to create the groups. When there is a 1:0..n relationship, <xsl:for-each-group> performs faster than predicates, because predicates would, in the example above, parse the entire Business Unit source and GL Account source for every employee.

Some links:

XPath predicates: https://www.w3schools.com/xml/xpath_syntax.asp
xsl:for-each example: https://www.xml.com/pub/a/2003/11/05/tr.html



Creating Complex Local Temporary Variables in OIC

Introduction

Over the past couple of years, I have been involved in more and more projects and proofs-of-concept moving assets from Oracle SOA Suite to Oracle Integration Cloud. It is important to note that these are quite different products and serve different needs. SOA Suite is a very extensive tool which allows you to create apps, user interfaces, integrations, and much more. Oracle Integration Cloud (OIC) is more focused on just integration. However, many companies have used SOA Suite simply for integration purposes, which makes the move to OIC attractive.

There are many actions in the world of BPEL that don't yet exist in OIC, but one that is particularly troublesome is the lack of local variables. It is often convenient in BPEL to use a local variable to make future mappings easier in the integration. SOA Suite's BPEL lets you create arbitrary variables of any shape and size based on a schema definition. In OIC, local variables are either a string or based on a trigger or invoke operation.

Data Stitch

The new Data Stitch action almost gets us there, but again, the stitch variable must be a type already defined by a trigger or invoke. You can't define a schema for the variable if it does not already exist.

Solution

Stage file to the rescue! As you may know, you can use the stage file action to create a file on the local, in-memory, temporary filesystem. This is very handy when manipulating files, performing zip and unzip operations, and preparing files for file-based integrations such as ERP Cloud. But it can also be used to create temporary local variables!

Create the local file

First, create a stage action to write the file. I typically give it a name like the variable type and put it in a "/tmp" directory. Now I'm able to define the schema for the local variable. You can use an XSD, a JSON sample, an XML sample, or an EDI document. In this case, I'll just use a sample JSON (an illustrative sample is shown at the end of this post).

Write contents to the "local variable"

Now we can map values to the file based on the schema definition.

Read the contents back

Now we can use the stage action to read the contents back in. We can use the reference from the write operation, and the same schema that was used for the write operation.

Use the new local variable

You can now use the response from the read operation as if it were a local variable. You can also use this response to define a Data Stitch variable. You can wrap these steps inside a scope to "hide" the complexity and use the variable outside the scope via the data stitch.

Conclusions

Many times it is nice to create several "local" variables before doing your final mapping. This can make the integration much easier to map and maintain. Keep on integrating!
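As referenced above, the sample JSON that defines the shape of the "local variable" can be as simple as the following; the field names are hypothetical and would match whatever structure you want the variable to carry:

{
  "orderId": "12345",
  "status": "NEW",
  "lines": [
    { "item": "ABC-1", "quantity": 2 }
  ]
}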


How to Configure the New Oracle Integration Cloud (OIC) Streams Adapter

Introduction

I'm writing this blog because some of the existing documentation is a bit confusing, and I initially struggled to get the new OIC Streams Adapter configured.

Connector Parameters

There are not many things to set up for the connector, but let me explain each one in detail.

Bootstrap Servers

If you search around, you'll find lots of different examples of how this should look. Some use a "cell-1" prefix, some show a "console" prefix, and others. For the OIC Streams Adapter, this should be in the form:

streaming.us-ashburn-1.oci.oraclecloud.com:9092

where "us-ashburn-1" is replaced with your region (e.g. us-ashburn-1, us-phoenix-1, etc.). This can be found in the URL when you log in to the OCI console. The rest should be the same, and the port should be 9092.

SASL Username

This part can be confusing. The docs often speak of an ID, but they do not mean an OCID. Let me try to make it clear here. It is made up of three parts:

1. Tenancy NAME. This can be found in the OCI console by clicking on your username and then Tenancy.
2. Stream user NAME. This is configured according to the documentation; it is just the NAME of a user with authorization to publish or consume messages from the stream.
3. Stream pool OCID. In this case, we don't want the name but the actual OCID of the stream pool. This can easily be found on the streams page, under the stream pool's OCID.

These three are then concatenated together to form the SASL username, e.g.:

mytenancy/streaming-user/ocid1.streampool.oc1.iad.amaaaaaa...cf54nka

SASL Password

This is an authorization token generated for the streams user. Go to the specific user to be used to publish or consume messages and click the Generate Token button.

TrustStore

This field can be confusing. Depending on your browser, you download the trust details in different ways. I've found that even the same browser behaves differently on Windows, Mac, or Linux. I'll show an example here, but it is unfortunate this can't be done for us.

Go to your OCI console and click the security button in your browser. Typically, in Chrome, clicking the Certificate button shows the certificate details. Click Details, pick the top-level root certificate, and then click the export button. Again, different OSes and browsers work differently, but the basic idea is the same. Save this file somewhere.

Now you have to create a JKS truststore. To do this, you need to have Oracle Java installed on your computer and use the keytool found in the bin directory.

NOTE: You MUST use Oracle Java to build the trust store. I tried using OpenJDK and it did NOT work.

Make the keystore using a command like this:

/oracle/jdk/bin/keytool -keystore us-ashburn-1.jks -alias CARoot1 -import -file TopRoot.crt

Where:

/oracle/jdk/bin is where your Java is installed.
us-ashburn-1.jks can be any name you want.
CARoot1 can be anything you want.
TopRoot.crt is the file you exported earlier.

This will create the JKS file that you then upload to the Streams Adapter connection configuration.

Testing

At this point you should be able to click the Test button and verify the configuration is correct. (You can also verify the same credentials outside OIC with the script at the end of this post.)

Conclusions

This adapter works well, but setting it up can be confusing. I hope this blog helps others avoid headaches and clarifies exactly what to set in each field.
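As referenced above, a quick way to sanity-check the same bootstrap server, SASL username, and token outside OIC is a short script with the kafka-python client; the tenancy, user, stream pool OCID, token, and stream name below are all placeholders:

# A sketch for verifying OCI Streaming credentials with kafka-python,
# using the same values as the adapter connection. All values are placeholders.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="streaming.us-ashburn-1.oci.oraclecloud.com:9092",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="mytenancy/streaming-user/ocid1.streampool.oc1.iad.example",
    sasl_plain_password="<auth token>",       # the token generated for the streams user
)

# Send one test message to a stream named "my-stream" and wait for the ack.
producer.send("my-stream", b"hello from kafka-python").get(timeout=30)
producer.close()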


How to Keep Exactly One OIC Integration Instance running 24/7

Introduction

Oracle Integration has a great feature to periodically run a flow instance according to a schedule. This schedule can be something simple, like "run every 10 minutes", or something more complex using a calendar (iCal/ICS) definition. This works great for scenarios that run periodically and then exit. However, I've recently been tasked to have an OIC flow monitor and process messages from an Oracle Stream using the new Oracle Streams Adapter. This would ideally require a flow to be running 24/7 to look for messages and process them. But OIC is not designed to run a single flow instance for more than a few minutes, and if we set up a schedule, there will be gaps in time where no instances are running.

Use Case

My use case is for the Streams Adapter, but this approach will work regardless of what is being done inside a given flow. So what's the secret? Use the OIC REST APIs for the scheduling service. The basic idea is:

- Use the scheduler to start the first instance and define the interval.
- Use some basic logic to determine which schedule window we are currently running in.
- When we are in the next schedule window, force the next schedule to start, causing it to be in BLOCKED mode.
- Exit the flow.
- The next instance will start within a second or two.

Graphically, it looks something like this:

As you will see, even though this is a scheduled flow, the scheduler should only start the first instance. The process itself queues up the next instance once it determines it is inside the next schedule window. In this example, the window time is 10 minutes, but it doesn't really matter what is set here as long as it is short enough to keep a single instance running without OIC shutting it down.

Create the Connector

I won't detail the creation of the OIC REST connector, but it should be pretty easy to set up using your OIC URL in the form:

https://myhost.integration.ocp.oraclecloud.com:443/ic/api

The Flow

Find the Next Schedule Window

Once the connector is set up, the first thing we need to do in the flow is make a call to the scheduler API and get a list of current items. The REST endpoint will look something like this:

/integration/v1/integrations/MYFLOW%7C01.00.0000/schedules/Schedule_MYFLOW_01_00_0000/futureruns

The results will look something like this:

{
  "items": [
    {
      "id": "48233",
      "requestId": 48233,
      "runTime": "2020-09-11T00:31:15.589+0000",
      "runType": "Submitted Run",
      "state": "RUNNING",
      "submitterName": "john.graves@oracle.com"
    },
    {
      "id": "48234",
      "requestId": 48234,
      "runTime": "2020-09-11T00:42:13.000+0000",
      "runType": "Auto Scheduled Run",
      "state": "WAIT",
      "submitterName": "john.graves@oracle.com"
    },
    ....
}

The current instance will always be the first item, in the RUNNING state. The next scheduled window will be in the second item, which might be in a WAIT, BLOCKED, or some other state. So the first step is to call this API and find the second item to know when the next window will start.

Configure the REST adapter to do a GET operation passing the integration ID, version, and schedule ID, then map the instance information into the REST service. The integration ID will be something like this:

concat($self/nsmpr1:metadata/nsmpr1:integration/nsmpr1:identifier, "%7C", $self/nsmpr1:metadata/nsmpr1:integration/nsmpr1:version)

The %7C is the URL-encoded form of the pipe symbol "|".
The schedule ID will be something like this:

concat("Schedule_", $self/nsmpr1:metadata/nsmpr1:integration/nsmpr1:identifier, "_", translate($self/nsmpr1:metadata/nsmpr1:integration/nsmpr1:version, ".", "_"))

There is a "Schedule_" prefix and an underscore "_" between the ID and version. Also, the version uses underscores rather than the standard dot notation, so we use the translate function.

Then we can grab the result and put it into a variable for our while loop. To get this value, we use an XPath similar to this:

$FindTimestampForNextScheduledRun/nsmpr7:executeResponse/nsmpr1:response-wrapper/nsmpr1:items[2]/nsmpr1:runTime

Note: XPath uses a start index of one, so the second item is [2]. If you want, you could check that at least two items were returned before doing this assignment, but it won't really matter much to the flow.

The While Loop

Now we can start our while loop and check whether the current timestamp is in the next scheduled window. If it is, we can exit our loop and manage the starting of the next instance. The while loop check will look something like this:

fn:current-dateTime() < xsd:dateTime($nextWindowTimestamp)

We have to cast the string variable into a dateTime to make the comparison work. All the real work should be done inside this loop. In my case, I'm calling an Oracle Stream using the Streams Adapter. This adapter will wait for messages, up to one minute, and then exit. Inside my ProcessMessage scope, I check to see if any messages were found and send them downstream.

Set Up the Next Instance

Once this flow has run into the next scheduled time window, we want to force a new instance to start.

Note: The reason for this is that if we simply exit and we are past the start time for the next instance, the scheduler will SKIP this instance and wait for the next time window. That's not what we want.

Note: If we tried to exit this instance "just before" the next scheduled instance, we'd have to know exactly how long our loop will run. And if we are wrong, there would be no instance in that scheduled window. Not good. (I go into more detail about this in the demo video below.)

So, we wait until AFTER the next window starts to force a new instance to start. Again, this is done using the OIC REST API and adapter. To force the next instance to run, we POST to the API:

/integration/v1/integrations/MYFLOW|01.00.0000/schedule/jobs

with a payload like:

{ "runAsSchedule": true }

We map the instance ID and version the same as before and pass true() to the runAsSchedule payload.

Note: It is VERY important to pass this value, otherwise you can have multiple instances running at once.

Making this call will create an instance in the schedule set as BLOCKED. A BLOCKED item will start immediately after the existing instance completes. This makes it safe to exit this flow knowing the scheduler will kick off a new instance.

Now, it should never happen, but if something were to go wrong and an extra instance was set up in the schedule queue, you could get multiple BLOCKED instances. So I added a little check to see if we already have a BLOCKED instance before making this REST call. This makes sure we don't get more than one. The check makes the same call to get the current schedule list and looks for any items with a state of BLOCKED.
The condition value is:

count($CheckForExistingScheduledRuns/nsmpr7:executeResponse/nssrcdfl:response-wrapper/nssrcdfl:items[nssrcdfl:state='BLOCKED']) > 0

If there is already a BLOCKED schedule, we can simply exit and be certain a new instance will be started. And that's it: you will have one and only one instance of this flow running 24/7!

Demo

Here is a quick demo video showing this process in action.

Sample Code

Here is the code if you want to see the example in detail: https://github.com/gravesjohnr/oicsamples/blob/master/AlwaysRunningFlowSample/TESTSCHEDULEDSTREAMSGETMESSAGE_01.00.0000.iar
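Finally, for readers who find linear pseudo-code easier to follow than canvas screenshots, here is a rough Python sketch of the same control loop against the two scheduler endpoints described above. The flow name, version, host, and credentials are placeholders, and in a real deployment this logic lives inside the OIC flow itself rather than in an external script:

# A sketch of the "always running" control loop against the OIC scheduler REST API.
# Flow name, version, host, and credentials are placeholders; inside OIC this
# logic is implemented with a while-loop action rather than Python.
import time
from datetime import datetime, timezone

import requests

BASE = "https://myhost.integration.ocp.oraclecloud.com/ic/api/integration/v1"
AUTH = ("oic.user@example.com", "password")
FLOW, VERSION = "MYFLOW", "01.00.0000"

def next_window_start():
    """Return the runTime of the second future run (the next schedule window)."""
    url = (f"{BASE}/integrations/{FLOW}%7C{VERSION}"
           f"/schedules/Schedule_{FLOW}_{VERSION.replace('.', '_')}/futureruns")
    items = requests.get(url, auth=AUTH).json()["items"]
    return datetime.strptime(items[1]["runTime"], "%Y-%m-%dT%H:%M:%S.%f%z")

deadline = next_window_start()
while datetime.now(timezone.utc) < deadline:
    time.sleep(60)      # stand-in for the real work (e.g. polling a stream)

# We are now inside the next window: queue the replacement instance (BLOCKED),
# then exit so the scheduler starts it immediately after this one completes.
requests.post(f"{BASE}/integrations/{FLOW}|{VERSION}/schedule/jobs",
              json={"runAsSchedule": True}, auth=AUTH)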



Oracle Intelligent Automation Solution

Automation, Artificial Intelligence (AI), and the Significance of Process Automation

Automation is based on systems that automate work previously done by humans; intelligent automation adds the use of AI to automate, empower, and accelerate digital transformation. Process automation is one of the most operationalized and most enterprise-practical aspects of AI. Applying the knowledge from data analytics techniques has been challenging due to its focus on historical data and lack of actionable outcomes. AI takes the analytics a step further - into actionable guidance - that is ideal for process mapping and for embedding in integration or process flows.

"Messy" unstructured work has been the hardest to automate and has not been available for process coverage - while automation would provide the highest value on top of exactly such unstructured and messy work. AI finally closes the gap by providing analysis of conversational, content, and web interfaces (for example, sentiment analysis), thereby organizing and labelling unstructured work - finally giving organized and digitalized input to the process or integration component to proceed with automation. Applying AI technologies allows process-flow pattern detection and prediction of integration, process, and business metrics - guiding corrective actions and reaching much higher levels of needed proactivity. AI-based automation leads to a new and intensive wave of business process redesign. The speed of processes and a boost in the coverage of complex processes is the transformational impact that AI brings to the success of integration and process automation initiatives.

Oracle has supported a wide range of customers in the Business Process Management (BPM), Digital Process Automation (DPA), and Intelligent Process Automation (IPA) space for more than 15 years. In recent years Oracle Process Automation has been an integrated part of the Oracle Integration Cloud (OIC) and provides a suite of business process improvements and next-generation services that assist the knowledge worker by removing repetitive, replicable, and routine tasks, and that radically improve customer journeys by simplifying interactions and speeding up processes.

Oracle Intelligent Automation Solution - on Top of Enterprise Applications

As a vendor with a long tradition in process automation and integration technologies, Oracle certainly provides leading connectivity to Oracle Software as a Service (SaaS) and Oracle Applications. However, there is more to offer from the world's biggest enterprise software vendor. Our view is that the intelligent automation needs of our SaaS and applications customers are perfectly fulfilled by the market-leading functionality of Oracle Platform Services in the areas of process automation, application integration, conversational AI, low-code application development, and content management.

There is a group of Oracle Platform Services that is able to automate integration to help break down data silos, automate new processes to reduce manual and mundane work, automate experiences enabled by new conversational technology such as voice, mobile, and chat across all of the applications, and automate document and content capture and extraction. The integration, process, and experience layers provide the basis for additional automation with their set of pre-built skills, apps, and accelerators. These platform services are the building blocks of the Oracle Intelligent Automation Solution.
Picture: Oracle Intelligent Automation Solution; Source: Oracle EMEA Go-To-Market Team

The building blocks of Oracle Platform Services in a complete Oracle Intelligent Automation Solution are:

Oracle Integration Cloud (OIC SE and EE): a home-grown, cloud-based offering that combines application integration functionality commonly found in iPaaS (integration platform as a service) offerings with a business process automation suite.
Oracle Visual Builder (OVB): a low-code, graphical design and development environment for web and mobile front-ends for workflow as well as database applications; bundled with Oracle Integration Cloud (OIC).
Oracle Digital Assistant (ODA): a platform and toolset for building AI-powered conversational chatbots that can hand off requests to human agents and automatically mine established written FAQs to create knowledge bases.
Oracle Content and Experience (OCE): a single content hub to create, manage, and publish omnichannel content, including digital assets, user-generated content, web content, and business documents, with full-text indexing and intelligent capture extraction.

The starting point for rationalizing AI and intelligent automation is application infrastructures. Oracle Integration Cloud (OIC), Oracle Digital Assistant (ODA), Oracle Visual Builder (OVB), and Oracle Content and Experience (OCE) have all become de facto standard platform elements for Oracle Cloud Applications, as well as for their secondary customization or extension by partners or clients. As a global market leader in the enterprise applications space, in the Oracle Intelligent Automation Solution we provide very practical AI that is specific to enterprises (as opposed to a broad spectrum of AI). The solution combines a set of tailored automation and intelligent services that make it easier for users to navigate through all the interrelationships across applications, simplify interactions, and bring data and processes closer to the users.

Enabling Digital Transformation with a "One-Stop Shop for Intelligent Automation" and "True Enterprise Intelligent Automation"

Customers' transitions in the journey to the cloud need to be taken carefully, and Oracle recommends two phases here: "move and improve" - a low-risk approach that revolves around hosting a client's existing on-premise capabilities on Oracle Cloud Infrastructure (OCI); and "improve and innovate" - transitioning to new, cloud-native and Software as a Service (SaaS) applications with all modern application architecture approaches. Both phases include a roadmap to digital transformation, while still utilizing protected, moved, and improved on-premise assets alongside.

Intelligent automation accelerates the path to digital transformation by eliminating barriers between business applications through a combination of machine learning, embedded best-practice guidance, and pre-built application integration and process automation. In Oracle Integration Cloud (OIC), we enable faster time to market, better business agility, and complete visibility across business process lifecycles, where AI co-works with automation services to mimic activities carried out by humans, collects data and analytics about these activities, and learns over time to do such activities better.
With the addition of a range of automation-driven cloud services that also utilize AI - the complementary Oracle Visual Builder (OVB), Oracle Digital Assistant (ODA), and Oracle Content and Experience (OCE) products - we are able to support a "one-stop shop for intelligent automation" and "true enterprise intelligent automation".

AI Capabilities and Advantages of Oracle

Oracle has built a mastery of AI, deep learning, and natural language processing within our conversational experience tooling. Our integration and process automation layers provide generation of auto-matching/mapping, recommendations for building canonical models, best next actions in workflow design, identification of patterns or anomalies in streaming data and prediction of SLA violations, orchestration of Robotic Process Automation (RPA) bots, adaptive intelligent apps, a machine learning service, and intelligent decisions. The hybrid architecture of the Oracle Integration Cloud (OIC) can support models that allow intelligent edge devices to communicate. The high volume of existing customers of the application integration layer provides a vast amount of data for creating machine learning models. Oracle's application and SaaS market leadership can leverage end users to efficiently generate, augment, and review training data. Most types of reviews can easily be put into an Oracle Integration Cloud (OIC) workflow to receive a simple human annotation or judgment. Continuous feedback sets up transfer learning from a cohort of humans rather than machines.

The Add-On Value of Oracle Cloud Infrastructure (OCI)

We continue to invest heavily in our own Oracle Cloud Infrastructure (OCI) and the portfolio of application and platform products on OCI, offering simplicity, security, safety, and cost efficiency. Oracle Integration Cloud and the other platform services in the Oracle Intelligent Automation Solution run natively on Oracle Cloud Infrastructure (OCI). OCI is a true enterprise cloud: it supports scale-up as needed, is highly performant, and is built for real production workloads. Oracle's next-generation infrastructure (OCI) provides a set of native cloud-native services that can be categorized into services for application development and operations and services for observability and messaging. The platform services can leverage OCI services such as Oracle Functions and OCI Events.

Oracle Platform Services in the Oracle Intelligent Automation Solution integrate with Oracle data management and infrastructure platforms. A great place to start with intelligent automation is not far from the database, a critical part of the organization. Oracle Autonomous Database automates routine database tasks, allowing focus on getting the most out of data. Oracle Database Machine Learning extends the Oracle Database and enables users to build "AI" apps and analytics dashboards with its powerful in-database ML algorithms and automated ML functionality, integrated with Python and R. OCI Data Science is a platform for building, training, and managing ML models on Oracle Cloud, making them available for apps and analytics and focusing on the collaboration of teams of data scientists in the enterprise.
RPA and the Oracle Intelligent Automation Solution

Oracle Content and Experience (OCE) provides capture of content and business documents with full-text indexing and intelligent extraction. It allows you to scan and import documents in bulk from an imaging scanner, from a business application, or from an email account or monitored folder; reorganize and classify them; assign document profiles; automate their grouping; read barcodes; populate metadata values; create content capture workflows; hand an uploaded document to an OIC process; associate documents with processes; and provide a custom user interface for interacting with business processes. These are often the major needs of initial RPA projects and can be successfully fulfilled with OCE. Additionally, Oracle is able to partner with RPA vendors and already has some of their offerings on the OCI Marketplace.

Oracle Integration Cloud (OIC) comes with free native RPA adapters, enabling third-party RPA bots to be invoked from integration flows in order to access data from applications. RPA vendors can also be integrated using the technology adapters available in OIC (for example, the REST adapter). The Oracle Intelligent Automation Solution can complement RPA projects by helping to identify opportunities for automation, tidy up processes and make them efficient (so that the RPA bot can spend less time on training), and track the progress of the RPA project.

Oracle does not consider RPA to be the centre of intelligent automation, but rather one of its components, to be orchestrated by the Oracle Integration Cloud (OIC) alongside other components from content, conversational, and web interfaces, and alongside decision models, AI models, human and system components, milestones, dynamic processes, and integration composites. Apart from simple tasks, automation of activities, systems, and more complex processes will not be instant but will rather evolve through levels of a changing relationship between the human and the machine - indicating automation's high dependence on the process and integration components of Oracle Integration Cloud (OIC).

The RPA market grows from the fact that task automation and desktop automation (of less strategic, tactical processes) is very often the automation project that actually starts first. Enterprises have already started to explore a broader toolbox to help them orchestrate true end-to-end processes and automate more complex processes that bring an enterprise's customers, employees, and suppliers closer. The Oracle Intelligent Automation Solution is a perfect fit for such needs.



An Advanced Guide to OIC Notifications via Email

Introduction

Do you know how SMTP servers detect spoofing or the forging of the visible sender? Do you know how SMTP servers detect whether a sender is legitimate? This blog answers those questions.

With the migration of customers from OIC Generation 1 to Generation 2, we have changed the underlying stack that sends email from the Cloud Notification Service (CNS) to the OCI Email Service. With this change, any SPF and DKIM configuration done previously is no longer valid, and these need to be reconfigured to increase deliverability.

Using Your Own From Address for Gen2

If you want to use your own "from" address, like no-reply@oraclecloud.com, you have to follow these two steps:

1. Register the from address on the Settings -> Notification screen.
2. Configure SPF and DKIM on the sender domain, i.e. oraclecloud.com. More information on SPF and DKIM is below.

SPF

SPF is an acronym for "Sender Policy Framework". SPF is a DNS TXT record that specifies which IP addresses and/or servers are allowed to send email "from" a particular domain. A domain administrator publishes the policy defining the mail servers that are authorized to send email from that domain. When an email is received, the inbound SMTP server compares the IP address of the mail sender with the authorized IP addresses defined in the SPF record. An example SPF record for oraclecloud.com looks like this:

v=spf1 include:spf_s.oracle.com include:spf_r.oracle.com include:spf_c.oraclecloud.com include:stspg-customer.com ~all

Where:

v=spf1 is the version.
include:spf_s.oracle.com is one of the domains authorized to use the from address.
The "all" tag tells the receiving server how it should handle messages sent from a domain if it sees a domain in the header that is not listed in the SPF record. The options are dictated by the character that precedes the "all" tag:

-all (dash all): a hard fail. Servers that aren't listed in the SPF record aren't authorized to send email for the domain, so the email should be rejected by the receiving server.
~all (tilde all): a soft fail. The server isn't listed in the SPF record, but the message should not be flat-out rejected by the receiving server. Instead, it will be marked as possible spam.
+all (plus all): NOT RECOMMENDED. This tag essentially means any domain is authorized to send email, even if it's not listed in the SPF record.

The SPF value to be added is given below and depends on the email region being connected to:

Americas: v=spf1 include:rp.oracleemaildelivery.com ~all
Asia Pacific: v=spf1 include:ap.rp.oracleemaildelivery.com ~all
Europe: v=spf1 include:eu.rp.oracleemaildelivery.com ~all

DKIM

DKIM (DomainKeys Identified Mail) is an email authentication technique that allows the receiver to check that an email was indeed sent and authorized by the owner of the sending domain. DKIM works by having the sending/outbound SMTP server add a digital signature to the headers of an email message. This signature can then be validated by the receiving/inbound SMTP server against a public cryptographic key located in the from address domain's DNS record. Customers should raise an SR ticket to get the public key, and should then add the key to a TXT record in the sender domain's DNS record.
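Once the SPF value has been added, you can verify that it has propagated by querying the domain's TXT records. The following is a minimal sketch using the dnspython library; the domain below is a placeholder:

# A minimal sketch that checks a domain's TXT records for an SPF entry,
# using the dnspython library. The domain below is a placeholder.
import dns.resolver

domain = "example.com"
for rdata in dns.resolver.resolve(domain, "TXT"):
    txt = b"".join(rdata.strings).decode()
    if txt.startswith("v=spf1"):
        print(f"SPF record for {domain}: {txt}")
        break
else:
    print(f"No SPF record found for {domain}")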
User Interface Improvements to Detect SPF Configuration

The screenshots below show a few changes that have been made to help customers detect whether SPF is configured for the from-address domain. If SPF is not configured, the SPF value to add to the DNS record is provided on the same screen. The customer can likewise track whether DKIM is configured for the from address using the screen below. Note: this UI will be available in future releases and is subject to change.

The next screen shows the value of the SPF record to be configured in the DNS record. Note: this UI will be available in future releases and is subject to change.

Default From Address

Avoid using no-reply@oracle.com as the from address, and avoid using the oracle domain in general. The default from address has been changed from "no-reply@oracle.com" to "no-reply@mail.integration.<region>.ocp.oraclecloud.com". Customers should change the from address in their integrations accordingly; the region attribute is provided by OIC.

Suppression List

"To" addresses are added to the suppression list for a number of reasons. Currently, hard bounces, soft bounces, and a large volume of emails to a recipient address are among the reasons a "To" address is added to the suppression list. If DKIM and SPF are not configured for the from-address domain, the likelihood of a bounce, or of messages being silently dropped by the receiving infrastructure, is higher. The suppression list cannot currently be viewed in OIC; customers should raise an SR ticket to have addresses removed from it.

Create a B2B Document to Define Custom Definitions in Oracle Integration (OIC)

In this blog, we will talk about how to create a B2B document to define custom definitions in Oracle Integration (OIC). You can create a customized document definition that you select in the Document Definition field of the EDI Translate Action Wizard in an integration. Custom document definitions are useful for environments in which your trading partner requires specific customizations to satisfy business requirements.

1. From the left navigation pane on the Oracle Integration home page, select B2B > B2B Documents.
2. Click Create.
3. Enter the following details for the new B2B document definition:
   - Name: Enter a document name.
   - Identifier: Automatically populated with the document name. You can manually change this value.
   - Description: Enter an optional description of the customization details for this document.
   - Document Standard: EDI X12 is automatically selected and cannot be deselected.
   - Document Version: Select the document version.
   - Document Type: Select the document type.
4. Click Create. The details page for your new B2B document is displayed. The Document Schema field shows Standard as the schema type by default.
5. Click Customize to customize the standard schema to satisfy your business requirements. If you previously created custom schemas, they are also displayed for selection in the dropdown list; you can select one of those schemas to create further customizations. The Clone Standard Schema dialog is displayed.
6. Enter a name and optional description for your custom schema.
7. Click Clone. This creates a copy of the standard EDI X12 schema for you to use as a baseline for customization.
8. Find the element you want to customize and select View Details. For this example, the CUR02 currency code element (part of the CUR segment) is selected.
9. Edit properties as necessary for your business environment.
10. Click Add a New Code List (if the element does not already have a code list defined).
11. Add a currency code name and optional description, and click x to save your updates.
12. Add more codes as necessary for your business requirements. For this example, EUR (for the euro currency) and USD (for US dollars) currency codes are added.
13. When complete, click X in the upper right corner to close the dialog and save your changes. The segment (CUR) and the element (CUR02) that you customized to deviate from the standard schema are identified by a round dot.
14. Click Save. Then return to the B2B document's details page and click Save to associate the customized schema with the document.
15. You can now use this customized schema in the EDI Translate action of your integration flow. For more information, please visit the Oracle docs. A short illustration of the customized segment follows below.
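For readers less familiar with X12: in a raw interchange, the segment customized above might look like the following line, where CUR01 carries an entity identifier code (BY, the buying party) and CUR02 carries one of the currency codes added to the code list. The values are illustrative only and are not taken from a real transaction:

CUR*BY*EUR~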


Introducing B2B in Oracle Integration (OIC)

Oracle Integration provides support for B2B e-commerce with B2B for Oracle Integration, a collective set of features inside Oracle Integration that support EDI document processing. These include the EDI Translate action and a B2B schema editor for customizing EDI data formats. B2B for Oracle Integration provides for the secure and reliable exchange of business documents between Oracle Integration and a trading partner. It works in an orchestrated integration through the EDI Translate action: when you add this action to an integration, the EDI Translate Action Wizard guides you through configuration with the EDI X12 document standard.

Business Protocols Supported in B2B for Oracle Integration

B2B for Oracle Integration supports the EDI X12 business protocol for the exchange of business documents between Oracle Integration and a trading partner. EDI X12 versions 4010 to 8010, including all document types within each version, are provided with B2B for Oracle Integration.

Access B2B for Oracle Integration

B2B for Oracle Integration is automatically included when provisioning the following Oracle Integration versions (Enterprise Edition only; it is not included with Standard Edition):
- Oracle Integration
- Oracle Integration for Oracle SaaS
- Oracle Integration Generation 2
- Oracle Integration for Oracle SaaS Generation 2

See the next blog for custom definitions. For more information, please visit the Oracle docs.

How to use the new Import/Export feature in Oracle Integration

With the August 2020 release, Oracle Integration includes a great new feature that allows you to export all the design-time metadata from your instance into a single export file stored on OCI Object Storage. That export file can then be imported into a different instance of Oracle Integration, effectively giving you a complete migration and/or backup capability. There are several options that can be chosen when configuring an export job, depending on your intended use, and we will explore those a bit later on. I have also included a Further Reading section at the bottom of this guide that points to the relevant documentation for this UI and its underlying APIs, should you wish to automate these activities in your devops processes.

This guide walks you through these major steps:
1. Create a storage bucket in OCI Object Storage
2. Create an OCI Auth Token for accessing your storage bucket
3. Configure your storage bucket for use in Oracle Integration
4. Create an Export Job in Oracle Integration
5. Create an Import Job in Oracle Integration

Create a storage bucket in OCI Object Storage

Before we get started creating export jobs, we must create a storage bucket in OCI and then configure secure access so that Oracle Integration can connect to it. Let's first go and create the storage bucket in OCI; we will come back to Oracle Integration to configure and use it. First, open the OCI console for your cloud account. Next, locate "Object Storage" in the left sidebar menu and select the sub-menu "Object Storage" (i.e. Object Storage > Object Storage). This opens a page that lists all the storage buckets in the current compartment. Note in the image below that I have already selected a compartment called "steve_tindall"; you should select an appropriate compartment for your storage buckets. Click the "Create Bucket" button and fill in the required information. I took all the defaults except for the name. From the screenshot above you can see that I created a bucket called "oicmigrations". Next, select your bucket from the list to open the bucket details screen.

Important Point! At this point we need to take note of the namespace for your bucket. You will need this later on when configuring the Oracle Integration Import/Export UI. I suggest copying this text, along with the bucket name, and pasting it into a text document for later use.

Create an OCI Auth Token for accessing your storage bucket

Next we need to create a security token, which is required by OCI when you access your storage bucket via an API. Oracle Integration is going to use the OCI Object Storage APIs to read and write files in your bucket, so we need to have this Auth Token ready to go. To create an Auth Token, click on your avatar in the top right of the OCI console and select your username under the heading "Profile" in the menu. This brings you to your profile page, where you can create your Auth Token. On the left of the page, under the heading "Resources", you will find several links including "Auth Tokens"; click on it to list your existing tokens. Click the "Generate Token" button to create a new token and give the token a description. I gave mine a description of "Import-Export Token".

Important Point! When you generate your token (see image below) YOU MUST COPY the token straight away. You will not be able to see this token or retrieve it ever again. I suggest you copy it straight away and paste it into a text document. You will need it in the next step.
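As an aside, if you script your environment setup, the bucket creation and namespace lookup from step 1 can also be done with the OCI CLI. A minimal sketch, with a placeholder compartment OCID:

# create the bucket that will hold the export files
oci os bucket create --name oicmigrations --compartment-id ocid1.compartment.oc1..exampleuniqueid

# print the Object Storage namespace you will need in the Swift URL
oci os ns get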
Configure your storage bucket for use in Oracle Integration

Now we have a storage bucket ready to go, but before we can run an export job in Oracle Integration, we need to connect our Integration instance to our storage bucket. To do this, open your Oracle Integration instance homepage and click on "Settings" in the left sidebar. Under the Settings menu you will find "Storage". This is where you configure your Oracle Integration instance to connect to your storage bucket. On this page you need to fill in all four fields as follows:

Name: Any name will do. I used the same name as my bucket, so "oicmigrations".

Swift URL: This is the URL of your storage bucket. It is made up of the following parts:
- Protocol, host and version: I used https://swiftobjectstorage.ap-sydney-1.oraclecloud.com/v1 because my storage bucket was created in the Sydney data center. The best place to find the region name is in the URL of your OCI console: look at the URL when you log into OCI and take the data center name from that. For example, you could replace "ap-sydney-1" with "us-ashburn-1", in which case your URL would be https://swiftobjectstorage.us-ashburn-1.oraclecloud.com/v1
- Namespace: This is the namespace value you copied from your storage bucket in step 1. If you failed to keep it in a text document as suggested, go back to your OCI console now and retrieve it.
- Bucket Name: This is the name you gave your bucket, in my case "oicmigrations".
My full URL: https://swiftobjectstorage.ap-sydney-1.oraclecloud.com/v1/<my-namespace>/oicmigrations

Username: This is your OCI username. If you are a federated user, you will most likely need to prepend "oracleidentitycloudservice" and a "/" to your actual username. For example, my full username for connecting to the storage bucket is oracleidentitycloudservice/steve.tindall@oracle.com. If you use a different identity federator, use its name instead.

Password: Pay attention here. This IS NOT your OCI password, as it might seem. This is where you need to provide the Auth Token you created in the previous step. Hopefully you saved your token as suggested and have it in a text document. If so, use that as your password; if not, go back to OCI and generate a new token.

Once you see the green banner saying that you have saved the storage bucket configuration successfully, you are good to move on to the next step.
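Before moving on, you can sanity-check the Swift URL and credentials with curl, since the Swift endpoint should accept the same username and Auth Token pair you just entered. The namespace and token below are placeholders; a successful response lists the bucket's objects (empty for a new bucket):

curl -u 'oracleidentitycloudservice/steve.tindall@oracle.com:<auth-token>' https://swiftobjectstorage.ap-sydney-1.oraclecloud.com/v1/<my-namespace>/oicmigrations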
Create an Export Job in Oracle Integration

Now we can finally configure the export job in Oracle Integration. Under the Settings menu in the left sidebar you will find the Import/Export UI. To create a new export job, click the "Export" button in the top right corner of the Import/Export screen. You will see a configuration drawer slide out from the right side of the page. Incidentally, this slide-out drawer is now used throughout the whole Oracle Integration UI for these types of configuration tasks. The required information to create a job is simple and quite self-explanatory: you need to provide a Job Name, choose whether to include security artefacts, and add a Description. You will also note that the storage bucket name you defined in the Storage configuration UI is listed here, indicating that it is the target location for this export job. See the image below for reference.

The security artefacts exported with your metadata are as follows:
- Security policies
- Security credentials (for connections)
- Customer certificates
- Application role memberships (for Processes)

Once you have completed all the fields, click the "Start Export Job" button. The export will take anywhere from 3 to 15 minutes to complete. Once completed, you can check that the export file was generated successfully by looking at your bucket in the OCI console. The image below shows the zip file created in my "oicmigrations" bucket, called "Steve-Blog-Export.zip". Note that during the export process you will see several temporary files written to the storage bucket. Do not delete or move these files: the export process generates many files during its operations and only at the end puts them all into a single zip file with the name of your export job. Only once you see this file can you use the export.

Create an Import Job in Oracle Integration

We are now at the final step in this guide: taking our export file and importing it into our target Oracle Integration instance. Before we can do that, though, we need to configure the target instance to read the storage bucket, just as we did for the first instance. Following the same steps, configure your target instance using the fields on the Storage page under Settings in the left sidebar. Next, click "Import/Export" in the left sidebar. You should see an empty list of import jobs (unless someone else in your company has already run an import). Click the "Import" button in the top right of the page. You will see a configuration drawer slide out from the right side of the screen. Here you can configure the options for importing an export file into your Oracle Integration instance.

The different import options allow you to control the state of the integrations and processes after the import job completes. The simplest approach is to tick all options, in which case the import job will do the following:
- Import: Import all integrations and processes and related artefacts like connections, lookups, decision rules, etc.
- Include security artefacts: All connections will be created with security credentials in place and ready to connect.
- Activate: Integrations that were active in the source instance will be activated in the target after the import completes.
- Start Schedules: Any scheduled integrations that were active in the source instance will be running (activated according to their schedule) in the target instance.

Important Point! You can execute an import without choosing Activate or Start Schedules. This is useful if you know you need to change some of the endpoints in your connections (say, from test systems to production systems) before you want them to be active. If you are using the import/export tool, for example, to move integrations from a test instance to a production instance, you might want to follow a sequence similar to this:
1. Run the import WITHOUT selecting Activate or Start Schedules.
2. Modify the connections of the recently imported integrations so that their endpoints and/or credential information point to your production systems.
3. Rerun the import, but this time WITHOUT the Import option, i.e. with just the Activate and Start Schedules options.
This third step causes the integrations in the export file (which are now already in your target instance as well) to be activated and to have their schedules started, without replacing their metadata or their updated connection credentials. Lastly, you can check the status of your jobs, for both imports and exports, by clicking on the job name in the Import/Export UI. This expands the list with details of the job and its current status. In the image below, you can see that for my job the Process component of the export has completed but the Integration component is still running. You can also see the overall status of the job next to the job name. Finally, if you need to debug a problem, you can download a log of what was exported. You will find the download icon next to the job name ONLY AFTER the job has completed; it downloads a full report of the import or export job.

Further Reading

The documentation for these new features can be found here: https://docs.oracle.com/en/cloud/paas/integration-cloud/oracle-integration-oci/import-and-export-instances.html

If you would like to automate this export and import process, so that it can be included in your automated devops processes or CI/CD pipelines, you can call the underlying APIs of the Oracle Integration server by following this documentation in the admin guide: https://docs.oracle.com/en/cloud/paas/integration-cloud/integration-cloud-auton/export-integration-and-process-design-time-metadata-instances.html

Integration Properties

How to externalize properties using Integration Properties in OIC?

Have you tried to use an integration to read files from FTP and wondered if there was a way to externalize the FTP location outside the integration? Or created an integration that sends out a notification and wanted to externalize the recipient of the email as a property passed as a parameter to the integration? We have heard from our customers about the need for an easy way to externalize properties from the integration and modify their values without editing the integration. Changing hard-coded parameter values in the integration canvas can be a time-consuming task because (i) the integration needs to be edited, and (ii) updating the appropriate action and property requires familiarity with the integration and the design tool. There are also use cases where an integration is developed by one user and run by different users; the user running it might not have privileges to edit the integration, making it difficult to change parameter values.

We have introduced a new feature that allows the integration developer to define properties that are exposed and can be modified outside of the integration designer. Integration developers can define properties of their choosing, give them meaningful names and default values, and then use them throughout the integration in the rich set of available actions. Although property values are not modifiable using integration actions, what makes them extremely useful is that the default value can be overridden outside of the design tool prior to running the integration. This feature is available for both the App Driven and Scheduled Orchestration styles.

We have seen some of our customers use lookups and schedule parameters to externalize properties. Lookups work well for defining global properties; Integration Properties lets users customize properties at the integration level. Schedule parameters are only available for scheduled integrations and are modifiable using integration actions, as they are used to maintain the Last Run Time (position) of the scheduled integration.

How to add Integration Properties

Integration properties can be added from the hamburger menu of the trigger action inside the canvas (refer to the screenshot below). Clicking "Edit Integration Properties" navigates the user to a new page where a name, description and value can be added for each property. A maximum of 10 integration properties can be defined per integration. The default values can be edited later using the same "Edit Integration Properties" menu option.

How to override Integration Property values

Integration properties can be updated from the integrations landing page using the "Update Property Value" menu item (refer to the screenshot below). Clicking the menu item opens a popup where the user can override the values of the properties defined for the integration. The behaviour for overriding integration properties depends on the integration style: for Scheduled Orchestration, the integration must be in the ACTIVATED state; for App Driven Orchestration, the integration must be in the DRAFT/CONFIGURED state.

Prerequisites

The feature is available from the Oracle Integration August 2020 release.

Hope you enjoyed reading the blog and find this feature useful.

Embedded File Server (SFTP) in Oracle Integration

We will be talking about how to leverage the embedded File Server in Oracle Integration in this blog.

Prerequisites: Basic knowledge of Oracle Integration.
Targeted audience: Oracle Integration developers or Oracle Integration users.

File Server Overview

File Server provides an embedded SFTP server within Oracle Integration, enabling organizations to focus on building integrations without needing to host and maintain a separate SFTP server.

Enable File Server in OIC

Before use, File Server must be enabled for the Oracle Integration instance. Enabling File Server is a one-time action completed in Oracle Cloud Infrastructure by an administrator with manage access to the instance. See Enable File Server.

File Server Users

The primary users of File Server include:
- Oracle Integration administrators, who use File Server to manage server settings and configure users, groups, and folders, including permissions. To administer File Server as described in this guide, you must be assigned the ServiceAdministrator role in Oracle Integration. See Oracle Integration Service Roles in Provisioning and Administering Oracle Integration and Oracle Integration for SaaS, Generation 2.
- Oracle Integration developers, who use File Server along with the FTP Adapter in integrations to read and write files.
- Oracle Integration users, who access File Server using an SFTP client. These users must be configured and enabled as users in File Server; their access is controlled by their assigned folders and folder permissions.

Use Case Background

This use case reads a file from a standalone SFTP server and writes it into the Oracle Integration (OIC) File Server. As part of this blog, you will also learn how to create a folder, grant permissions, and get the File Server details. To implement this use case, I need the standalone SFTP server (source system) details, such as the IP address/host, port number, username and password; I assume you know how to get those. I also need the details of the File Server provided as part of Oracle Integration. So let's look at how to get into the OIC File Server and gather the details required to connect, namely the IP address and port. For the username and password, you can use your OIC username and password.

How to get the IP and port information?

Log in to OIC > Home > Settings > File Server > Settings > General tab.

How to create a folder?

Log in to OIC > Home > Settings > File Server > Folders > click on home > click on users > click on the folder created with your name/email > click Create to create a folder. After creating the folder, you have to grant permissions on it: select the folder, and on the right-hand side you will see the Permissions option. Click Permissions > Add Permissions > search for the user to whom you want to grant access > select the user and click Add > select the All checkbox to give full access, or enable the specific permissions you want to grant, and click Save.

Go back to the home page and click Integrations to implement the use case by following the steps below.
1. Create a connection based on the FTP Adapter that points to the standalone SFTP server.
2. Create one more connection based on the FTP Adapter that points to File Server.
3. Create an integration based on the scheduled file-transfer pattern that reads files from the standalone SFTP server and writes them to File Server.
4. Invoke the SFTP server to read the file: select the operation and provide the input directory and file name.
5. Write the file to the OIC File Server. While writing, you have to provide the relative path of your directory starting from the /home directory (screenshot given below).
6. Edit "Map to writeFile", map the file reference from source to target as per the screenshot given below, click Validate, and then click Save to save your integration flow.
7. Define business identifiers for tracking, then click Save and close the integration.
8. Activate the integration and run it by clicking Submit Now.
9. Monitor the instance. If it is successful, you can log in to the OIC File Server using FileZilla to verify the file; otherwise, fix the error and execute it again.
10. Install the FileZilla client, open FileZilla, click File > Site Manager > New Site, select the protocol "SFTP", provide the remaining details (OIC File Server IP, port, username and password), and click Connect.
11. Go to the folder you specified in the integration; you will see the output file generated from OIC.

Documentation

Learn more about File Server here.
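As a footnote, any standard SFTP client works in place of FileZilla in steps 10 and 11. A quick check with the OpenSSH command-line client, using placeholder connection details from the File Server Settings page (the folder and file name are hypothetical):

sftp -oPort=<port> <oic-username>@<file-server-ip>
# at the sftp prompt, list and fetch the file written by the integration:
ls /home/users/<your-folder>
get /home/users/<your-folder>/output.csv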


Enhanced Integration with your Business Partners using OIC AS2 Adapter

Introduction to B2B Integration using AS2

What is B2B integration? Business-to-business (B2B) integration is the automation of business processes and communication between two or more organizations. It allows them to trade more effectively with their customers, suppliers, vendors and business partners by automating key business processes using B2B-based data exchange, including EDI. Electronic Data Interchange (EDI) is a standard means of exchanging data between companies so that they can transact business electronically. As part of B2B data exchange, customers and their partners mutually agree on:
- a document format, like X12 or UN/EDIFACT
- a transport protocol, like AS2 or SFTP

AS2 is a key transport protocol, very popular worldwide for B2B data exchange. It is a specification for Electronic Data Interchange (EDI) between organizations using the internet. AS2 uses Secure/Multipurpose Internet Mail Extensions (S/MIME), which secures data with authentication, message integrity, nonrepudiation and encryption. The transport protocol for this specification is HTTP or HTTPS for real-time communication. The AS2 adapter for Oracle Integration Cloud is being released shortly. The addition of AS2 provides a means for integrations in OIC to communicate across company boundaries using this protocol, typically in conjunction with an EDI Translate action to generate EDI-based documents.

Note: OIC currently supports the X12 data standard as part of EDI/B2B and will shortly support the UN/EDIFACT standard as well. United Nations/Electronic Data Interchange for Administration, Commerce and Transport is the international EDI standard developed under the United Nations. See here for more information on B2B for Oracle Integration.

Creating an AS2 Connection and the Adapter Capabilities

To create a connection in Oracle Integration:
1. In the left navigation pane, click Home > Integrations > Connections.
2. Click Create.
3. In the Create Connection — Select Adapter dialog, select the adapter to use for this connection. To find the adapter, scroll through the list, or enter a partial or full name in the Search field and click Search.
4. In the Create Connection dialog, enter the information that describes this connection.

Note: You can use AS2 as an 'Invoke' for an outbound AS2 connection or as a 'Trigger' for an inbound AS2 connection. If you are creating a connection expected to do both, select the 'Trigger and Invoke' option.

Configure Connection Properties: Enter connection information so your application can process requests. Go to the Connection Properties section. The "AS2 Service URL" field appears if the 'Invoke' role is selected for the connection. Note: provide your partner's AS2 endpoint in this field.
Configure Security

Oracle Integration provides two security profiles for AS2 connections:

- AS2 Basic Policy. This enables users to provide:
  - the username and password required for connecting to your partner's endpoint
  - the private key for inbound decryption and outbound signature generation
  - the public certificate for outbound data encryption and inbound signature verification

- AS2 Advanced Policy. This enables users to provide advanced configuration options, including:
  - the username and password required for connecting to your partner's endpoint
  - handling of synchronous and asynchronous MDNs
  - AS2 decryption and encryption capabilities
  - AS2 signature generation and verification
  - MDN signature generation and verification

Testing an outbound AS2 connection for Invoke: once the connection is 100% configured, use the 'Test' connection option to perform a connectivity test. All is well if you see a success message.

Use Cases

The AS2 Adapter provides the following benefits:
- Establishes a connection to an AS2-compliant B2B system to enable sending or receiving messages.
- Receives and sends business messages and MDN acknowledgements.
- Enables the user to configure outbound and inbound message delivery using the Adapter Endpoint Configuration Wizard.
- Outbound, the adapter can send a business message and consume a synchronous MDN acknowledgement. It can produce encrypted, signed and compressed business messages.
- Inbound, the adapter can consume business messages and MDN acknowledgements. It can deliver synchronous as well as asynchronous MDN acknowledgements.

How to use it in an Integration?

Once you have a connection created, let's look at how to use it in an integration.

Configuring an 'AS2 Receive' endpoint: to use an AS2 connection as a Trigger, select the required AS2 connection from the list. Once you select your required connection you will see the following options:
1. Name your AS2 endpoint.
2. Configure what type of messages you want this AS2 endpoint to handle.
3. Specify the AS2 To and From IDs.
4. Once you validate the summary, you are done with your AS2 Trigger endpoint.
5. Once the endpoint is created, map the AS2 content to EDI-Payload as an input to the EDI Translate action.
6. Once the integration is activated, you can generate the AS2 URL to provide to your partner for connectivity.
7. Here is a sample of how 'AS2 Receive' would look within an integration.

Configuring an 'AS2 Send' endpoint: to use an AS2 connection as an Invoke, select the required AS2 connection from the list. Once you select your required connection you will see the following options:
1. Name your AS2 endpoint.
2. Define the AS2 To and From identifiers.
3. Define the AS2 headers and configuration for the messages.
4. Define the MDN processing options.
5. Once you validate the summary, you are done with your AS2 Invoke endpoint.
6. And you are set to send AS2 messages as part of your integrations, once you map 'edi-payload' to 'Message Payload – Content'.
7. Here is a sample of how 'AS2 Send' would look within an integration.

Summary: With the introduction of AS2, Oracle Integration adds powerful B2B integration capabilities to its Integration Platform as a Service (iPaaS), enabling users to bring their integrations into a single platform as part of their digital modernization and re-platforming initiatives.

Note: This article was co-authored by Arvind Venugopal.

Oracle Integration Connectivity Updates August 2020

Oracle Integration continues to enhance its connectivity portfolio by building a new set of adapters as well as enriching the existing adapters with key customer-focused enhancements. Oracle Integration adapters are the cornerstone for connecting and automating application business processes: an adapter abstracts communication with diverse applications behind a single pane, simplifies integration interfaces as business resources, and supports rapid development through a configuration-driven, declarative model rather than complex coding. In the endeavour of building a robust and diverse connectivity portfolio, the Oracle Integration August release offers two brand new adapters: the PayPal Adapter and the OCI Streaming Service Adapter.

New Adapters

PayPal Adapter

The PayPal Adapter is the newest addition to the e-commerce segment of adapters and enables Oracle Integration customers to connect and automate their web store business processes. In this release the PayPal Adapter supports outbound invokes that execute the REST APIs, enabling the integration developer to perform CRUD operations on the business resources PayPal exposes. As an example, while an order is being manufactured you may want to authorize a payment that you can capture later, on successful delivery of the order. The PayPal Adapter connection page requires two sets of information to establish a connection with the PayPal environment. First, you select the environment type, i.e. Sandbox or Live, so that the adapter can connect to the appropriate endpoint. On the security/credentials front, you provide a client ID and secret; you can get those by following the instructions in the PayPal developer guide. When building integration flows with PayPal, the adapter enriches the integration developer experience by grouping the PayPal business resources by modules, objects and operations. The PayPal Adapter provides support for the Catalog Products, Orders, Payments, and Payouts modules. The invoke endpoint can be configured graphically by selecting the options prompted by the adapter through the wizard interface, in three steps: 1) configure the action type, 2) configure the operation, 3) verify the configuration in the summary.

OCI Streaming Service Adapter

The OCI Streaming Service Adapter is the newest addition to the enterprise messaging segment of adapters. The OCI Streaming service provides a fully managed, scalable, and durable storage solution for ingesting continuous, high-volume streams of data. Customers can use the OCI Streaming service as a backplane to decouple components of large systems. One common use case for OIC customers is to broadcast application events emitted by applications to external systems by having OIC integration flows publish messages to the OCI Streaming service. The adapter enables an integration architect to publish and consume messages from an OCI Streaming service stream in a simple, declarative manner. The adapter connection page needs the bootstrap server and SASL credentials for establishing the connection with the OCI Streaming service; the bootstrap servers can be obtained as described in the OCI Streaming documentation. The OCI Streaming service supports SASL-based authentication over SSL, so the integration architect needs to configure the username, password and trust store. The SASL username is a combination of the tenant ID, user ID and stream pool ID, concatenated with the '/' character, and can be represented as <tenant id>/<user id>/<stream pool id>. The trust store needs to be generated from the OCI Streaming service's browser certificate using any standard tool, like keytool.
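Since OCI Streaming exposes a Kafka-compatible endpoint, the same credential conventions can be pictured as a standalone Kafka client configuration. This is only an illustrative sketch; all values below are placeholders, not adapter configuration:

bootstrap.servers=cell-1.streaming.ap-sydney-1.oci.oraclecloud.com:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# username follows the <tenant id>/<user id>/<stream pool id> convention; the password is an OCI auth token
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="mytenancy/jane.doe@example.com/ocid1.streampool.oc1..exampleid" \
  password="<auth-token>";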
When building flows with the OCI Streaming service, the adapter greatly simplifies how you connect to streams and produce and consume messages as part of the integration flow. Looking at the feature set, the adapter supports both consuming and producing messages, though only through the invoke pattern; the ability to consume messages through the trigger pattern is on the future roadmap. The integration architect follows an intuitive configuration wizard to configure the invoke endpoint, which supports the following three operations:
- Produce a message to the stream
- Consume messages from the stream
- Consume messages from the stream by offset

The first operation enables the integration architect to produce a message to the stream by selecting details like the stream, partition, message structure, etc. The second operation enables the integration architect to consume messages from the stream and associated partition. The messages can be read from the latest position or from the beginning of the stream: "latest" fetches messages produced in the last 60 seconds, whereas "beginning" fetches all messages produced since the last read from the partition. Additionally, the user can specify a consumer group while consuming messages, which enables multiple flows to read from the partition. The last operation, consuming messages from the stream by offset, covers the exceptional scenario where the integration architect wants to read a message from a specific offset.

Enrichments of Existing Adapters

Oracle Talent Acquisition Cloud Adapter Bulk Export Support

Oracle Talent Acquisition Cloud (OTAC), previously known as Taleo Enterprise Edition, is the world's most used recruitment platform. OTAC customers conventionally use the bulk import and bulk export integration patterns to load or update data in OTAC and to extract updated data out of the OTAC instance. The OTAC Adapter reuses the Taleo Connect Client (TCC) job specification by using the wrapsoap xml file generated by the tool as a design artifact. This greatly saves time and money for customers, as they can get started with integration flows in no time. The OTAC Adapter already supports the bulk import operation to load or update data in an OTAC instance; with the August release it now also supports bulk export to get data out of OTAC. As an example, you might want to export new-hire information from OTAC and import it into your HRMS system.

Let's now look at how the bulk export integration pattern can be implemented in OIC using the OTAC Adapter. In the flow below we export the new hires from the Taleo system and import them into an HRMS system. In this scenario, a scheduled flow periodically submits the export job request to the OTAC server, gets the message keys, loops until the job is completed, and finally loads the data into the HRMS system. This can easily be decoupled into multiple flows, depending on the business requirements. The bulk export pattern in the OTAC Adapter is built on the same principle as bulk import, i.e. it uses the Taleo Connect Client wrapsoap xml artifact as its design configuration.
The configuration steps for the integration developer are as follows:
1. Generate the wrapsoap xml file from the TCC tool.
2. Configure the endpoint to perform a bulk export.
3. Load the wrapsoap xml file in the configuration.
4. Map the query parameters to filter the exported data; for example, you may want to filter the export of new hires based on a hire date after the last run.

As stated earlier, bulk export is built on the same pattern as bulk import: the integration architect needs to check/loop until the job status is completed (i.e. status 5). Once completed, the integration architect uses the correlation key to access the exported data.

File Adapter Improvements

The File Adapter is part of the technology adapters, giving customers the possibility to empower their connectivity agents with file server capabilities. As customers look to simplify, integrate and automate their business processes with on-premises systems, the File Adapter plays a pivotal role in exchanging files between cloud and on-premises systems. The File Adapter has gone through significant enrichment, allowing integration developers to build more robust file-based integrations with on-premises systems. The File Adapter now supports list, read, write, delete and move operations on files through the connectivity agent. These enhancements allow customers to implement use cases such as periodically retrieving data extracts from on-premises applications/databases available in the local file system and synchronizing them with cloud applications, cloud databases, FTP servers, and external systems using Apache Kafka, OCI Streaming Service, etc. Because files are exchanged between on-premises systems and the cloud through the connectivity agent, the size limit for a file transferred through the File Adapter is 1 GB.

REST Adapter: Consumption of OpenAPI

The REST Adapter is one of the key technology adapters in the connectivity portfolio, enabling customers to connect with diverse applications based on standards. Oracle Integration continues to adopt open standards to help customers simplify and streamline their integrations and reap the rewards of open standards. OpenAPI has indeed become a de facto standard for describing a REST API, and the Oracle Integration REST Adapter now supports consuming REST APIs described in OpenAPI specifications. This helps Oracle Integration easily integrate with applications and services that expose an OpenAPI-based API descriptor. The experience of consuming an OpenAPI catalog is very similar to that of a Swagger catalog URL: the integration architect provides the OpenAPI catalog URL on the connection page along with the other details. Once configured, when designing flows with that connection, the adapter discovers all the available resources and supported verbs. The integration architect just selects the resource and verb while configuring the invoke endpoint, and the adapter automatically configures the payload to send to and receive from the endpoint based on the OpenAPI definition. To learn more about the feature, please follow this blog.

Salesforce Adapter: Consumption of Apex for custom business logic

The Salesforce.com Adapter is a robust adapter supporting a wide array of functionality for integrating with Salesforce CRM applications, from creating bi-directional integrations and discovering business objects to providing human-readable names for the elements found in business objects for easy mapping.
Salesforce.com allows customers to extend their application by developing and deploying custom business logic as Apex classes in Salesforce.com, and these Apex classes can be exposed by customers as REST APIs that allow programmatic invocation by external clients. With the August release, the Salesforce Adapter has been enhanced to allow integration flows to programmatically invoke the custom business logic deployed in Salesforce.com. The Salesforce Adapter now enables you to consume the REST endpoints exposed through Apex REST classes on Salesforce.com and perform use cases such as Retrieve a Record, Create an Attachment, and so on. The Apex REST endpoint can be configured graphically by selecting the options prompted by the Salesforce Adapter through the wizard interface, in the following steps:
1. Configure the action on the Action page.
2. Configure the Apex REST operation to perform in the target Salesforce.com application.
3. Configure the desired operation parameters: template/query parameters, sample request payload and sample response payload, as desired.

SAP Commerce Cloud customization support

The SAP Commerce Cloud Adapter helps you connect to and create integrations with SAP Commerce Cloud applications. The adapter already supports performing CRUD operations on the standard objects in the SAP Commerce Cloud application through the invoke pattern. With the August release update, it extends these capabilities to custom attributes, custom APIs, custom operations and custom objects in SAP Commerce Cloud applications. The SAP Commerce Cloud Adapter now enables you to integrate customized use cases such as Search Based on Product Name, Get Store by Name, and so on. A quick tour of the new features:
- Access a new custom field, "productName", in a standard operation/API
- Access a new custom API/operation, "Get Store By Name", under a standard object
- Access a new custom object, "Brand"

Shopify Adapter Improvements

The Shopify Adapter enables you to design, set up, and manage digital stores across multiple sales channels including mobile, social media, web, online marketplaces, and so on. With the August release, we are announcing many exciting improvements for both inbound and outbound integration patterns. A quick tour of what's new in the August release for the Shopify Adapter:
- Custom HTTP headers: the Shopify Adapter now lets the integration developer configure the exposed custom HTTP headers for business scenarios such as fetching the presentment prices for product variants, or the currency exchange adjustment data for order transactions.
- Inbound improvements: the Shopify Adapter now exposes the store name attribute for all business events received, enabling the integration developer to identify the event source. Support has also been added for the "inventory level update" business event.
- API certification update: the adapter is now certified with the latest Shopify API version, 2020-04.

Summary

This concludes the connectivity features delivered as part of the August 2020 release. Oracle Integration continues to invest in adapters, helping customers in their digital transformation journey of integrating, automating and transforming their business processes to succeed in the digital era.

Convert Basic Routing Style Integrations to Orchestration Style

Convert Integration Style from Basic Routing to App Driven Orchestration

Basic Routing integrations are deprecated, and until now there was no way to convert a Basic Routing integration to an orchestration. With this new feature, the user can convert a Basic Routing integration to an App Driven Orchestration, as the latter has more flexibility. The conversion is done with the "Convert" action provided in the actions menu, which is explained in detail in the sections below.

Prerequisites: The minimum Oracle Integration version required for the feature is 20.36330.

What happens during the conversion of Basic Routing style to Orchestration

During the conversion, the process goes through each and every entity of the Basic Routing project model (or flow) and converts it to the corresponding entity of the Orchestration style. The mapping of the entities is as follows:
- Source -> Trigger
- Target -> Invoke
- Request Enrichment -> Invoke
- Response Enrichment -> Invoke
- Content-Based Routing -> Switch
- Faults Mapping -> Scope Action

Why Orchestration style?

The advantages of using the Orchestration style over Basic Routing:
- More flexibility in modeling the integration flow using looping actions like forEach (both serial and parallel), while, etc., which are not supported in Basic Routing.
- A better fault handling mechanism using the Scope action.
- Notification actions can be used for sending emails within the flow.
- Other actions like assign, stage write, stage read, etc. can be used within an orchestration flow.

Why "Convert"?

If customers had to convert their Basic Routing integrations to the Orchestration style manually, they would need to understand the corresponding entity or action in the Orchestration style, and the process would be time-consuming and error-prone. With this feature, the conversion is done automatically at the click of a button.

Steps to convert a Basic Routing integration to an App Driven Orchestration integration:
1. Navigate to the Integrations landing page.
2. Click the Actions (hamburger) menu.
3. Click the "Convert" option under the actions menu. NOTE: The option is not available for Basic Routing integrations that are active or locked.
4. A "Convert Integration Style" dialog is displayed.
5. Select the appropriate option and click the Convert button:
- Copy as New App Driven Orchestration Integration: creates a new App Driven Orchestration integration with the provided name, identifier and version.
- Overwrite the selected Basic Routing Integration: overwrites the existing Basic Routing integration with the App Driven Orchestration.
6. When conversion completes, the following message is displayed: CONFIRMATION Integration Integration_Name (version_number) was converted successfully.

Example: Basic Routing integration, and the converted App Driven Orchestration integration.

Current Restrictions/Limitations

The Basic Routing integrations listed below cannot currently be converted:
- Publish to OIC
- Subscribe to OIC
- Asynchronous integrations with delayed response
- Basic Routing scheduled integrations

Faulted Integrations Behavior After Conversion

There is a change in behavior for faulted integrations after a Basic Routing to App Driven Orchestration conversion is complete. When a synchronous Basic Routing integration fails, a failed tracking instance is created.
The failed integration is visible on the Tracking Details page for that integration under Home > Monitoring > Integrations. If you select View Errors from the (hamburger) menu, the following error is displayed:

Error while invoking target service "integration_name" for sync flow. The Error was raised back to the calling client.

After conversion to an App Driven Orchestration integration, the tracking instance is marked as succeeded for this integration. This is because the failure is handled in a scope fault handler. If you select View Activity Stream from the menu, the following message is displayed:

Error Message is - Error has been recovered. No message to display.

Therefore, the integration is marked as succeeded instead of failed.

User Friendly Names in Mapper

In this blog, we will look at a new integration feature, User Friendly Names in the Mapper UI, and see how the Mapper UI has changed with its introduction. The new feature will become available shortly. The source and target tree elements displayed in the Mapper UI are based on the application schemas. Many application schemas define their interfaces with extremely cryptic technical names that are not easy to correlate with the user-friendly display names you would see in the endpoint application's UI. This feature provides the option to show the display label instead of the technical name directly in the trees and the expression builder. The feature is supported for all types of integration.

Mapper UI displaying technical names (Developer mode).
Mapper UI displaying user friendly names (User Friendly mode).

Toggling the Mapper UI Between User Friendly and Developer Mode

On navigating to the Mapper screen, the Mapper is launched in user friendly mode by default, with user friendly names displayed on the screen. To view the technical names of the elements, click the 'Developer' button on the top panel. To get back to user friendly mode and view the user friendly names, toggle the mode by clicking the 'Developer' button again.

User Friendly Names in the Source/Target Tree

In this section we will look at the user friendly names for the tree elements, i.e. the root elements and child elements.

Root Elements

The user friendly names of the root elements of the different payloads help to easily correlate them with the associated invoke/trigger, the adapter used, and the type of the payload (request/response). The icon of a root element is the icon of the associated adapter. The format of the user friendly name for a root element differs based on the variable type or the associated adapter:

- Application adapter: the format is "<trigger/invoke action name> <payload type (request/response)> (<name of the associated adapter>)", for example SendInventoryAdjustments Request (Soap).
- System adapter: for the system adapters, the user friendly name is a fixed label; for example, the Schedule variable is shown as Schedule, and $self or the integration metadata is shown as Integration Metadata.
- Tracking variables: if a user friendly name (the 'Tracking Name' field) was entered for the tracking variable in the Tracking UI (Business Identifiers For Tracking), that is the user friendly name of the variable, for example My Business Identifier. If the 'Tracking Name' field is not populated, the system constructs the user friendly name in the format 'Tracking Variable 1/2/3', for example Tracking Variable 1, Tracking Variable 2, Tracking Variable 3.
- Other variables: for all other variables, i.e. simple variables and the root elements of complex variables, the user friendly name is the name with which the variable was created, without the '$' prefix, for example counter or studentName.

Child Elements

The user friendly names of the child elements in the source/target are derived from the associated schema files. If the schema files are generated with user friendly names for the elements, the elements are rendered with those names in user friendly mode in the Mapper.
If the schema files do not define user friendly names for the elements, the child elements are displayed with their technical names in both user friendly and developer mode.

The attributes of schema elements are rendered in the Mapper UI with an '@' prefix followed by the attribute name. With user friendly names introduced, the '@' prefix is not appended in user friendly mode; on turning developer mode on and viewing the technical names, the attributes are displayed with the '@' prefix again. The user friendly names also do not include the namespace prefix, so the 'Show Prefix' option in the View menu of the Mapper UI (which shows element names with their prefixes) is disabled while the Mapper UI is in user friendly mode and enabled once the UI is switched to developer mode.

Searching For Data in the Source/Target Tree

The source/target tree can be searched with a substring of either the user friendly or the technical name of an element, in both modes of the Mapper UI. For example, if the Mapper UI is in user friendly mode, and there exists an element whose user friendly name is 'BEG: Beginning Segment for Purchase Order' and whose technical name is 'BegSegPO', a search for 'SegPO' highlights the element irrespective of the current mode.

User Friendly Expression for Mapping

Just as the elements' technical names are simplified by their user friendly names, the mapping expressions created are represented in a simplified form as a 'user friendly expression'. The user friendly expression is simple and easy to read and understand. It is a UI-only entity: the user friendly expression for a mapping is displayed in the Mapper UI, but it does not get saved in the XSL file. This can be seen by navigating to the 'Code' tab of the Mapper UI after creating a mapping: the 'Code' tab displays the XSL file that will be generated behind the scenes, and it contains only the technical mapping, not the user friendly expression. Hence the maps work at runtime exactly as they always have. At design time, the Mapper UI displays the mappings as user friendly expressions in user friendly mode and as technical mappings in developer mode. The user friendly expression for a mapping is created by the system when the mapping is constructed in the Mapper UI, based on the user friendly names of the components in the mapping.

Example: Consider the mapping 'concat($EDI-Translate/nsmpr0:executeResponse/ns31:TranslateOutput/ns31:translation-status, $EDI-Translate/nsmpr0:executeResponse/ns31:TranslateOutput/ns31:tracking-info)'. The mapping refers to a concat function whose parameters are two elements from the payload. The user friendly expression for this mapping is 'concat( translation-status, tracking-info)', where:
- 'translation-status' is the user friendly name of the element '$EDI-Translate/nsmpr0:executeResponse/ns31:TranslateOutput/ns31:translation-status'
- 'tracking-info' is the user friendly name of the element '$EDI-Translate/nsmpr0:executeResponse/ns31:TranslateOutput/ns31:tracking-info'

Expression Builder

The Expression Builder section of the Mapper UI displays the mapping for the selected target element.
The Expression Builder also has two modes:

User Friendly Mode:

Developer Mode:

The user friendly mode of the Expression Builder displays the mapping as a user friendly expression. The Expression Builder is view-only in this mode, hence the 'Save', 'Erase', and 'Shuttle' buttons are not available. To toggle the Expression Builder between the two modes, click the toggle button on the right side of the Expression Builder. On navigating to the Mapper screen and selecting a target element, the Expression Builder launches in user friendly mode by default. To edit an existing mapping manually, toggle the Expression Builder to developer mode.

Other Sections of the UI

The other sections of the Mapper UI that display source/target elements stay in sync with the main Mapper screen. For example, the 'Test' tab, where the root element of each source is displayed as a tab header, and the 'Filter' menu, where one of the criteria for filtering tree data is 'Source name' (showing the root elements of the different sources), both display user friendly names when the Mapper UI is in user friendly mode and technical names when it is in developer mode.
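As a toy illustration of the search behavior described above – this is not OIC code, just a Python sketch of the matching rule, with the example element from this post:

# Toy sketch of the Mapper search rule: a query matches an element if it is
# a substring of either the technical name or the user friendly name,
# regardless of which mode the Mapper UI is currently in.
def matches(element: dict, query: str) -> bool:
    q = query.lower()
    return (q in element["technical_name"].lower()
            or q in element["friendly_name"].lower())

beg = {"technical_name": "BegSegPO",
       "friendly_name": "BEG: Beginning Segment for Purchase Order"}

assert matches(beg, "SegPO")      # matches the technical name
assert matches(beg, "Beginning")  # matches the user friendly name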



August 2020 Update & New Release Cycle

Well it is update time again, and we have new features for you and a new release cycle. New features include a new Home Page, Data Stitch and new adapters. The new release cycle will provide a more predictable update for your instances.

Release Cycle

We are introducing a new release cycle into Oracle Integration. We will be releasing new functionality every 3 months, in February, May, August and November. We are doing this to make it easier for customers to know when their instances will be updated and what new features they will be getting. We chose February, May, August and November because these are the months most Fusion Apps customers update their Fusion Apps instances. We will be doing additional releases to provide security patches to our platform, but we will generally not be providing new functionality except at the quarterly update.

We are working with OCI customer notification services to improve our ability to warn you of updates ahead of time. We are sending emails and adding notifications in the OCI console. We know that emails get lost because our inboxes are too full or the addressee has left the company; in addition, a lot of customers go directly to the OIC console. To accommodate mis-routed or overlooked emails, and to support customers who rarely go into the OCI console, we are working on enhancements to the OIC console to provide information about upcoming or recently completed updates. This in-OIC notification will not be available for the August release but will be in a future release.

New Features

There are several new features in the August release.

New Home Page

When you log into an Oracle Integration instance, the first page you see is the OIC Home Page. This page helps you navigate to the areas of the product you need, provides relevant metrics and status, and shows your current tasks and actions. We have redesigned the Home Page for Generation 2 instances to provide the most relevant information as well as to expose new functionality in the product. Read more about it in this blog by my colleague Michael Meiner: Jump-start your Integrations with the new Oracle Integration Home Page

Data Stitch

Customers often need to manipulate an existing message. The Map activity replaces the target message with a new message, which does not allow us, for example, to accumulate results. The new Data Stitch will allow us to iteratively update a message. There are a number of blogs available for you to get a preview of what is coming: Use Data Stitch to simplify integrations; Data Stitch: Append and Assign for repeating elements; Data Stitch Assign operation for Elements with Attributes; Use Global Variables and Data Stitch to log request payloads; #749 OIC Feature Flag - Data Stitch

Mapper Enhancement

Many application schemas define their interfaces with extremely cryptic technical names that are not easy to correlate to the user-friendly display names you would see in the end point application's UI. The new friendly names feature in the mapper provides the option to show the display label instead of the technical name directly in the trees and expression builder. You can read more about it here: User Friendly Names in Mapper

Map My Data is Deprecated

When ICS was released, we had a simple integration style called Map My Data that supported a single source and target. Over time we have seen customers demand more and more complex features, which we provide through the Orchestration styles.
We are now deprecating the Map My Data style and providing a tool to migrate your existing Map My Data patterns into Orchestrations, so you can take advantage of all the features of the Orchestration style. You can read more here: How to convert Integration from Basic Routing style to Orchestration

Integration Properties

There are often magic numbers or strings in our integrations that represent configuration items such as filenames & locations or email addresses. The new Integration Properties capability allows the developer to define these constants as properties that can subsequently be edited directly from the integration menu, without having to find them in maps or expression editors. You can read more about it here: Integration Properties

Adapter Updates

The following new adapters or updates to adapters will be part of this release and can be read about in Oracle Integration Connectivity Updates August 2020:

New Adapters: OCI Streaming Service Adapter; Paypal Adapter

Adapters with enhanced features: Oracle Talent Acquisition Cloud Adapter Bulk Export Support; File Adapter improvements; REST Adapter: consumption of OpenAPI; Salesforce Adapter: consumption of Apex for custom business logic; SAP Commerce Cloud customization support; Shopify Adapter improvements

B2B Update

As part of providing improved B2B capability in OIC, we have added support for the AS2 transport protocol. AS2 is a key transport protocol that is very popular worldwide. Read all about it in this blog: Enhanced Integration with your Business Partners using OIC AS2 Adapter

Released Since Last Major Update

Although not strictly part of the August release, we have in recent weeks announced the availability of some key new features in Oracle Integration. File Server avoids the need for you to have a separate SFTP server in the cloud by providing one with OIC. Insight provides your business leaders with real-time visibility of the transactions flowing through their systems. You may want to re-read the announcements for these services: Leveraging Oracle Integration File Server For File Based Integrations; Empower your Business Users with Integration Insight


PeopleSoft Integration using Oracle Integration – Part 2

Part 2 is a continuation of the blog Integrating PeopleSoft with Oracle Integration - Part 1. In this series, Jin Park shares his experience with integrating PeopleSoft with cloud applications using Oracle Integration. In this part we cover more details on using Oracle Integration (for example creating connections, creating an integration flow, and validating it through Oracle Integration). Here is part 2 of the series from Jin Park.

Now we're ready to create connections and an integration in Oracle Integration. Oracle Integration is capable of hybrid integrations. Therefore, you can use Oracle Integration for integration scenarios such as SaaS-to-SaaS or SaaS-to-on-premises. You may need to set up a virtual private network (VPN) between your data centre and Oracle Integration using VPNaaS (VPN as a Service from Oracle), or install the Oracle Integration connectivity agent inside your organisation's network. It really depends upon the network / security policy of your organisation. PeopleSoft is normally located behind the corporate firewall, so make sure you've got the VPN working or a connectivity agent installed. Make sure that you've got a successful web service call from the public internet using a SOAP / REST API test client such as SoapUI or Postman. That saves lots of time debugging connectivity issues later. Don't forget to enable web service request logging. PeopleSoft provides excellent internal tools to monitor web service requests, which I will explain in the next section.

Enable request message logging from PeopleSoft

Visit the NavBar again in the top right-hand corner of PeopleSoft. Go to the Navigator > PeopleTools > Integration Broker > Integration Setup > Services. Search by service name CI_CONTACT_INFO. Select operation CI_CONTACT_INFO_F.V1. Select the Routings tab. Set *Log detail to Header and Detail as below. This enables web service request message logging, which helps you to debug issues. Go to Navigator > PeopleTools > Integration Broker > Service Operation Monitor > Monitoring > Synchronous Services in PeopleSoft. Search for the request message for the CI_CONTACT_INFO_F.V1 operation as below. Select the Details link and then View XML from the details page. It shows the XML content of the request message.

Good to draw something simple before jumping in!

It's actually a good idea to start with a simple integration flow and extend it further with more details as necessary later. Here is my simple flowchart showing what I want to achieve: it mimics an email address update from the PeopleSoft contact information page. In PeopleSoft, contact information is searched for by business unit and name, etc. The user then selects the actual contact information and takes action on it. To make things easy, I'm going to use a fixed business unit and name to search contact information, and to skip contact creation if the information is not found. I also assume there are no name duplicates in contact information. In real world scenarios, we must plan for duplicates! In the next section, I'll show some hands-on steps.

Let's Now Create and Run an Oracle Integration

Log in to Oracle Integration and go to the Connections page. Define a SOAP Adapter connection with the Invoke role for the PeopleSoft SOAP web service. You may see the following error when using the runtime WSDL URL from PeopleSoft. In that case, zip up the WSDL and schema files downloaded from the runtime WSDL URL into one file and upload it on the Connection Properties page.
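To make the connectivity check suggested above repeatable, you can script it. Below is a minimal sketch in Python; the gateway URL, credentials, SOAPAction value, and body are placeholders – take the real values from your runtime WSDL and your PeopleSoft Integration Broker configuration.

# Hypothetical pre-flight check: POST a SOAP envelope to the PeopleSoft
# listening connector from outside the corporate network. All values below
# (URL, SOAPAction, credentials, payload) are placeholders for your own.
import requests

url = "https://psoft.example.com/PSIGW/PeopleSoftServiceListeningConnector"
envelope = """<?xml version="1.0"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <!-- operation payload copied from the runtime WSDL goes here -->
  </soapenv:Body>
</soapenv:Envelope>"""

resp = requests.post(
    url,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "CI_CONTACT_INFO_F.V1"},  # placeholder action
    auth=("PS_USER", "PS_PASSWORD"),
    timeout=30,
)
print(resp.status_code)
print(resp.text[:500])

With logging enabled as described above, the request should also show up in the Synchronous Services monitor.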
Continue to configure connection security and the agent. Configure the Salesforce.com adapter also. All the adapters are now ready to go.

Select an App Driven Orchestration integration style. Enter SFDC_PSFT_SYNC as the integration name and complete the other fields. Providing a meaningful package hierarchy is another simple best practice for managing the integration later (for example, to import / export the integration). Add the SFDC_CONTCT_INFO Salesforce Adapter as a trigger connection in the integration. Name the endpoint, select the outbound WSDL and click Next. Deselect Send a Response and click Next. Review your selections, and click Done. The Salesforce Adapter trigger configuration is complete.

Select the find operation CI_CONTACT_INFO_F, and click Next. Select No for Configure Headers and click Next. Review your selections on the Summary page, and click Done. The find contact operation is now configured. Configure the mapping elements: SetID with "SHARE" and NAME1 with concat(FirstName, " ", LastName). We can define SetID with an assign action to be reused later in the flow; I'll explain that later.

Add a switch action after the find contact operation and set the condition to count(CONTACT_ID) > 0.0, which means "when contact information exists – one or more". CONTACT_ID comes from the search result and is used later in the flow. I can use the assign action to define a variable once and refer to it, or reassign its value, later. Therefore CONTCT_ID and SETID are mapped later as $CONTCT_ID and $SETID.

Add a get operation (the first red circle below, CI_CONTACT_INFO_G) and an update operation (the second red circle below, CI_CONTACT_INFO_UD) to the flow and complete the required mapping. Add an assign action (the first red rectangle below) to hold values from the get operation results, which are used by the update operation. Using an assign action is highly recommended for reusability and for reducing unnecessary mapping. Therefore, define it for most of the values from the get operation results. Complete the mapping and activate the integration. When you activate the integration, it shows the endpoint to use. Salesforce.com invokes this integration using this endpoint when an update to contact information occurs.

Salesforce.com supports calling external web services by invoking callouts using Apex. This can be easily achieved by following the steps below; please refer to the Salesforce.com documentation for more information: generate classes from the WSDL generated by Oracle Integration; add a new Remote Site; create a stub for the external service; invoke the callout.

Before changing the email address, let's get the contact information for Bruce from PeopleSoft. I'm not using a real email address, so don't use this email address! You'll notice the email address is the name of a movie starring Bruce Lee. Go to Salesforce.com, go to Contacts, and search for Bruce. Select Edit for Bruce. Change the email address of Bruce (another movie by Bruce) and save it. The moment the update completes successfully, the changed email address is shown in the contact list. At the same time, Salesforce.com invokes the web service from Oracle Integration to notify it about the changed email address.

Return to PeopleSoft and search for Bruce again. It shows the updated email address! Check the integration instance in Oracle Integration. Go to Monitoring > Integrations to see the last message as below. In case something is wrong, a value of more than zero is shown for Errors.
Click Success under the number, or go to the Tracking page. The Tracking page shows the status of the integration instance. Click First Name. The full flow of this integration instance is shown visually as below.

I'm sure we can apply the same pattern for real-time sync between PeopleSoft HCM and Oracle HCM (or any HCM SaaS application). For PeopleSoft, we also have other typical use cases involving import / export files for synchronizing with other applications. I hope to make that my next blog on PeopleSoft integration with Oracle Integration.

This blog was first published here on www.redthunder.blog by Jin Park, our Integration Champion from Australia.


Jump-start your Integrations with the new Oracle Integration Home Page

When you log into an Oracle Integration instance, the first page you see is the OIC Home Page. This page helps you navigate to the areas of the product you need, provides relevant metrics and status, and shows your current tasks and actions. We have redesigned the Home Page to provide the most relevant information as well as to expose new functionality in the product. The new Home Page will become available shortly.

Note: What we describe here is the new Home Page for Oracle Integration Generation 2 instances. For older Oracle Integration instances, the Home Page will remain the same for now (until, of course, your instance is upgraded to Generation 2).

Here's a preview of the new Home Page:

The first thing you will notice is the ability to try a recipe. Recipes are pre-assembled solutions to help jump-start your integration development. You can try out one of the recipes highlighted here, or search for others. More about recipes later. Next is the Summary section where you are presented with the following:

My Tasks gives you information on your assigned open and total Process tasks. See here for more information on how to view tasks and manage your work.

Processes tells you how many process instances are in progress and completed. Click on this card to access your Process Applications. See here for more information on automating processes with Oracle Integration.

Integration gives a view of the number of messages processed and how many activations failed. Click on it and you will get a more detailed dashboard including messages received / failed, Agent health, active integrations, and daily and hourly history of successful and failed instances. See here for more information on administering Oracle Integration.

Insight visually shows you the number of activated models, deactivated models and total models. Clicking on the card will bring you directly to the models. See here for more information on using Integration Insight for collecting and collating business metrics.

Now let's move further down the page for the highlight! Accelerators and recipes are a great way to get jump-started on building your integrations. Accelerators are run-ready business integrations or technical patterns you can configure and activate. Recipes are starter templates that give you a head start. Some of the recipes exist today as separate downloads via Oracle Marketplace and other repositories. We have harvested Oracle-built recipes and made them available as part of Oracle Integration instances. So our new Home Page brings these right to your doorstep!

For instance, say you need to perform contact sync between Oracle Sales Cloud and Service Cloud. There's a recipe for that. Or say you want to increase productivity by having activities automatically created from IoT Cloud Service to Field Service Cloud. There's a recipe for that. Or say you need to retrieve issue data from Jira Service Desk and synchronize it with case data in NetSuite. There's a recipe for that. You simply select the desired recipe card from the home page. Say you want to streamline the entire opportunity-to-quote-to-order process with Oracle Engagement Cloud and Oracle CPQ Cloud. You guessed it, there's a recipe for that.

Now, install the recipe by hovering over the card and clicking on the + sign:

The recipe is now installed. Hover over the card now, and you will see options to configure, activate and delete. Now click configure and configure the connections to point to your applications.
Then, activate the recipe and you will see the integrations associated with the recipe. These are fully editable, so you can edit the integrations and associated mappings to fit your specific business needs. Now we have successfully installed and configured our recipe!

Accelerators take this one step further. These are run-ready integrations which can provide business value out of the box, with no customizations on your part. You will notice technical accelerators, which provide technical solutions. For example, say you want to be alerted via text or email when a failure occurs in one of your integrations. There's a technical accelerator for that.

Business Accelerators solve a specific business need. These are more than samples; they can be used as end-to-end solutions. For instance, say you need to automate the complete order-to-cash process between Shopify and NetSuite by synchronizing customers, products, orders, fulfillments and payments. There will be a Business Accelerator for that. We will be introducing Business Accelerators on the Home Page soon, so stay tuned!

Lastly, if you are looking for the dashboard-level information available in the prior Home Page, click on the Integration card in the Summary section (shown above). Our redesigned Home Page makes it easier to access the functionality you need in Oracle Integration. Whether you need a summary status of your integrations, need to access your Insight models, or need to access your process tasks, the Home Page has you covered. Plus, you can search our repository of recipes and accelerators to find one that fits your integration needs, and you will be that much closer to implementing your business solutions.

We hope you will enjoy using our redesigned Home Page for Oracle Integration!



Leveraging Oracle Integration File Server For File Based Integrations

Introduction

While most enterprises want to leverage modern API-based integration technologies to automate their business processes, they also need File-based integration to enable the exchange of data through files. Such enterprises often require a secure file storage solution to exchange files with their trading partners, vendors and suppliers. It is very common for these enterprises to use such a file storage solution with File-based integration to schedule and automate the process of reading and transforming files before exchanging them with multiple systems. There are multiple use cases where enterprises rely on File-based integration. Learn more about the use cases and File-based integration patterns supported by Oracle Integration in this blog from Michael Meiner.

Introducing File Server In Oracle Integration (OIC)

Oracle is introducing a new functionality called File Server, which comes embedded within Oracle Integration (OIC) and offers significant advantages to enterprises that need to build and roll out File-based integrations. While customers can provision an SFTP server on an Oracle Cloud Infrastructure (OCI) compute instance today, they will now have the option to leverage an Oracle-managed SFTP server that is tightly coupled with OIC. File Server is available at all regional data centers on Oracle Integration Generation 2. Learn more about all the features available in Oracle Integration Generation 2 here.

Feature Summary

Embedded SFTP server within Oracle Integration (OIC): File Server provides a standard SFTP interface that can be used by any SFTP client to access the files. It also exposes a REST API; you can access the REST API documentation here. Also, the integrations within OIC can read and write files from File Server using the OIC FTP adapter.

Free storage – 500 GB per service instance: Each File Server service instance comes with 500 GB of storage which can be used by enterprises to store any number of files. File size is not limited when uploading or downloading files from an SFTP client, although it is subject to allocated storage limits. However, files accessed in an integration are subject to Oracle Integration limits: inline message payloads (such as an XML string or a JSON string) are limited to 10 MB, and files and attachments (such as SOAP attachments) can be up to 1 GB.

Powerful web admin console to manage and configure the server, users and groups:

Configure File Settings – The Settings page can be used by the File Server administrator to monitor overall health, and to configure other settings like the default home folder for users, and security.

Configure Users and Groups – This page is used to configure a user's default home folder and public key for authentication. You can also enable or disable File Server access for specific users on this page.

Manage Folder Permissions – File Server administrators can manage permissions for users and groups on specific folders. You can also choose whether you want the subfolders to inherit the permissions or not.

Manage Custom Folders – The Folders page can be used to manage custom folders and set permissions on these folders.

Multiple connectivity options to read/write files from File Server:

Oracle Integration via FTP adapter – You can connect to File Server from an integration through an FTP adapter. The picture below shows a simple integration which reads a file from a standalone SFTP server and writes it to the File Server embedded within Oracle Integration.
SFTP client or SFTP command line – Users can also connect to File Server through an SFTP client or the SFTP command line interface (see the scripted sketch at the end of this post). You can learn more about File Server connectivity here.

REST APIs – You can learn more about the File Server REST APIs here (link to API documentation).

Use Cases

#1: SFTP server lift-and-shift – Enterprises which are hosting an SFTP server in the cloud to store files for integration can move their files to File Server within Oracle Integration (OIC). They can then redirect their SFTP adapter to point to OIC File Server.

#2: Communication with trading partners, customers and suppliers – Leverage File Server to receive and store information (POs, invoices, shipping info, etc.) from trading partners via SFTP. This use case also applies to B2B/EDI.

#3: Integration for a customer's SaaS applications – This is a scenario where an enterprise has SaaS (or on-premises) applications that export (bulk) data to a file on an SFTP server. OIC can then pick up the file, translate it, and send it to the target system.

Documentation

Learn more about File Server here.
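As a quick sanity check of the SFTP interface described above, here is a minimal sketch using Python's paramiko library (one SFTP client among many – an assumption, not a requirement). The host, username, key path, and folder paths are placeholders; take the real values from the File Server settings and user pages.

# Upload a file to File Server and list the home folder over SFTP.
# Host, user, key path, and folder paths below are placeholders.
import paramiko

transport = paramiko.Transport(("fileserver.example.oraclecloud.com", 22))
key = paramiko.RSAKey.from_private_key_file("/path/to/id_rsa")
transport.connect(username="oic.user@example.com", pkey=key)

sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("order.csv", "/home/users/oic.user/order.csv")  # upload
print(sftp.listdir("/home/users/oic.user"))              # verify it arrived
sftp.close()
transport.close()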


Promoting Your Code

A few years ago my wife and I were honored to be invited to our friend's promotion ceremony. Through hard work, dedication and outstanding leadership he was promoted to a full Colonel in the US Air Force. Fortunately, promoting integrations between environments is a lot less work.

Environments

We usually have multiple environments for our code. Some possible environments are listed below: Development for building integrations; Test for testing integrations; Production for running integrations. Other possible environments include QA for final acceptance testing and Load Test for performance testing.

Promotion Requirements

When promoting code between environments, some key requirements are worth bearing in mind. The same code should be deployed to production that was tested - no changes. This is important because we want to deploy to production exactly what we tested in development and test/QA. Configuration data should be separate from code - each environment will have its own unique endpoints. This is important because we don't want to accidentally store production data in test or development systems.

OIC Features to Support Code Promotion

OIC allows individual integrations to be exported. Each integration includes dependent artifacts such as lookup tables, JavaScript libraries and connection types. Note that it does not include connection endpoints or credentials, as these will vary between systems. To simplify deployment, multiple integrations can be bundled into a package.

Connections are identified by a name and a type. On import, if a connection of the same name and correct type already exists, then OIC will use that connection for the integration. If a connection of the correct name and type does not exist, then it will be created, but it will require configuring, as no configuration data is carried across in the integration. This is important because it precludes us from accidentally using dev or test connections in a production environment. APIs exist to support export and import, as well as configuration of connections and updates of lookups.

OIC supports versioning of integrations. Once an integration has been promoted to another environment, if it needs to be changed in development then a new version should be created. Changes to the numbers before the first decimal point are major version changes, and integrations with different major version numbers can be active at the same time. Changes to the numbers after the first decimal point are classed as minor version changes, and only a single minor version of a major version can be active.

The Promotion Process

The code promotion process is illustrated in the diagram below. A package is exported from development and then the same package is used for both test and production environments. The configuration of connections is separated out, and the connections can be configured manually or through an API.

Continuous Integration & Continuous Delivery

Using the Oracle Integration API it is possible to include OIC packages and integrations in a CI/CD pipeline (see the sketch after this post's summary). For an example using Jenkins please check out this CICD Implementation for OIC article by Richard Poon.

Cloning an Environment

Note that it is possible to create an exact copy of an OIC environment by using the Export and Import Oracle Integration Design-Time Metadata capability of OIC. Unlike the code promotion discussed above, this approach clones the connection endpoint and credential information as well as the integrations. This is NOT FOR CODE PROMOTION.
This is to allow you to create an exact copy of an environment for historical backup, or to clone an environment with the same endpoints, for example for load testing or for diagnosing a production problem. If you have these requirements then I have created some wrapper scripts to simplify the process that can be deployed as a docker image.

Summary

Use the export and import package capability of OIC to promote code from lower environments to higher environments. Use the export/import design-time metadata to create exact copies of an environment at the same level; do not use it for code promotion from dev to test to production. All of this can be scripted and embedded into a CI/CD pipeline.
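As a sketch of what such a pipeline step could look like, the snippet below exports a package from a development instance and imports it into test using the Oracle Integration REST API. The /ic/api/... paths, parameters, and authentication scheme are assumptions for illustration – verify them against the OIC REST API documentation for your version before use.

# Sketch: promote a package from dev to test via the OIC REST API.
# Endpoint paths and auth are illustrative assumptions - check the docs.
import requests

DEV_URL, DEV_AUTH = "https://dev-oic.example.com", ("dev.user", "password1")
TEST_URL, TEST_AUTH = "https://test-oic.example.com", ("test.user", "password2")
PACKAGE = "com.example.hr"  # placeholder package name

# 1. Export the package archive (.par) from the development instance.
export = requests.get(
    f"{DEV_URL}/ic/api/integration/v1/packages/{PACKAGE}/archive",
    auth=DEV_AUTH)
export.raise_for_status()

# 2. Import the same archive into the test instance. Connections keep their
#    names, but their endpoints and credentials must still be configured
#    per environment (manually or via the connections API).
imported = requests.put(
    f"{TEST_URL}/ic/api/integration/v1/packages/archive",
    auth=TEST_AUTH,
    files={"file": (f"{PACKAGE}.par", export.content)})
print(imported.status_code, imported.text)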



Upgrade your Oracle SOA Suite to modern Oracle Integration in the Cloud – Webcast July 1 2020 15:00 CEDT (Berlin), 14:00 BST (London)

Are you looking to move integrations to the cloud, or just to modernize your SOA Suite platform? Join a free webcast by Oracle Platinum Partner eProseed, on 1st July, to find out how customers have used Oracle's modern integration solutions to dramatically increase integration agility, as well as lowering costs.

Requests for new integrations are arriving faster than ever! These may be coming from: SaaS applications which are replacing traditional ERP, such as E-Business Suite; new services for call-center systems, mobile applications and chat bots; cloud business models driving the need to work more closely with customers and partners. Furthermore, consumers need access to your data and services 24/7 with the highest level of security.

Leave your Data Center behind - SOA Suite Uplift

Many customers last upgraded their Oracle SOA Suite platforms when 12c became mainstream in 2016. These platforms are likely running on obsolete hardware, which is costly to support and more likely to fail than cloud servers. Others have "Data Center Exit" strategies, so have a CxO imperative to migrate to the cloud. What options are open to you?

Integration Cloud (OIC): Oracle's state-of-the-art strategic platform with pre-built integrations, fully integrated with 3rd party applications, fully managed by Oracle.

SOA Cloud: SOA Suite 12.2.1.4 deployed on second generation Oracle Cloud Infrastructure, customer-managed so you have the control you are used to today.

SOA Suite on Kubernetes: SOA Suite 12.2.1.4 deployed on Oracle Container Engine for Kubernetes (OKE), or a container platform of your choice – fully customer-managed, highly portable and highly configurable.

Uplift at your own pace - you choose! With out-of-the-box adapters in both OIC and SOA it has never been easier to run a hybrid platform, allowing modernization at a pace to suit your business. Finally, the Bring Your Own License model gives you the option to preserve your existing investment in Oracle Integration, while benefiting from cloud levels of performance, security and reliability.

Upgrade your Oracle SOA Suite to modern Oracle Integration in the Cloud – July 1 2020, 15:00 CEDT (Berlin), 14:00 BST (London)

In this 30-minute webcast you'll learn from eProseed about the merits of each approach and the benefits their customers achieved. This will help you understand how to evolve your existing Oracle integration platform into a cloud-centric, highly flexible, and cost-effective solution. For details please visit the registration page here.



Oracle Integration (OIC) Generation 2 is now available in all cloud tenancies

Oracle Integration (OIC) Generation 2 Now Available – All Cloud Tenancies – All Data Centers

We're delighted to announce the availability of Oracle Integration (OIC) Generation 2, which runs natively in Oracle Cloud Infrastructure Generation 2, for all Universal Credit Cloud Tenancies. Continue reading to learn about the exciting new functionality that Oracle Integration Generation 2 provides.

Benefits of Oracle Integration (OIC) Generation 2

Natively integrated with the Oracle Cloud Infrastructure Console: Simplifies creating and managing user accounts, permissions, and Oracle Integration instances.

Integration Insight: Integration Insight in Oracle Integration dramatically simplifies the process of modeling and extracting meaningful business metrics, allowing business executives to understand, monitor, and react quickly to changing demands.

File Server: File Server provides an embedded SFTP server within Oracle Integration, enabling organizations to focus on building integrations without needing to host and maintain a separate SFTP server. See Availability of File Server by Region for the latest updates.

Support for Oracle Cloud Infrastructure (OCI) Compartments: Organize your Oracle Integration instances into OCI compartments (for example, separate dev, test and production compartments). This lets you separate access to instance creation and control instance-level management by department.

Oracle Cloud Infrastructure Identity and Access Management (IAM): Provides a rich permission model that gives Oracle Integration users fine-grained access to Oracle Integration instances – for example, manage (create, edit, move, …) or view only. Enables you to provide view access to instances per compartment (for example, provide development and test view access but not production).

Support for tagging: Allows you to define keys and values, and then associate those tags with resources. Helps you organize and list resources based on your business needs, and lets you use tags in searches and in policies to assign access to tagged resources.

Updating Oracle Integration instances: Allows you to move an instance to a different compartment. Lets you change the edition, license type, and number of message packs of an instance.

Service instance Lifecycle Management (LCM) capabilities: Lets you create, edit, and delete instances using various methods such as Terraform, command line interfaces (CLIs), and rich APIs.

Integration with the Oracle Cloud Infrastructure Monitoring service: You can view charts that show the total number of message requests received, message requests that succeeded, and message requests that failed for each instance in Oracle Integration. Additional metrics are coming soon.

Compartment Quotas: Compartment quotas give better control over how resources are consumed in Oracle Cloud Infrastructure, enabling administrators to easily allocate resources to compartments using the Console. OIC administrators can limit the number of OIC instances that can be created in a compartment.

Automating with Events: You can create automation based on Oracle Integration state changes using event types, rules, and actions.

Do I get Oracle Integration Generation 2 with my Oracle Integration for Oracle SaaS?

Oracle Integration for Oracle SaaS accounts created on or after February 11, 2020 support Oracle Integration Generation 2. Older accounts should get Oracle Integration Generation 2 soon. For more details, please contact your Oracle Sales Representative.

What happens to my existing Oracle Integration (Gen 1) instances?
You access your Oracle Integration instances through the OCI console, following the steps in Access Oracle Integration from the Oracle Cloud Infrastructure Console. Over the next few months, Oracle will upgrade your existing Oracle Integration instances. The upgrade will be done in waves, and you will be notified in advance when your instances are scheduled for upgrade. Please review the Upgrade to Oracle Integration Generation 2 guidance page. Got More Questions? Peruse the Oracle Integration Generation 2 documentation. Read more about the key concepts and terminology in Oracle Cloud Infrastructure.
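For the lifecycle management capabilities mentioned above, here is a minimal sketch using the OCI Python SDK. The compartment OCID is a placeholder, and the client and method names should be double-checked against the current SDK reference.

# List the Oracle Integration instances in a compartment with the OCI
# Python SDK (pip install oci). The compartment OCID is a placeholder.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
client = oci.integration.IntegrationInstanceClient(config)

response = client.list_integration_instances(
    compartment_id="ocid1.compartment.oc1..example")
for instance in response.data:
    print(instance.display_name, instance.lifecycle_state)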



Empower your Business Users with Integration Insight

We are pleased to announce the immediate availability of Integration Insight for Oracle Integration Generation 2. This is a truly differentiated offering from the Oracle Integration team in the EiPaaS market. Integration Insight is releasing as a new Oracle Integration (OIC) Generation 2 feature and is now available in all data centers worldwide.

Why Integration Insight?

Today's competitive market demands that stakeholders understand, monitor, and react to rapidly changing conditions. Businesses need flexible, dynamic, and detailed insight – and they need it as it happens. Collecting, storing, visualizing, and reporting on business metrics in real time has traditionally been a costly undertaking, requiring significant investment of capital and engineering resources. Software is typically developed to meet the unique needs of business applications. In today's sophisticated enterprise software environment, many businesses use multiple integrated systems, provided by a variety of vendors, further complicating the task of collecting business metrics.

Integration Insight dramatically simplifies the process of modeling and extracting meaningful business metrics to help you understand, monitor, and react quickly to changing demands. It is available today in the navigation menu of your Integration instance. Learn more about Integration Insight here. Try Integration Insight today!



Testing REST trigger-based Integrations in OIC Console

The Test Integration feature allows users to test an App-driven integration with a REST trigger by invoking it directly from OIC, without relying on third-party software.

How it works

Activate the integration. Click on the "Run" link. A popup is displayed as below. Click on the Test link to go to the Test Integration page. The Test Integration page has three sections: Operation, Request, and Response. The Operation and Request sections are populated with the endpoint's metadata.

Operation

The Operation section contains an Operation option (if the integration trigger is configured with multiple operations) along with the HTTP method and relative URI (for the selected operation, in the case of multiple operations). The user can choose any of the available operations.

Request

The Request section has the following fields: URI Parameters, Headers, Body, and cURL. The URI Parameters field lists the expected path (or template) and query parameters. The Headers field shows all the custom headers, including Accept and Content-Type, based on the integration configuration. The input body can be provided in the Body field, which has a placeholder describing the expected body type. The user can copy the equivalent curl command from the cURL field; the curl command is generated based on the endpoint's metadata and the input provided by the user. The user can click on the Test button to invoke the integration and check the Response section for response details. A banner message is displayed once the integration (endpoint) is invoked. Then, the user can check the response details in the Response section.

Response

The Response section shows the response body (if any) and headers in the Body and Headers fields respectively, along with the HTTP status and instance ID (if any). The user can view the activity stream for the generated instance by clicking the 'View Activity Stream' button. Find more details about using the activity stream here: https://blogs.oracle.com/integration/using-the-next-generation-activity-stream Also, the user can click on the instance ID to go to the Tracking details page.
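The generated cURL command can of course be replayed from any HTTP client. As an illustration, here is the same kind of call as a Python requests sketch; the URL pattern, resource path, payload, and credentials are placeholders – copy the exact values from the cURL field on the Test Integration page.

# Invoke an activated REST-trigger integration from outside the console.
# All values below are placeholders taken from the page's cURL field.
import requests

url = ("https://myinstance.integration.ocp.oraclecloud.com"
       "/ic/api/integration/v1/flows/rest/MY_INTEGRATION/1.0/orders")
resp = requests.post(
    url,
    json={"orderId": "1234", "status": "NEW"},
    auth=("oic.user@example.com", "password"),
    headers={"Accept": "application/json"},
)
print(resp.status_code)  # the HTTP status, as in the Response section
print(resp.text)         # the response body, if any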


Integration Monitoring and Scheduling pages - Progressive Web App UI Experience

Pre-requisites: These pages were made public on June 8th, 2020.

What's New

The new Oracle Integration (OIC) monitoring and scheduling UI is built using Oracle JavaScript Extension Toolkit (Oracle JET), taking full advantage of JavaScript, CSS3 and HTML5 design and development principles. This UI is compliant with the latest UX standards and offers a consistent user experience across all Integration pages. The following are the highlights of the new features and enhancements included in the new UI:

Monitoring UI: progressive load of data on all pages; system health information displayed on the Dashboard; search feature in the Activity Stream page; new page for design-time audit with full search capabilities; aborted instances count included in the Integrations page; summary of all instances displayed in the Integrations page; integrated Activity Stream in the Tracking page; scheduled run information now displayed in the Tracking page; inline display of error messages in the Errors page.

Scheduling UI: reorganized Future Runs and Schedule page; ability to search for older completed requests; ability to select a timezone while defining schedules.

Toolbar, table-view layout, and search and filtering consistency is maintained across the Designer and Monitoring pages. Please check the Integration pages - Progressive Web App UI Experience blog for in-depth details about these items and also about the new navigation scheme.

Dashboard

The Runtime Health card shows the total number of received and failed messages; the chart shows the success rate. The 'System Health' card shows the status of the service instance and agents. A new 'Agent Health' card has been added which shows the total number of agents and how many agents are down; the chart shows the percentage of available (up) agents. The 'Integrations' card shows the percentage of active integrations; it also shows the number of integrations triggered by schedules and app endpoints. The 'Scheduling' card shows the percentage of running schedules; it also shows the number of running as well as stopped schedules. The 'Design Time Metrics' card shows the number of connections, integrations, adapters and schedules; more information is available on the Design Time Metrics page, which can be opened with the 'More...' link. Daily and hourly history data is displayed in the bottom charts.

Activity Stream

To access the Activity Stream page from the Dashboard, click on View and select 'Activity Stream' from the options that appear in the drop-down. The new Activity Stream page displays the records in a more readable, column-separated fashion which is consistent with other pages. A number of new filters have been added to help view and search the required data:

Integration filter: displays the records for the specified integration. It is an auto-suggest filter, i.e., the filter suggests the integration name and version for a partially entered integration name.

Record count filter: displays the total number of records specified by the user. However, it does not allow the user to view more than 500 records in the UI; to view more than 500 records, the user needs to download the ics-flow.log file by clicking the Download button.

The limitation of viewing only the latest 15 records has been removed.

Design-time Audit

Similar to Activity Stream, on the Dashboard page click on View and select 'Design-time Audit' from the options that appear in the drop-down to navigate to the brand new Design-time Audit page. In the previous UI this information used to lie hidden inside the Design-time Metrics page.
Now this has been elevated to an independent page. The limitation of only 15 records has been removed. Audit logs can be filtered on the basis of a Date-Range time window filter, Integration filter, Log Count filter, Username filter and Action filter:

Date-Range time window filter: displays the audit logs having a timestamp between the specified 'From' and 'To' times.

Integration filter: displays the audit logs for the specified integration. It is an auto-suggest filter, i.e., the filter suggests the integration name and version for a partially entered integration name.

Log count filter: displays the total number of records specified by the user.

Username filter: displays the audit logs for the specified user.

Action filter: displays the audit logs with the specified action. It is an auto-suggest filter, i.e., the filter suggests the action name for a partial query.

The option to download the audit log file remains.

Integrations

The new Integrations page adds a host of features: a new "Aborted" column, which shows the number of aborted messages; a summary of the total message count displayed at the top; and new filter options to view different types of integrations, like Status and Style. For scheduled integrations, the run information is now available in the row details section.

Agents

The Agents page displays the agent group data in every row. Individual connectivity agent information is displayed inline when the row is expanded; the row details section displays details about the connectivity agent. The Status column displays information about the availability of the agent.

Tracking

The enhanced Tracking page is now consistent with other table-view pages and displays data in a tabular format. Request ID information for scheduled instances is displayed inline (this used to be part of the Runs page in the previous UI). Clicking on the Request ID link takes the user to the Schedule and Future Runs page and applies the Request ID filter appropriately. In-progress instances have the option to Abort; the button is displayed on hovering over the row. For Asserter instances, users can view the Asserter result, which gets displayed in the same drawer as the Activity Stream.

Integrated Activity Stream and Tracking

Now users can view the Activity Stream for every individual instance on the Tracking page without needing to drill down into the Tracking Details page. This makes it very easy to view the flow of messages through the integration. Users can click on the 'Eye' icon (which shows on hovering over the row) to launch the drawer that brings up the Activity Stream for that instance. A full-page view of the Activity Stream is supported. The Download option now allows users to download the JSON file representation of the Activity Stream.

Errors

The Errors page displays error messages in the row details section. Users can select individual instances, or select all instances displayed, for Resubmit or Abort operations. Depending on the error (recoverable or non-recoverable), the instances will be taken up for resubmission or aborting. A new 'Error Type' filter has been added to view only recoverable or non-recoverable errors.
The default is to display all types. The Activity Stream is integrated with the Errors page, making it easier to view the flow of messages for a selected instance.

Schedule and Future Runs

The Schedule and Future Runs page has been reorganized to give greater focus to the Future Runs table. New filters have been added to filter data in the table:

Time Window filter: options are the same as in the previous UI, with Next 24 Hours being the default.

Type filter: users can select whether they want to view only Manually Submitted Runs, only Scheduled Runs, or All.

Request ID filter: users can search for a specific Request ID using this filter.

Users can click on the schedule name bar to expand the details of the schedule definition.

Schedule Definition

Clicking on the 'pencil' button on the Schedule and Future Runs page navigates the user to the schedule definition page. As in the previous UI, users can use this page to define a new schedule or edit an existing one. A new time-zone drop-down has been added that allows users to select a custom time-zone for the schedule. "Type" is renamed "Define recurrence"; the "Basic" type is renamed "Simple" and "Advanced" is now called "iCal".

How to track scheduled instances?

In the new Monitoring UI, the Runs page has been removed. The Runs page essentially showed Tracking instances and resulted in duplicate data; since this data is already available on the Tracking page, there was no reason to keep the Runs page. When users start a schedule or click Submit Now, the message banner displays a link to the Tracking instances page (this link previously navigated to the Runs page), and clicking on it enables users to see the instance generated by the scheduler. Users can also track their existing scheduler requests and future runs on the Schedule and Future Runs page as before. As explained earlier in this blog, the Tracking page has been enhanced to display the Request ID for scheduled instances. Clicking on it leads to the Schedule and Future Runs page, filtered by that Request ID. This allows for a simplified tracking experience and also enables users to view the details of the schedule request and the subsequent tracking instance easily.

Asserter Recordings

Users can enable Asserter recording for app-driven orchestrations by selecting the Enable Asserter Recording menu item on the Designer -> Integrations page. On the same menu, click on the Asserter Recordings menu item to navigate to the Asserter Recordings page. Click on the Play button (shown on hovering over the row) for the instance which has to be replayed. Navigate to the Monitoring -> Tracking page to view Asserter instances; they will be highlighted with an orange-colored icon indicating that these are Asserter instances. Users can choose to view the result of an Asserter instance by clicking on the 'Eye' icon, which brings up a drawer that shows the Asserter result. Clicking and expanding the row details also shows the Asserter status and the Recording ID. On the Tracking page users can also click on the link icon for instances of Asserter-enabled integrations to navigate to the Asserter Recordings page.



Slack Adapter for OIC

The Slack adapter for Oracle Integration Cloud was released recently and delivers an easy way of integrating with Slack. Slack and other platforms with similar capabilities have changed the way we work and the way we interact with our colleagues. The boost in productivity and collaboration with these types of platforms is incredible. Slack is also a verb nowadays ("let me slack you"); that alone is enough to show its impact!

Slack Adapter Capabilities

The Slack Adapter offers outbound integration with Slack on the Oracle Integration platform. You can create outbound integrations that invoke the Slack application so you can manage channels, invite users, get profile information, manage chat and groups, upload files, and perform search operations. More details are on the documentation page.

Use Cases

The first use case that comes to mind is notifications. Traditionally a notification is an email, but instead of relying on email you can publish those notifications to a dedicated channel, or tag the proper team/individual. This allows transparent handling of, and collaboration on, all notifications! Let's now think about sales orders: when a new order/opportunity is created in the CRM system, you can create a new Slack channel with all team members, or tag someone that has a particular task waiting to be fulfilled, for example. These are just two obvious use cases, but in reality there are many more possibilities: you can pick up any real-time event, filter it and tag someone, create a channel, send an attachment to a channel, send escalations, reminders, etc.

Prerequisites for Creating a Connection

If you are an Integration developer, you will probably ask the Slack administrator for this information – but in case you have both roles, here is a quick walk-through. Before you can create a connection with the Slack Adapter, you must satisfy the following prerequisites: have a published OAuth application on the Slack platform; have the client ID and client secret; have the User Token Scopes defined; and fill in the Redirect URL. When you create a Slack app, the Client ID and Secret are automatically created. Under the OAuth & Permissions tab you can add the desired scopes. Fill the Redirect URL with: https://<instance name>.integration.ocp.oraclecloud.com:443/icsapis/agent/oauth/callback The last step is to install the app to the Slack workspace!

How to create a connection?

Choose Slack as the desired adapter. Name your connection and provide an optional description. Click Configure Security. Client ID and Secret: use the client ID and secret from the Slack app. Scope: add the desired scopes – here I just want to enable read/write capabilities. Scopes are separated by a space. Here is a list of the supported scopes and their respective operations: https://docs.oracle.com/en/cloud/paas/integration-cloud/slack-adapter/invoke-operations-page.html Finally, press the "Provide Consent" button – this opens a popup/tab where you need to provide your Slack credentials, after which you will be asked to allow access. All going well, you should see the screen below.

How to create an Integration?

I will showcase how to use the Slack adapter for notifications, with a simple scheduled integration that is supposed to read files from an FTP server – built to fail! When the failure occurs, the global fault handler comes into play, and there we use the Slack connection to write the error notification to a channel. Drag the Slack adapter onto the Global Fault canvas.
Next you can see all the available operations. For this case we want to send a message (the full list of operations is here). And that's about it! Finally, we need to define the mapping activity. There are two fields that we should map:

Channel – here I hardcode the name of the channel where we want to write the message: "ops_integration".

Text – this is the message we want to write. Since this is an error notification, I concatenate some relevant fields, like the integration name, environment, and error reason. You can add attachments or enrich this message as much as you want!

concat ("There is an error in Integration : ", $self/nsmpr2:metadata/nsmpr2:integration/nsmpr2:name, ", on environment: ", $self/nsmpr2:metadata/nsmpr2:environment/nsmpr2:serviceInstanceName, " with error details: ", $GlobalFaultObject/nsmpr1:fault/nsmpr1:reason )

Now we can run the integration, and it is built to fail on the FTP invocation! Once the error occurs, the Global Fault handles it! And this is the message published to Slack!

Easy, yet powerful!!
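For context on what the adapter is doing on our behalf: sending a message corresponds, roughly, to Slack's chat.postMessage Web API method. The sketch below shows the equivalent direct call; the token is a placeholder, and the message text mirrors the mapping above with example values filled in.

# Roughly what the adapter's send-message operation boils down to:
# a chat.postMessage call to the Slack Web API. Token, integration name,
# and error details below are placeholder example values.
import requests

resp = requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": "Bearer xoxp-your-token-here"},
    json={
        "channel": "ops_integration",
        "text": ("There is an error in Integration : FTP_READ_FILES, "
                 "on environment: MY_OIC_INSTANCE "
                 "with error details: Unable to connect to FTP server"),
    },
)
print(resp.json().get("ok"), resp.json().get("error"))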



Conditional Mappings in Oracle Integration

Sometimes, when modeling integrations, we need to map data dynamically depending on other data. In this blog, we will look at creating conditional mappings using Oracle Integration.

Use case: Consider this pseudo code sample of the mapping logic. How can we achieve this in the mapper UI?

if PER03 == 'TE' {
    Contact.Phone = PER04
}
if PER05 == 'TE' {
    Contact.Phone = PER06
}
if PER07 == 'TE' {
    Contact.Phone = PER08
}

Solution: First, enable "Advanced" mode and open the Components palette. This exposes the XSLT statements which we need to create conditional mappings.

Locate the phone element in the target tree. This is the element where we want conditional mappings. If phone is a lighter color and italicized, that means the element does not yet exist in the mapper's output. Right click and select Create Target Node. We will not be able to insert conditions around phone without this step.

Drag and drop the choose element as a child of phone. The cursor position surrounding phone indicates whether choose will be inserted as a child (bottom left) or as a parent (upper right). In this case, we will insert it as a child.

Now that we have choose in the tree, drag and drop when as a child of choose three times to create placeholders for our three conditions. Note, you could also drop a when statement as a sibling before or after another when. Each condition also needs a corresponding mapping value. Drag and drop value-of as a child of each when. Now we have the tree structure needed to create our conditional expressions and mapping expressions.

Let's create the expressions for the first condition and mapping:

if PER03 == 'TE' {
    Contact.Phone = PER04
}

To create the condition, select the first when in the target tree. Drag and drop PER03 from the source tree into the expression builder. Complete the expression by typing = "TE" and click the checkmark to save the expression. To create the mapping, select the value-of under the first when. Drag and drop PER04 into the target value-of.

The first conditional mapping is complete. Repeat these steps for the second and third conditional mappings to achieve the desired logic. Save the map and integration.

if PER05 == 'TE' {
    Contact.Phone = PER06
}
if PER07 == 'TE' {
    Contact.Phone = PER08
}

The finished product will look like this. Congratulations! We have just used Oracle Integration to create conditional mappings.


Kafka Adapter for OIC

The Kafka adapter for Oracle Integration Cloud came out earlier this month, and it was one of the most anticipated releases.

So what is Kafka? You can find all about it on https://kafka.apache.org/, but in a nutshell: Apache Kafka is a distributed streaming platform with three main capabilities:
Publish and subscribe to streams of records.
Store streams of records in a fault-tolerant, durable way.
Process streams of records as they occur.
Kafka runs as a cluster on one or more servers that can span multiple data centres. The Kafka cluster stores streams of records in categories called topics, and each record consists of a key, a value, and a timestamp.

Kafka Adapter Capabilities
The Apache Kafka Adapter enables you to create an integration in Oracle Integration that connects to an Apache Kafka messaging system for publishing and consuming messages from a Kafka topic. These are some of the Apache Kafka Adapter benefits:
Consumes messages from a Kafka topic and produces messages to a Kafka topic.
Enables you to browse the available metadata using the Adapter Endpoint Configuration Wizard (that is, the topics and partitions to which messages are published and consumed).
Supports consumer groups.
Supports headers.
Supports the following message structures: XML schema (XSD) and schema archive upload, sample XML, and sample JSON.
Supports the following security policies: Simple Authentication and Security Layer Plain (SASL/PLAIN), and SASL Plain over SSL, TLS, or Mutual TLS.
More details are on the documentation page: https://docs.oracle.com/en/cloud/paas/integration-cloud/apache-kafka-adapter/kafka-adapter-capabilities.html

How to set up everything?
I installed Kafka on an Oracle Cloud VM running Oracle Linux. This was quite straightforward; if you are new to Kafka, there are plenty of online resources for a step-by-step installation. You need ZooKeeper and Apache Kafka (Java is a prerequisite in the OS). I have a very simple configuration with one broker/node only, running on localhost. From an OIC standpoint, you must satisfy the following prerequisites to create a connection with the Apache Kafka Adapter:
Know the host and port of the bootstrap server to use to connect to a list of Kafka brokers.
For security, have a username and password (unless you choose no security policy).
For SASL over SSL, TLS, or Mutual TLS, have the required certificates.
The OIC connectivity agent needs to be up and running. I installed the connectivity agent on the same machine as Kafka, but it can be installed anywhere on the same network.

How to create a connection?
Choose Apache Kafka as the desired adapter, then name your connection and provide an optional description.
Bootstrap Server: I used localhost:9092* because the actual connectivity is handled by the agent, so in reality we are connecting to the Kafka server as if we were inside the machine where it runs. You can also use the private IP of the machine instead of localhost.
*9092 is the default Kafka port, but you can verify the one you are using in <Kafka_Home>/config/server.properties
Security: I chose no security policy, but in a real-life scenario this needs to be considered. More on this can be found in the official documentation.
Agent Group: Select the group to which your agent belongs.
Finally, test and verify that the connection is successful.

Create an Integration (Consume Messages)
Now we can create a Scheduled Integration and drag the Kafka Adapter from the palette onto the canvas. We can produce or consume messages; let's look at Consume Messages first.

We have two options for consuming messages: with or without an offset. Part of what makes Kafka unique (as compared with JMS) is the client's ability to select from where to read the messages – offset reading. If we choose offset reading, we need to specify the offset, and message consumption will start from there.
Select a Topic: My Kafka server has only one topic available – DTopic.
Specify the Partition: Kafka topics are divided into several partitions. Each one can be placed on a separate machine so that multiple consumers can read from a topic at the same time. In our case there is only one partition. We can choose the one to read from, or give Kafka control: if we do not select a specific partition and use the Default selection, Kafka considers all available partitions and decides which one to use.
Consumer Group: Kafka consumers are part of a consumer group; all consumers in the group read from the same topic, and each consumer receives messages from different partitions of that topic. The main way to scale data consumption from a Kafka topic is by adding more consumers to a consumer group. I added this integration to a consumer group called "test-consumer-group", which has only one consumer.
Specify Option for consuming messages: Read latest reads the latest messages, starting at the time the integration was activated; Read from beginning reads messages from the beginning.
Message structure and headers: I chose not to define the message structure, and the same for headers.

This is what the integration looks like. It does not implement any specific use case; it is a pure showcase of the Kafka adapter capabilities. Note that we are not mapping any data in the mapping activity.

Now, going back to the Kafka server, we can produce some messages using the ./kafka-console-producer.sh script, as sketched below. When you run the integration, that message is read by OIC, as shown in the Payload Activity Stream. The option to consume messages was Read latest; otherwise we would get more in the output. Easy and straightforward, which is the main benefit of adapters: remove all client complexity and focus on the use-case implementation.

Create an Integration (Produce Messages)
Lastly, how can we produce messages? I created a new topic – DTopic2 – to receive the messages (yes, I know, not very imaginative naming!). I select the desired topic, leave the partition as default, and do not specify a message structure or headers. We need to map data, which translates to: what is the data we want to produce in the topic? To keep it simple, we hard-code the attribute Content with a fixed message.

Now we start the console consumer to track the new messages being sent to the topic. I run the integration, and we can see the message being consumed in the Kafka console – and OIC monitoring shows exactly the same. This shows how easy it is to create and manage Kafka clients, both producer and consumer, from within OIC.

For more information please check: https://docs.oracle.com/en/cloud/paas/integration-cloud/apache-kafka-adapter
Sources: kafka.apache.org / O'Reilly Kafka: The Definitive Guide / Oracle Documentation
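A minimal console session matching this walkthrough might look like the following. The topic names and the localhost:9092 bootstrap address are the ones used above; the scripts live in <Kafka_Home>/bin, and on newer Kafka releases the producer also accepts --bootstrap-server in place of --broker-list.

# produce a test message for the consume-messages integration to pick up
./kafka-console-producer.sh --broker-list localhost:9092 --topic DTopic
>message for oic

# in a second terminal, watch the topic that the produce-messages integration writes to
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic DTopic2 --from-beginning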


Boost Your WebForm Productivity with our New Expression Builder Features

We're introducing several new Oracle Integration improvements we hope will markedly boost your web form expression productivity. These enhancements are an example of our ongoing efforts to address your feedback!

Expression Editor Redesign
We've redesigned the form expression editor to make it easier to build and keep track of event logic. The expression editor content is now cleaner, more compact, and easier to understand. Many of the changes were made in response to feedback from customers and the User Assistance team. For example, function variables are now aligned, and expression summaries are now clearly differentiated from input fields.

To see the new expression editor in action, simply follow these steps:
1. Open a form.
2. Add an Input Text control to the form canvas.
3. Click on the Input Text.
4. Add any event from the General Properties panel.
5. Click the Edit icon next to the event to open the expression editor.
6. Start exploring! Try adding different types of blocks such as Actions, Ifs, Loops, Connectors, Filters, and Reusable Snippets.

Reusable Snippets
Reusable Snippets allow users to extract and name a group of blocks (Actions, Ifs, Loops, Connectors, and Filters) and use that group of blocks in other events in their form presentation. This feature saves users from having to recreate the same event logic over and over again. Instead, users can create a reusable snippet and use it wherever they want to implement the same logic. Moreover, users can manage their reusable snippets in a central location: if they want to make changes to a reusable snippet, they can modify the master copy, and their changes will be reflected wherever the reusable snippet is used.

Extracting Reusable Snippets
Let's pick up right after step 6 above. If you haven't followed those steps already, go ahead and follow them now.
7. Make sure you have added at least one block to your expression editor, then click the Extract Snippet button in the top right-hand corner.
8. Give your Reusable Snippet a name if you want; a default name is already provided.
9. Select the blocks you want to extract using the toggles on the right-hand side of the blocks.
10. Click the OK button in the top right corner to finish extracting your Reusable Snippet.
11. Click the OK button in the bottom right corner to save and close the dialog.

Using Reusable Snippets
12. Add a new event. You can add it to the same control if you wish.
13. Open the expression editor for the new event.
14. Click the + Reusable Snippet button in the bottom toolbar of the expression editor dialog.
15. In the Reusable Snippet dropdown, you will see all the Reusable Snippets you have created. Select one to use in your event, and you will see its event logic in read-only mode.
16. Click OK and close the dialog.
Note: You can also detach a Reusable Snippet if you no longer want changes to the master copy to be reflected in your event. To detach, click the Detach icon (the leftmost of the four icons in the top right of the Reusable Snippet block). Once detached, the blocks inside behave like any other blocks in your event, and you can edit them.

Managing Reusable Snippets
17. Go to the Presentation properties panel. You will see a list of all the Reusable Snippets in your presentation. From here you can create new Reusable Snippets, edit existing ones, and delete them. Deleting will detach all instances of a Reusable Snippet in your events.
18. Click the Edit icon next to a Reusable Snippet's name to open the Configure Reusable Snippet dialog. In this dialog, you can rename your Reusable Snippet and edit its event logic. Remember that updating a Reusable Snippet here updates it wherever it's used in your events.

Congratulations! You now know how to work with reusable snippets in your process application forms! See https://docs.oracle.com/en/cloud/paas/integration-cloud/user-processes/create-web-forms.html for more info.

Credits: Kalyn Chang, Nicolas Laplume, and Carolina Arce Terceros


NetSuite Custom Field Discovery

Prerequisite
Before using an existing NetSuite connection, a metadata refresh needs to be done on it. Make sure the last refresh status for the connection is Complete.

This feature exposes custom fields for standard objects as named fields in the mapper, and during NetSuite endpoint creation for advanced search and saved search operations. It applies to all basic operations (except delete) and all search operations of NetSuite, for both sync and async processing modes.

For basic CRUD operations, each custom field is exposed in the mapper as a named field. The custom field name is derived from the name given to the custom field in NetSuite. This makes it easier to map without needing to know the internalId and scriptId of a particular custom field for a standard object. For example, here is the mapping done for a NetSuite update operation: the image below shows a request mapping from REST (trigger) to a NetSuite Update operation on the Customer standard object.

You can see that two custom fields have been mapped for the update operation: ICSEmailId and AdvertisingPreferences.
ICSEmailId is a simple-type custom field; no further work is required on the part of the integration developer. Just use it like any other simple-type field.
AdvertisingPreferences is a complex-type custom field. It correlates to a multiselect custom field in NetSuite. For complex-type custom fields, listItemId correlates to the internalId of the list item. For the invoke request to the NetSuite update operation to succeed, the integration developer needs to ensure the listItemId value is mapped. To map more than one list item, just repeat the ListItem element and do the required mapping.

To get the internalId of a list item of a complex-type custom field in the NetSuite UI, go to Customization -> Lists, Records, & Fields -> Lists -> and open the particular custom field in question. The image below provides an example of response-side mapping for the Customer standard object for a NetSuite get operation.

For search operations, the approach to be taken by the integration developer for each operation type is as follows.

1) Basic Search. On the request side, the mapper surfaces all discoverable custom fields under the customFieldList element of the standard object. The customer does not need to provide the internalId or scriptId for the custom field; however, it is required to provide the custom-field-specific search values as well as operator values. For select and multiselect custom fields, the searchValue element under the named custom field needs to be used. If more than one list item needs to be specified, the searchValue element needs to be cloned. The internalId for the list item can be obtained from the NetSuite UI as discussed above. The response side for basic search is the same as described above for the get operation: the response payload from NetSuite will contain named custom fields. (The original post includes diagrams showing the request mapping for a basic search on a custom field, and a custom field search for AdvertisingPreferences, a multiselect custom field, specifying two list members by cloning the searchValue element.)

2) Joined Search. On the request side, the mapping remains more or less the same as for basic search; the only difference is that we can also make use of discovered custom fields in the mapper for the joined business object. On the response side, the response is the same as for basic search/get. One can make use of a for-each operation (see the sketch at the end of this post) to extract the discovered custom field.

3) Advanced Search and Saved Search. The request-side mapping remains the same as for basic and joined search. In addition, we can also select discovered custom fields for the response sub-object during endpoint creation, as the diagram in the original post shows.

Please note that customers can still map custom fields for all operations the old way, using internalId and scriptId as before.
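As a rough illustration of that for-each extraction, a sketch is below. Every element name and namespace prefix here is a placeholder (the real paths come from the response tree the mapper shows for your object), so treat this as the shape of the mapping rather than exact syntax:

<!-- iterate over the list items of a multiselect custom field in the get/search response -->
<xsl:for-each select="$GetCustomerResponse/ns0:record/ns0:customFieldList/ns0:AdvertisingPreferences/ns0:listItem">
  <xsl:value-of select="ns0:internalId"/>
</xsl:for-each>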


See How Easily You Can Access Integration's metadata

Many times we may want to use the name of the integration or its version inside an OIC integration flow without hardcoding the values. We may also want to access dynamic values such as the runtime instance ID, invoked-by, etc., inside the flow. All of this is now possible with a new feature called 'Integration Metadata Access', which allows access to the most commonly useful metadata. In this blog, we will see which metadata we can access and how to use it in an integration flow. The minimum Oracle Integration version required for the feature is 20.34310.

List of exposed metadata
Integration: Name, Identifier, Version
Runtime data: Instance ID, Invoked by name
Environment data: Service instance name, Base URL
All of these metadata are read-only fields and can be used in any orchestration action, such as Assign, Log, or Notification.

Step by Step Guide:
Create a new integration or edit an existing integration flow.
Add a new Log action.
Edit the log message; in the source tree you can see the list of metadata.
Drag and drop the required metadata into the expression builder.
Save and activate the integration.
Trigger the integration flow using the endpoint.
Go to Monitoring > Tracking.
Open the particular run and click 'View Activity Stream'; you should see the log message that logs the integration name.
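For example, a log expression assembled from these fields might look like the sketch below. The $self metadata paths follow the pattern used in the Slack error-notification example earlier in this document; the nsmpr2 prefixes and the exact instanceId path are assumptions, since the mapper generates prefixes per flow:

concat("Integration ",
       $self/nsmpr2:metadata/nsmpr2:integration/nsmpr2:name,
       " running on ",
       $self/nsmpr2:metadata/nsmpr2:environment/nsmpr2:serviceInstanceName,
       ", instance: ",
       $self/nsmpr2:metadata/nsmpr2:runtime/nsmpr2:instanceId)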


Invoke a Co-located Integration from a Parent Integration

The capability to 'Invoke an Integration from another Integration' is now GA – in other words, the ability to easily implement modular design is now GA. This topic has already been covered some time ago here, but now that the feature is GA and available to every OIC user, it's worth a refresh!

What did it really change? Before this feature, we could achieve the same result, but that would require exposing the desired integration with a REST trigger and creating a Connection to enable calls to that integration. Now we can simply call the integration and avoid the need to handle the Connection and the endpoint changes across environments. There is no need to configure the Connection in the integration, where we would need to define request/response payloads, headers, parameters, and many other settings available in the REST connector. It is much more practical! This is how it looks: we simply drag the Integration icon from the Actions palette onto the canvas.

Why is this important? Having the ability to call other integrations allows us to divide our work into smaller, manageable blocks, which gives flexibility and offers decoupling. Reusability is one of the most fundamental concepts in development. Let's see what Wikipedia has to say about modular design: "Modular design, or modularity in design, is an approach that subdivides a system into smaller parts called modules which can be independently created, modified, replaced or exchanged between different systems." A decoupled system allows changes to be made to any one system without having an effect on any other system. Sounds about right to me!

Example Use Case
Recently I came across a use case that would benefit tremendously from this approach. The customer I talked with had a consolidated CRM (Oracle Engagement Cloud) that was synchronised with several legacy and on-premises systems across several locations. Let's look at the synchronisation of accounts. The activity CreateAccount can be complex, far beyond the actual API request CreateAccount. One may need to verify whether the account already exists, and then decide to update the existing one or create a new one. From a data-modelling perspective, the entity Account can have many relationships – contacts, addresses, child accounts, etc. If we factor in all of them, this can lead to a more complex set of steps to create/update an account. It's not important to dwell on all the details, just to understand that the activity CreateAccount has the potential to be complex. Because the customer's on-premises CRMs had different interfaces, we had to create two different integration workflows within OIC. Best practice says that we should reuse the CreateAccount activity.

How to implement this? For this use case, we implement an App Driven Orchestration integration called CreateAccount, with a REST trigger (so that it can be called from another integration). As you can see below, in this integration, called CreateAccount_Demo, we start by verifying whether the account already exists in Engagement Cloud and then decide whether to create a new one or update the existing account. Now we can create App Driven Orchestration (or Scheduled) integrations that connect to the on-premises CRMs: one uses the SAP adapter and the other the technology SOAP adapter, and both synchronise the accounts back to Oracle Engagement Cloud. The diagram explains the result, where CreateAccount is used by both integrations.

The ability to invoke an integration directly from within an integration is made possible with the action "Call Integration". Drag that icon onto the canvas and you get a configuration wizard: choose an appropriate name and provide a description. All your active integrations of type Trigger are displayed; the CreateAccount_Demo integration only implements a POST operation. In the end you get something similar to the integrations below, where the CreateAccount_Demo integration is reused by both integrations with just a simple drag and drop.

This is just one example of how this improved functionality can add value to your OIC implementation. The documentation gives more information on what can be done. If you are interested in the new features already released, please check the What's New page!


Announcing Early Access of SOA Suite for Kubernetes

The SOA Suite team is excited to announce the early access availability of Oracle SOA Suite on containers and Kubernetes. This program will lead to certification of SOA Suite deployment using containers on Kubernetes in production environments.

Scope
The scope of the eventual deliverable is as follows:
Provide container images for Oracle SOA Suite, including Oracle Service Bus.
Certify these container images for deployment on Kubernetes for production workloads.
In later phases, we will expand certification to additional components based on feedback received from the Early Access program.

Objective
With the growing adoption of containers and Kubernetes in data centers, this effort targets:
Supporting Oracle SOA Suite and Oracle Service Bus containers in production environments.
Enabling data center consolidation/modernization efforts.
Enabling SOA Suite's co-existence with cloud-native applications.

Features
The salient features of this release are:
Container images created using the Oracle SOA Suite 12.2.1.3 release.
Certified deployment using WebLogic Operator (2.4) to deploy and manage Oracle SOA Suite and Oracle Service Bus with ease.
Support for searching and analyzing logs with ease using the ELK Stack.
Integration with a powerful metrics collection and alerting system based on Prometheus and Grafana.
Support for multiple load balancers, such as Traefik, Voyager, and NGINX.
As part of the release, we are making supporting files, deployment scripts, and samples available on GitHub. You can access them here:
WebLogic Kubernetes Operator documentation - https://oracle.github.io/weblogic-kubernetes-operator/
Oracle SOA Suite and Oracle Service Bus Quick Start Guide - https://github.com/oracle/weblogic-kubernetes-operator/tree/master/kubernetes/samples/scripts/create-soa-domain

Requirements
The requirements for the Operator are as follows (see the version checks collected below this list):
Oracle Linux 7 (UL5+) or Red Hat Enterprise Linux 7 (UL4+)
Kubernetes 1.13.5+, 1.14.3+, or 1.15.2+
Flannel networking v0.11.0-amd64
Docker 18.9.1
Helm 2.14.3+
Oracle FMW Infrastructure Container Image 12.2.1.3 (download from OCR: fmw-infrastructure:12.2.1.3-200109)
Oracle SOA Suite 12.2.1.3.0 + latest CPU (30638100: 12.2.1.3 Bundle Patch 191208 (12.2.1.3.191208.1658.0113))
Oracle Database 12c or above

Limitations
Please note that you may encounter the limitations listed at: https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-fmw-domains/soa-suite/#limitations

Feedback
We are actively soliciting your feedback. You can provide it using:
Slack: #soa-k8s on oracle-weblogic.slack.com
GitHub repo: https://github.com/oracle/weblogic-kubernetes-operator/tree/master/kubernetes/samples/scripts/create-soa-domain
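The requirements list above comes with quick verification commands; collected in one place, straight from the original parentheticals:

# check the Kubernetes version (1.13.5+, 1.14.3+, or 1.15.2+)
kubectl version
# check the Flannel networking image (v0.11.0-amd64)
docker images | grep flannel
# check the Docker engine version (18.9.1)
docker version
# check the Helm client version (2.14.3+)
helm version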


Use Global Variables and Data Stitch to log request payloads

In this blog, we will look at two new integration features: Global Variables and Data Stitch. Data Stitch allows us to make assignments to complex-type variables. We will show how these features can be leveraged to log invoke request payloads in case of a fault.

Prerequisite
Enable the following feature flags:
oic.ics.console.integration.stitch-action
oic.ics.console.integration.complex-variables
To enable feature flags, refer to the blog Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 200113.1400.33493.

Use case: When an invoke fails, we want to log the request payload. Currently, request payloads are visible after the invoke, but not inside the fault handlers.

Solution: We will create a Global Variable based on the request payload. Global Variables are visible anywhere in the integration, including fault handlers.
Click the (X) icon on the right-hand toolbar to access Global Variables.
Give the global variable a name. For its type, select Object, indicating a complex type.
Selecting Object opens the schema tree. There, select an appropriate element; our global variable will inherit this element's type.

Next, we will use Data Stitch to assign a value to this variable.
Add a Data Stitch action prior to the invoke to copy the request payload to the Global Variable.
Name the Data Stitch action and click Configure.
In the Variable field (To), type "payload" to use the quick search to find and select the previously created Global Variable. Or, click Browse All Data to open the schema tree and, under Sources, drag and drop the Global Variable root element into the Variable field.
With the Variable field populated, the Operation field defaults to Assign, and the Value field appears. Accept Assign as the operation. In the Value field (From), use the schema tree to select the desired request payload.
Optionally, click the tool icon on the right to toggle the Variable and Value fields into developer mode and inspect the full XPath expressions.
We have completed this Stitch action. Click "X" to close the editor and apply the changes.

Next, we will create the Logger action to log the Global Variable in the fault handler.
Go to the appropriate fault handler for the invoke and add a Logger action.
In the Logger editor, the Global Variable is shown as an available source. Drag and drop it into the expression, and make sure to wrap it in an appropriate function such as getContentAsString so the payload prints as text (see the sketch below).
Save and activate the integration.
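A minimal logger expression under these assumptions might be the one-liner below; "payload" is the Global Variable name from this walkthrough, and the oraext prefix for getContentAsString is an assumption (use whatever prefix your expression builder inserts for the function):

concat("Request payload at time of fault: ", oraext:getContentAsString($payload))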
Test it against an invoke failure scenario, and you will see the request payload in the ics-flow.log: Example: [2020-03-09T21:05:15.982-07:00] [integration_server1] [NOTIFICATION] [] [oracle.ics.trace.soa.bpel] [tid: 154] [userId: icsadmin] [ecid: 8b44b372-eefe-4f03-ae9f-ecea6d4d020d-000ee5c4,0] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [FlowId: 0000N32bZJS9d_HpIsT4if1ULFFC0002bU]  [ICS User Logging]: [Code:CREA_ORDE_BY_REGI_STIT_LOG_PAYL][Version:01.00.0000][Instance:21][Operation:Logger][ActionID:lg0][ActionName:logPayload]: <Account xmlns="http://xmlns.stark.com"><ns23:AccountDetails xmlns:ns23="http://xmlns.stark.com"><ns23:ExternalSystemId>a</ns23:ExternalSystemId><ns23:SystemRowId>a</ns23:SystemRowId><ns23:ExternalSourceSystem>a</ns23:ExternalSourceSystem><ns23:AccountStatus>a</ns23:AccountStatus><ns23:AccountStatusChangeReason>a</ns23:AccountStatusChangeReason><ns23:AccountTypeCode>a</ns23:AccountTypeCode><ns23:AccountNumber>a</ns23:AccountNumber><ns23:ContactId>a</ns23:ContactId><ns23:CurrencyCode>a</ns23:CurrencyCode><ns23:PrimaryOrganization>APAC</ns23:PrimaryOrganization></ns23:AccountDetails></Account> Congratulations, we have just used Global Variable and Data Stitch to log payloads in case of fault!


Use Data Stitch to simplify integrations

In this blog, we will look at a new integration feature, Data Stitch, and show how it can simplify integrations and help reduce maintenance costs. Data Stitch allows us to make assignments to complex-type variables.

Prerequisite
Enable the following feature flags:
oic.ics.console.integration.stitch-action
oic.ics.console.integration.complex-variables
To enable feature flags, refer to the blog Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 200113.1400.33493.

Use case: We have deployed three instances of an application service in three regions (APAC, EMEA, AMER). Three separate REST connections handle the three separate endpoint URIs and credentials, and our integration has three invokes to the CreateOrder REST API using those connections. The payloads to these invokes are identical; the only difference is the connection used. Maps can be heavy: they may handle cardinality differences between source and target, or involve loops, functions, etc. If we need to build, test, and maintain three maps, it can be costly and error prone. Ideally, we only want to do this mapping once, to improve developer productivity.

Solution: We can re-use the first map, to the APAC CreateOrder request, and then leverage Data Stitch to copy this request to the EMEA and AMER CreateOrder requests.
Move the map to APAC CreateOrder outside of the APAC region condition.
Delete the maps to EMEA CreateOrder and AMER CreateOrder.
Add a Data Stitch action before EMEA CreateOrder to assign the already-mapped APAC CreateOrder request payload to the EMEA CreateOrder request payload.
Give the Data Stitch action a name and click Configure.
In the Variable field (To), click Browse All Data to open the schema tree. Under Sources, drag and drop the EMEA_CreateOrder_Request/execute/Account variable into the Variable field.
With the Variable field populated, the Operation field defaults to Assign, and the Value field appears. Accept Assign as the operation. In the Value field (From), use the schema tree to select APAC_CreateOrder_Request/execute/Account.

Optionally, imagine the EMEA request payload is not precisely the same as the APAC request: suppose it is identical except that one field must be overridden for EMEA. To achieve this, I can add a second Stitch statement to override the specific field inside the Account object; Stitch statements are executed sequentially at runtime.
Click the "+" icon to add another Stitch statement.
In the Variable field (To), select EMEA_CreateOrder_Request/execute/Account/AccountDetails/ExternalSystemId. Since AccountDetails is an array, we may need to qualify which element of the array is to be used: click the tool icon on the right to toggle the expression box to developer mode and qualify the expression with AccountDetails[1] to pick up the first record in the array.
Accept Assign as the operation. In the Value field (From), you could use the schema tree to select the desired value, or just type a string, such as "ABC123".
We have completed this Stitch action. Click "X" to close the editor and apply the changes.

Repeat the steps to create another Stitch action assigning the APAC CreateOrder variable to the AMER CreateOrder variable in the AMER branch. When complete, the integration will look like this. Now our integration only requires one map, to the first invoke. We used Data Stitch to copy the payload to the subsequent invokes, and optionally to override specific fields for some of them; the two EMEA Stitch statements are summarized below. Our maintenance costs for this integration are reduced because in the future we only need to maintain one map instead of three. Data Stitch is a valuable tool that can simplify our integrations!
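Written out in the To/From terms of the Stitch editor, the two EMEA statements from this walkthrough amount to the following (the paths and the sample value "ABC123" are the ones used above; the bracketed [1] index is what developer mode shows after qualifying the array):

Statement 1 (Assign)
  Variable (To): $EMEA_CreateOrder_Request/execute/Account
  Value (From):  $APAC_CreateOrder_Request/execute/Account
Statement 2 (Assign)
  Variable (To): $EMEA_CreateOrder_Request/execute/Account/AccountDetails[1]/ExternalSystemId
  Value (From):  "ABC123"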


Persisting SOA Adapters Customizations

with inputs from Vivek Raj

The lifetime of any customization done in a file on a server pod is tied to the lifetime of that pod; the changes are not persisted once the pod goes down or is restarted. For example, the configuration below updates DbAdapter.rar to create a new connection instance, paired with a DataSource named CoffeeShop (JNDI name jdbc/CoffeeShopDS) created in the Administration Console.

File location: /u01/oracle/soa/soa/connectors/DbAdapter.rar

<connection-instance>
  <jndi-name>eis/DB/CoffeeShop</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>XADataSourceName</name>
        <value>jdbc/CoffeeShopDS</value>
      </property>
      <property>
        <name>DataSourceName</name>
        <value></value>
      </property>
      <property>
        <name>PlatformClassName</name>
        <value>org.eclipse.persistence.platform.database.Oracle10Platform</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>

If you need to persist the customizations for any of the adapter files under the SOA Oracle home in the server pod, follow one of the methods below.

Method 1: Customize the adapter file using the Administration Console
Log in to the WebLogic Administration Console: Deployments -> ABC.rar -> Configuration -> Outbound Connection Pools.
Create the new connection that is required: New -> provide the connection name -> Finish.
Go back to this new connection, update the required properties under it, and save.
Go back to Deployments, select ABC.rar, and click Update. This step asks for the Plan.xml location. By default this location is under ${MW_HOME}/soa/soa, which is not on the persistent volume; instead, provide a location on the domain's PV, such as ${DOMAIN_HOME}/soainfra/servers. The Plan.xml will then be persisted under this location for each Managed Server.

Method 2: Customize the adapter file on the worker node
Copy the ABC.rar from the server pod to a PV path:
kubectl cp <namespace>/<SOA Managed Server pod name>:<full path of .rar file> <destination path inside PV>
For example:
kubectl cp soans/soainfra-soa-server1:/u01/oracle/soa/soa/connectors/ABC.rar ${DockerVolume}/domains/soainfra/servers/ABC.rar
Unrar the ABC.rar and update the new connection details in the weblogic-ra.xml file under META-INF.
In the WebLogic Administration Console, under Deployments, select ABC.rar and click Update. Here, select the new ABC.rar location, ${DOMAIN_HOME}/user_projects/domains/soainfra/servers/ABC.rar, and update.
Verify that the plan.xml or updated .rar is persisted on the PV.
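As a quick check of that last step, under the example paths above (the PV mount path is the one used in the kubectl cp example; adjust to your own volume):

# list the PV location used in Method 2 to confirm the updated .rar (and any Plan.xml) persisted
ls -l ${DockerVolume}/domains/soainfra/servers/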


Expose T3 protocol for managed servers in SOA Domain on Kubernetes

T3 ports for Managed Servers in Oracle SOA deployed in a WebLogic Kubernetes Operator environment are not available by default. This document provides steps to create a T3 channel and the corresponding Kubernetes Service to expose the T3 protocol for Managed Servers in a SOA domain.

Exposing SOA Managed Server T3 Ports
With the following steps you will create a T3 port at 30014 on all Managed Servers for soa_cluster, with these details:
Name: T3Channel_MS
Listen Port: 30014
External Listen Address: <Master IP Address>
External Listen Port: 30014
Note: If you are using a different NodePort to expose T3 for the Managed Servers externally, use the same value for External Listen Port.

Step 1: Create T3 channels for Managed Servers
WebLogic Server supports several ways to configure a T3 channel. The steps below describe how to create the channel using either the WebLogic Server Administration Console or a WLST script.

Method 1: Using the WebLogic Server Administration Console
1. Log in to the WebLogic Server Administration Console and obtain the configuration lock by clicking Lock & Edit.
2. In the left pane of the console, expand Environment and select Servers.
3. On the Servers page, click soa_server1 and go to the Protocols page.
4. Select Channels and then click New.
5. Enter the network channel name "T3Channel_MS", select protocol "t3", and click Next.
6. Enter Listen Port "30014", External Listen Address "<Master IP>", and External Listen Port "30014". Leave Listen Address empty. Click Finish to create the network channel for soa_server1.
7. Perform steps 3 to 6 for all Managed Servers in soa_cluster. When creating the network channel for the other Managed Servers, make sure to use the same values for all parameters, including the network channel name.
8. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
These changes do not require any server restarts. Once the T3 channels are created with port 30014, proceed with creating the Kubernetes Service to access this port externally.

Method 2: Using a WLST script
The following steps create a custom T3 channel named T3Channel_MS for all Managed Servers, with a listen port listen_port and a paired public port public_port.

Create t3config_ms.py with the content below:

host = sys.argv[1]
port = sys.argv[2]
user_name = sys.argv[3]
password = sys.argv[4]
listen_port = sys.argv[5]
public_port = sys.argv[6]
public_address = sys.argv[7]
managedNameBase = sys.argv[8]
ms_count = sys.argv[9]
print('custom host : [%s]' % host);
print('custom port : [%s]' % port);
print('custom user_name : [%s]' % user_name);
print('custom password : ********');
print('public address : [%s]' % public_address);
print('channel listen port : [%s]' % listen_port);
print('channel public listen port : [%s]' % public_port);
connect(user_name, password, 't3://' + host + ':' + port)
edit()
startEdit()
for index in range(0, int(ms_count)):
    cd('/')
    msIndex = index+1
    name = '%s%s' % (managedNameBase, msIndex)
    cd('Servers/%s/' % name )
    create('T3Channel_MS','NetworkAccessPoint')
    cd('NetworkAccessPoints/T3Channel_MS')
    set('Protocol','t3')
    set('ListenPort',int(listen_port))
    set('PublicPort',int(public_port))
    set('PublicAddress', public_address)
    print('Channel T3Channel_MS added ...for ' + name)
activate()
disconnect()

Copy t3config_ms.py into the domain home (e.g., /u01/oracle/user_projects/domains/soainfra) of the Administration Server pod (e.g., soainfra-adminserver in the soans namespace):

$ kubectl cp t3config_ms.py soans/soainfra-adminserver:/u01/oracle/user_projects/domains/soainfra

Execute wlst.sh t3config_ms.py by exec-ing into the Administration Server pod, with the following parameters:
host: <Master IP Address>
port: 30012 (Administration Server T3 port)
user_name: weblogic
password: Welcome1 (weblogic password)
listen_port: 30014 (new T3 port for Managed Servers)
public_port: 30014 (Kubernetes NodePort which will be used to expose the T3 port externally)
public_address: <Master IP Address>
managedNameBase: soa_server (Managed Server base name; for osb_cluster this will be osb_server)
ms_count: 5 (number of configured Managed Servers)

Command:
$ kubectl exec -it <Administration Server pod> -n <namespace> -- /u01/oracle/oracle_common/common/bin/wlst.sh <domain_home>/t3config_ms.py <master_ip> <t3 port on Administration Server> weblogic <password for weblogic> <t3 port on Managed Server> <t3 nodeport> <master_ip> <managedNameBase> <ms_count>

Sample command:
$ kubectl exec -it soainfra-adminserver -n soans -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/user_projects/domains/soainfra/t3config_ms.py xxx.xxx.xxx.xxx 30012 weblogic Welcome1 30014 30014 xxx.xxx.xxx.xxx soa_server 5

Step 2: Create a Kubernetes Service to expose T3 port 30014 as a NodePort service
Create t3_ms_svc.yaml with the content below to expose T3 at Managed Server port 30014 for domainName and domainUID "soainfra" and cluster "soa_cluster":

apiVersion: v1
kind: Service
metadata:
  name: soainfra-soa-cluster-t3-external
  namespace: soans
  labels:
    weblogic.clusterName: soa_cluster
    weblogic.domainName: soainfra
    weblogic.domainUID: soainfra
spec:
  type: NodePort
  selector:
    weblogic.clusterName: soa_cluster
    weblogic.domainName: soainfra
    weblogic.domainUID: soainfra
  ports:
  - name: t3port
    protocol: TCP
    port: 30014
    targetPort: 30014
    nodePort: 30014

Create the NodePort service for port 30014 with:

$ kubectl create -f t3_ms_svc.yaml

Now you can access T3 for the Managed Servers at the URL:
t3://<master_ip>:30014


Introducing the Box Adapter in Oracle Integration

As many of you may already know, at Oracle OpenWorld (OOW) 2019 a few months ago, we announced our partnership with Box to empower our customers to connect their cloud and on-premises Oracle and third-party applications with Box via Oracle Integration (OIC). Read the announcement from OOW here.

Box offers enterprises content management as a cloud service, enabling organizations to share files, collaborate between team members, and manage the lifecycle of content securely. As a cloud service, Box can scale as its customers' needs grow in size and complexity, including attaching custom metadata to content and watermarking content for review.

Today, we are pleased to announce the availability of the Box Adapter (in preview mode), which offers both inbound and outbound integration with Box on the Oracle Integration platform. Integration designers can now use this adapter in conjunction with the vast array of other adapters that provide connectivity to various technologies, cloud services, and on-premises applications.

The Box adapter supports outbound operations to Box that can manage files and folders as well as work with metadata for these artifacts. For managing files and folders, the adapter supports working with folders and their contents, uploading and downloading files, acquiring and updating shared links, and managing watermarks on files. For managing metadata, the adapter supports lifecycle management (creating, retrieving, updating, deleting) of metadata for files and folders, and it can read the custom metadata templates that a Box user may have created for their enterprise.

The Box adapter also supports inbound notifications from Box in the form of webhook notifications. During configuration, the adapter allows the Oracle Integration developer to choose the events that they would like to receive for a particular file or folder.

The Box adapter uses Box's OAuth 2.0 authentication and authorization framework to obtain the proper access token to execute the API securely. For more information about the OAuth 2.0 flows supported by Box and the steps to register an app to enable this authentication route, please visit https://developer.box.com/en/guides/authentication/. Of particular note, the Box adapter secures webhook notifications by verifying them against the signature keys that are used to sign every notification from Box, ensuring each notification is authentic. To ensure no downtime, Box employs a two-signature-key approach for signing webhook notifications: verifying either signature is sufficient to establish the authenticity of the message. Thus one key can be regenerated and changed in Oracle Integration while the remaining key still works, and the other key can be updated once the first has been, ensuring there is no downtime in notification verification. As usual, these credentials are saved securely in Oracle Integration.

Full details are in our Oracle Integration technical docs here.


A Simple Guide to Connect to a Private FTP Server using FTP adapter

You can now integrate with an FTP server even when it is in a private network and not publicly accessible. This is made possible with the latest feature, where a Connectivity Agent can be configured for use with the FTP adapter. The FTP adapter supports connectivity to:
FTP/SFTP hosted on-premises - through a connectivity agent
FTP/SFTP hosted in the cloud - without a connectivity agent, as before

Connection Properties:
Enter the FTP/SFTP host address and port.
If using a secure FTP server, select Yes for 'SFTP Connection'; otherwise select No.

Security: Select one of the security policies:
FTP Server Access Policy: for username/password authentication.
FTP Public Key Authentication: as the name suggests, for public key authentication.
FTP Multi Level Authentication: to authenticate using both username/password and a public key.

Configure Connectivity Agents: If the FTP server is not directly accessible from Oracle Integration, for example if it is on-premises or behind a firewall, a Connectivity Agent needs to be configured for the connection. This can be done in the 'Configure Agents' section. A Connectivity Agent may not be required when the FTP server is publicly accessible. To learn more about the Connectivity Agent, check out these posts: New Agent Simplifies Cloud to On-premises Integration, and The Power of High Availability Connectivity Agent.

File size support in the FTP adapter with an agent:
1) With schema - if using a schema for transformation, the file size limit is 10 MB.
2) Without schema - the file size limit is 1 GB. For example, the Download File operation does not support a schema and can send a file of up to 1 GB. This may take time, considering the network latency between the Connectivity Agent and OIC.

Limitations when the FTP adapter is configured with the connectivity agent:
PGP encryption/decryption is not supported.
Unzip during the Download File operation is not supported.
(You could use the Stage File activity for these purposes if needed.)
The FTP adapter is not supported as a trigger in the Basic Integration template.


Deploying SOA Composites from Oracle JDeveloper to Oracle SOA in WebLogic Kubernetes Operator Environment

Inputs provided by Ashageeta Rao and Vivek Raj

This post provides steps to deploy Oracle SOA composites/applications from Oracle JDeveloper (running outside the Kubernetes network) to a SOA instance in a WebLogic Kubernetes Operator environment.

Prerequisites
Note: Replace entries inside <xxxx> with values specific to your environment.
1. Get the Kubernetes cluster master address and verify the T3 port, which will be used for creating application server connections. You can use the following kubectl command to get the T3 port:
kubectl get service <domainUID>-<AdministrationServerName>-external -n <namespace> -o jsonpath='{.spec.ports[0].nodePort}'
2. JDeveloper needs to access the Managed Servers during deployment. In a WebLogic Operator environment, each Managed Server is a pod and cannot be accessed directly by JDeveloper, so we need to configure the Managed Servers' reachability:
Decide on an external IP address to be used to configure access to the Managed Servers (the SOA cluster). The master or a worker node IP address can be used. If you decide to use some other external IP address, it needs to be accessible from the Kubernetes cluster. Here we will use the Kubernetes cluster master IP.
Get the pod names of the Administration and Managed Servers (i.e., "<domainUID>-<server name>"), which will be mapped in /etc/hosts.
Update /etc/hosts (or, on Windows, C:\Windows\System32\Drivers\etc\hosts) on the host where JDeveloper is running with the entries below:
<Master IP> <Administration Server pod name>
<Master IP> <Managed Server1 pod name>
<Master IP> <Managed Server2 pod name>
Get the Kubernetes service name of the SOA cluster so that we can make it accessible externally via the master IP (or external IP):
$ kubectl get service <domainUID>-cluster-<soa-cluster> -n <namespace>
Create a Kubernetes service to expose the SOA cluster service ("<domainUID>-cluster-<soa-cluster>") externally, with the same port as the Managed Server:
$ kubectl expose service <domainUID>-cluster-<soa-cluster> --name <domainUID>-<soa-cluster>-ext --external-ip=<Master IP> -n <namespace>
NOTE: The Managed Server T3 port is not exposed by default, and opening it carries a security risk, as the authentication method here is based on a userid/password. Doing this on production instances is not recommended.
3. To deploy SOA composites/applications from Oracle JDeveloper, the Administration Server should have been configured to expose a T3 channel using the exposeAdminT3Channel setting when creating the domain; the matching T3 service can then be used to connect. By default, when exposeAdminT3Channel is set, the WebLogic Kubernetes Operator environment exposes a NodePort for the T3 channel of the NetworkAccessPoint at 30012 (use t3ChannelPort to configure a different port).

Create an Application Server Connection in JDeveloper
Create a new application server connection in JDeveloper.
On the configuration page, provide the WebLogic hostname as the Kubernetes master address.
Update the port to the T3 port (default 30012) obtained in prerequisite step 1.
Enter the WebLogic domain, i.e., the domainUID.
Test the connection; it should succeed without any error.

Deployment of SOA Composites to SOA using JDeveloper
In JDeveloper, right-click the SOA project you want to deploy and select the Deploy menu; this invokes the deployment wizard.
In the deployment wizard, select the application server connection that was created earlier.
If the prerequisites have been configured correctly, the next step looks up the SOA servers and shows the Managed Servers for deploying the composite. Using the application server connection, the Managed Servers (SOA cluster) are discovered and listed on the select servers page. Select the SOA cluster and click Next.
On the Summary page, click Finish to start deploying the composites to the SOA cluster.
Once deployment is successful, verify with the soa-infra URL to confirm the composites are deployed on both servers.


One Stop Solution for OIC Certificate Management

Oracle Integration Certificate Management empowers administrators to manage all their certificates and PGP keys in one place. The PGP keys are used in the Stage File action for encryption and decryption.

Prerequisite
Enable the following feature flag: oic.suite.settings.certificate (suite-level certificate landing page).
To enable feature flags, refer to the blog on Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 190924.1600.31522.

Simplified and Progressive User Experience
OIC provides the user with an easy tool for managing the life cycle of certificates, through the Certificates page under the Settings menu.

Sorting and filter capabilities:
Sort by expiry date in ascending or descending order: Expiring Soon, Expiring Later.
Filter by: Status, Type, Category, and Installed By. By default the table is loaded with the Installed by User filter.
The Certificates page has a progressive UI, and certificate details are shown with better grouping of information.

Key Functionalities
All functionalities on the page are displayed in a list view, with seamless interaction with the details drawer.

Types of certificates:
X.509 (TLS) – an SSL/TLS X.509 certificate is a digital file usable for Secure Sockets Layer (SSL) or Transport Layer Security (TLS). The certificate helps authenticate and verify the identity of a host or site, enabling Oracle Integration to connect to external services. Its categories are Identity (e.g., .jks) – a keystore which can contain various certificates with passwords – and Trust (e.g., .crt or .cert).
SAML – SAML refers to the XML variant language used to encode information. This is a Message Protection certificate with SAML token support.
PGP – Pretty Good Privacy (PGP) is used for signing, encrypting, and decrypting text. Private: content can be decrypted with the private PGP key. Public: content can be encrypted with the public PGP key.

Certificate upload, step by step:
1. Click Upload in the top-right corner. A drawer opens with the details to fill in.
2. Enter an alias name which identifies the certificate.
3. Give a brief description (optional) of the certificate you are uploading.
4. Select the type of certificate you want to upload: X.509, SAML, or PGP.
5. Choose the category of the certificate: for X.509 – Trust or Identity; for SAML – Message Protection; for PGP – Public or Private.
6. Choose a file from your local system to upload. Note: this can be left blank for PGP, in which case a draft certificate is created. For PGP upload and usage, refer to: https://blogs.oracle.com/integration/using-stage-file-readwrite-operation-to-encryptdecrypt-files
For identity certificates (refer to the screenshot below): in the Alias Name field, enter a key name; for multiple keys, enter them as comma-separated strings. For Key Passwords, give the corresponding set of comma-separated passwords for the keys named in the alias field, in the same order. Also provide the password for the uploaded keystore.

Certificate table columns:
Name: alias name provided for the certificate; for an identity certificate, this is the key name.
Type: type of the certificate uploaded (X.509, SAML, PGP).
Category: uploaded certificate category (Trust, Identity, Message Protection, Public, Private).
Status: status of the certificate, either Draft or Configured.
Certificate Expiry Tag: displays the time in which the certificate will expire; expired certificates are highlighted in red.


Configurators - One stop solution for all your dependency configuration needs

For those of you already familiar with the blog for Integration dependency configuration, we have something better to offer. The previous blog talks about replacing a connection dependency in the integration, with the another connection resource of the same role using Rest apis. We know how tedious handling rest apis can get, hence we have now come up with a snazzy UI do the same operation. This feature is available at the integration level and it has also been extended to work at Package level.   Lets take a look at the Configurator, in detail in the upcoming section Configurator Prerequisite The minimum Oracle Integration version required for the feature is 36410. Accessing the Configurator As mentioned earlier, configurator is supported at Integration Level as well as at Package Level. Integration Level :  If you configure an integration belonging to a package, configurator will open up in a mode that lists all the integrations belonging to the package along with its dependent resources. It will also highlight the integration that clicked to configure.  If you configure an integration that does not belong to any package, configurator will open up in a mode that lists only the selected Integration and its dependent resources. Package Level :  If you configure a package, configurator will list all the integrations belonging to the package along with its dependent resources. You can access the Configurator -  At Integration level:  In the 'Import Integration' popup, the user will now have an option to 'Import and Configure'. Clicking on which will import the integration/ and then redirect you to the configurator page.                                                                On the integration landing page, user can open the action menu and select the Configure option which will redirect you to the configurator page.                              At Package level:  On the package landing page, user can open the action menu and select the Configure option which will redirect you to the configurator page. On the package landing page, click on the wrench icon you see on hover of a package to be redirected to the configurator page. Using the Configurator Now, lets take a look at the example below and understand the details of using the configurator. The configurator can be broadly classified into these 3 unique functionalities :  List of Dependent Resources Configurator lists the dependent resources present in a specific integration or all the integrations in a specific package. Dependent Resources include Connections, Lookups, Libraries and PGP keys. At Package Level, in addition to the above, it will include a list of integrations as well. It also displays the corresponding related information of each of the dependent resource (like Connection, Lookup etc) which includes status and usage information.  Status is the status of the resource - Draft, Configured etc At Package Level, the usage information gives us info about how many integrations within the package are using this particular resource At Package Level, in addition to the above, it will include integration information such as Integration Style and Status In this example of configuring a package, pkg_configurator, all the resources used in the package are listed below. We can see that there are 3 Integrations, 2 Connections, 1 Lookup and 1 Library and 2 Certificates being used.   We have also added sticky horizontal navigator at the top which will help user easily navigate to the dependent resource he is interested in. 
User Actions for Each Resource

On hover of each resource, the user is presented with different options: Edit and Replace (for Connections and Certificates), only Edit (for Lookups and Libraries), and Add Schedule, Schedule and Update Integration Properties (for Integrations).

Edit: On click, the user is navigated to the corresponding edit page of the resource, for example the Connection edit page or the PGP Key edit page. The Edit action is supported for all resources, i.e. Connections, Lookups, Libraries and PGP keys.

Replace: On click, the user sees a popup containing connections of the same role as the original connection and can choose the connection to replace it with. The change is kept in memory so the user can move on to making other changes; all the changes are collectively saved when the user clicks the Done button at the top right. The Replace action is supported only for Connections and PGP keys. At the package level, replacing a resource replaces it across all the integrations in the package.

Revert: Once the user has replaced a connection or certificate, they may wish to revert it by clicking the Revert button next to Replace. Revert undoes the replace change and takes the dependent resource back to the original.

Add/Manage Schedule: On click of Add Schedule, the user is navigated to the create schedule page with a prepopulated schedule. The user can save the schedule and navigate back to the Configurator page via the Manage Schedule page. On click of Schedule, the user is navigated to the Manage Schedule page, where they can perform actions such as Start Schedule, Stop Schedule and Delete Schedule.

Update Integration Properties: On click of Update Property Values, a drawer opens inline where the user can update any previously added integration properties. This action is supported only for integrations that already have integration properties configured.

Details Supported for Each Resource

Users can get more information about any resource by clicking the expand button in the row. On click, a details section is revealed with the information captured in the screenshot.

So this has been a brief explanation of the Configurator. Hope you enjoy using it as much as we enjoyed building it!

Accelerate API Integration With the Power of OpenAPI

The OpenAPI Specification defines a standard, programming-language-agnostic interface description for REST APIs. We are pleased to announce OpenAPI support in the Oracle Integration REST adapter. This means that all OIC integration flows with a REST trigger will publish an OpenAPI document describing their metadata. This machine-readable description of the API allows interactive API explorers, as well as developers, to consume OIC integration flows with ease. On activation of an integration flow, the metadata link shows a new option to display the OpenAPI in addition to the Swagger URL.

We are also introducing an interactive OpenAPI explorer in the Oracle Integration REST adapter wizard that helps integration developers explore and consume APIs described in OpenAPI format with a few clicks. This option can be selected to provide an OpenAPI 1.0/2.0 (a.k.a. Swagger) as well as an OpenAPI 3.0 spec.

Steps to consume an API described in OpenAPI:
1. Create a REST Adapter connection with the Invoke role, select the new option, and provide the link to the OpenAPI in the 'Connection URL'.
2. Use this connection within an integration flow. The REST adapter wizard shows an API explorer based on the OpenAPI description provided in the connection.
3. Browse and select the required path and operation to complete the wizard.

OpenAPI is a rich specification, and currently some of its constructs cannot be consumed. Please consult the Oracle documentation for information about the constraints.
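The published document itself can be fetched directly and fed to any OpenAPI tooling. A minimal sketch follows; the exact URL comes from the integration's activation metadata link, so the path below is illustrative:

# Fetch the OpenAPI document of an activated flow (illustrative URL;
# take the real one from the activation metadata link).
curl -u "$OIC_USER:$OIC_PASSWORD" \
  "https://<oic-host>/ic/api/integration/v1/flows/rest/MY_FLOW/1.0/metadata/openapi"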

Why and How to Integrate Oracle Policy Automation with Oracle Integration

Oracle Integration has recently introduced new functionalities to extend its connection capabilities. I'm especially talking about the enhanced Oracle Policy Automation adapter, which is often used in integration projects when it's required to extend SaaS applications. The Oracle Policy Automation (OPA) adapter is available in Oracle Integration to address different scenarios, allowing OPA decisions to be invoked at any point in an integration flow. For example, when a Service Cloud incident is created for a medical device manufacturer (for instance, a hearing aid), an integration instance can be triggered in which OPA is used to find out what to do with this particular type of incident and to route the managed information properly. The integration layer (in this case OIC) performs all the necessary actions: saving the decision somewhere, invoking other processes / web services, or pushing data to multiple cloud and/or on-premises applications. By using Oracle Integration we can easily integrate the OPA decision logic into any enterprise application without the need to build a custom connector.

Here are some use cases where OPA is used in an integration scenario:
- Auto-triage incidents to ensure service-level agreements are met
- Calculate benefit payments using data stored in a legacy system
- Recalculate leave entitlements when regulations change
- Calculate complex sales commissions

The Oracle Integration adapter today enables bidirectional communications, allowing both inbound and outbound communication patterns. The Oracle Policy Automation (OPA) adapter is now available in Oracle Integration (OIC) to:
- Allow OPA web interviews to trigger Oracle Integration integrations as the endpoints for data operations.
- Allow OPA decision assessments to be invoked at any point in an integration.

After accessing your Oracle Integration instance, you can select the OPA adapter from the palette and configure it with the role type required by your project (invoke, trigger or both). Once the connection is created, you can reuse it in every integration flow.

How to Encrypt/Decrypt Files in OIC

Encrypt/Decrypt capabilities in Stage Files

You may have a scenario where the requirement is to retrieve an encrypted file from an sFTP server and send it to an external REST endpoint in encrypted or unencrypted mode, with additional processing in the middle. The Stage File action in the integration canvas supports various file operations (list/read/write/zip/unzip). The existing OIC feature (oic.ics.stagefile.pgp.key.support) enables a decrypt option while reading an entire file and an encrypt option while writing a file. That feature can process files up to 10 MB in size and does not support decryption when reading a file in segments. For more details, see the blog: Using Stage File Read/Write operation to encrypt/decrypt files.

This blog explains the new feature (oic.ics.stagefile.firstclass.encrypt-decrypt), which allows OIC users to encrypt or decrypt files of up to 1 GB.

Prerequisite

Enable the following features:
- oic.suite.settings.certificate (allows the user to manage the certificate life cycle in OIC)
- oic.ics.stagefile.firstclass.encrypt-decrypt (allows the user to encrypt or decrypt a large file in the Stage File action)

To enable feature flags, refer to the blog on Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 191216.1400.33050.

Step By Step Guide

Upload PGP public/private keys: refer to "To upload PGP Keys" in the blog mentioned above.

To configure the Stage Encrypt File action with a PGP key to encrypt a file:
1. Drag and drop the Stage File action. A popup wizard opens where you provide a value for the field "What do you want to call your action?".
2. Click Next and select "Encrypt File" as the Stage File operation.
3. Specify the File Reference - click the Expression Builder icon to build an expression for the file reference.
4. Specify the File Name - click the Expression Builder icon to build an expression for the file name.
5. Specify the Output Directory - click the Expression Builder icon to build an expression for the output directory.
6. Select the PGP key to encrypt the file - this is the PGP public key you uploaded at the beginning.
7. Click Next to display the summary page, then click Done.

To configure the Stage Decrypt File operation with a PGP key to decrypt a file:
1. Drag and drop the Stage File action. A popup wizard opens where you provide a value for the field "What do you want to call your action?".
2. Click Next and select "Decrypt File" as the Stage File operation.
3. Specify the File Reference - click the Expression Builder icon to build an expression for the file reference.
4. Specify the File Name - click the Expression Builder icon to build an expression for the file name.
5. Specify the Output Directory - click the Expression Builder icon to build an expression for the output directory.
6. Select the PGP key to decrypt the file - select the PGP private key.
7. Click Next to display the summary page, then click Done.

Samples

Stage Encrypt File Integration to encrypt a file (IAR). This integration:
- Encrypts and writes the file to the stage location using the Stage Encrypt File operation with the PGP public key.
- Writes the encrypted file to the output directory from the stage location.
Stage Decrypt File Integration to decrypt an encrypted file (IAR). This integration:
- Reads and decrypts the downloaded file using the Stage Decrypt File operation with the PGP private key.
- Writes the decrypted file to the output directory from the stage location.

Using Stage File Read/Write operation to encrypt/decrypt files

You may have a scenario where the requirement is to retrieve an encrypted file from an sFTP server and send it to an external REST endpoint in encrypted or unencrypted mode, with additional processing in the middle. The new feature makes it easy to configure PGP keys in the Stage File Read/Write operations to decrypt/encrypt files up to 10 MB in size.

Prerequisite

Enable the following features:
- oic.suite.settings.certificate (allows the user to manage the certificate life cycle in OIC)
- oic.ics.stagefile.pgp.key.support (allows the user to upload and delete PGP keys for the Stage File action)

To enable feature flags, refer to the blog on Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 190904.0200.31130.

Step By Step Guide

The public key is used for encryption and the private key for decryption. To encrypt/decrypt files, we first have to upload PGP keys to OIC.

To upload PGP keys:
1. From the OIC home page, go to Settings → Certificates.
2. Click Upload at the top of the page.
3. In the Upload Certificate dialog box, select the certificate type. Each certificate type enables Oracle Integration Cloud to connect with external services. PGP: use this option for bringing in PGP certificates.

Public key:
- Enter an Alias Name and Description.
- Select Type as PGP.
- Select Category as Public.
- For PGP File, click Browse and select the public key file to be uploaded.
- Select the ASCII-Armor Encryption Format.
- Select the Cipher Algorithm.
- Click Upload.

Private key:
- Enter an Alias Name and Description.
- Select Type as PGP.
- Select Category as Private.
- For PGP File, click Browse and select the private key file to be uploaded.
- Enter the PGP Private Key Password of the private key being imported.
- Click Upload.

You can download the encrypted file to the stage location using the FTP Download File operation. To configure the FTP Adapter Download File operation:
- Select Download File.
- Specify the input directory and the download directory path (this path will be the input directory for the stage read file).

You can then use the Stage File action's Read File operation to decrypt the encrypted file so it can be read and transformed. To configure the Stage Read Entire File operation with a PGP key to decrypt the file:
- Select Read Entire File.
- Configure File Reference - select Yes.
- Specify the File Reference - click the Expression Builder icon to build an expression for the file reference.
- Decrypt - check this option to decrypt the file (the Decrypt check box enables PGP selection).
- Select PGP Key - select the PGP private key to decrypt the file.

After the transformation, you can use the Stage File action's Write File operation to re-encrypt it. To configure the Stage Write File operation with a PGP key to encrypt the file:
- Select Write File.
- Specify the File Name - click the Expression Builder icon to build an expression for the file name.
- Specify the Output Directory - click the Expression Builder icon to build an expression for the output directory.
- Encrypt - check this option to encrypt the file (the Encrypt check box enables PGP selection).
- Select PGP Key - select the PGP public key to encrypt the file.

The encrypted file can then be sent to an external endpoint or sFTP server. To configure the FTP Adapter Write File operation:
- Select Write File.
- Specify the directory path to which to transfer files.
- Select the pattern name for files to transfer.
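If you do not already have a PGP key pair to upload in the steps above, a standard GnuPG pair works for testing; a minimal sketch with a placeholder identity:

# Generate a key pair interactively (RSA with 2048+ bits is typical).
gpg --gen-key

# Export the public key, ASCII-armored, for upload with Category "Public".
gpg --export --armor integration@example.com > oic_public.asc

# Export the private key for upload with Category "Private"; keep the
# passphrase handy, as OIC asks for it as the PGP Private Key Password.
gpg --export-secret-keys --armor integration@example.com > oic_private.asc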
Samples

Stage Write File Integration to encrypt a file (IAR). This integration:
- Downloads the input file from the input directory to the stage location.
- Reads the downloaded file using the Stage Read Entire File operation with a file reference.
- Encrypts and writes the file to the stage location using the Stage Write File operation with the Encrypt option and the PGP public key.
- Writes the encrypted file to the output directory from the stage location.

Stage Read File Integration to decrypt an encrypted file (IAR). This integration:
- Downloads the stage-encrypted file from the input directory to the stage location.
- Reads and decrypts the downloaded file using the Stage Read Entire File operation with a file reference, the Decrypt option and the PGP private key.
- Writes the decrypted file to the stage location using the Stage Write File operation.
- Writes the decrypted file to the output directory from the stage location.

How To Configure an Integration Flow with Binary Content Using the REST Adapter as Trigger in Oracle Integration

Binary Content Type Support in the Oracle Integration REST Adapter Trigger

Introduction: The Oracle Integration REST Adapter now supports application/octet-stream in the trigger request and response. With this new capability, it is possible to invoke an integration with binary content over REST. Similarly, an integration flow can return binary content in response to a request over REST. Before we do a deep dive on the feature, let us start by understanding what application/octet-stream is.

About "application/octet-stream" MIME attachments: A MIME attachment with the content type "application/octet-stream" is a binary file. Typically, it is an application or a document that must be opened in an application, such as a spreadsheet or word processor. If the attachment has a filename extension associated with it, you may be able to tell what kind of file it is. A .exe extension, for example, indicates a Windows or DOS program (executable), while a file ending in .doc is probably meant to be opened in Microsoft Word. No matter what kind of file it is, an application/octet-stream attachment is rarely viewable in an email or web client. If you are using a workstation-based client, such as Thunderbird or Outlook, the application should be able to extract and download the attachment automatically. After downloading an attachment through any of these methods, you must then open the attachment in the appropriate application to view its contents.

How is this leveraged in the Oracle Integration REST Adapter: This feature allows a client application to send and receive structured or unstructured content as raw data / a stream of characters to and from an Oracle Integration flow. Customers can send files like PDF, CSV and JPG to Oracle Integration for processing, and integration flows can be modeled to send files like PDF, CSV and JPG back to client applications in the response. There is no restriction on the content type: the request and response are not parsed, but written as a byte stream along with the appropriate header. The user has to select the binary option in the REST Adapter trigger wizard request to consume a binary content file sent by the client application; similarly, the user has to select the binary option in the REST Adapter invoke wizard response to produce binary content for a client application.

Oracle Integration REST Adapter Trigger configuration and Mapping:

Media types supported in the new feature: Oracle Integration now has the capability to expose a REST endpoint which can consume and produce octet-stream media types. The following image shows the media types supported in the REST configuration wizard.

Other media type selection: If users want a type that is not available in the drop-down, they can select the other-media-types entry in the drop-down and provide the media type name, as shown in the following screen.

Mapping elements for octet-stream: Once the trigger and invoke are configured with binary using the Oracle Integration REST connection, the mapping activity comes with a single element (Stream Reference). This stream reference accepts a file without any schema, because the request and response can carry any kind of data in any format.
The mapping exposes a "StreamReference" attribute which takes the binary content reference used to pass the binary data to and from the client application; the following image shows the mapping elements in both the request and the response of a REST trigger.

Publishing a REST endpoint of type Swagger and OpenAPI which has octet-stream in the REST trigger: The REST endpoint metadata (Swagger and OpenAPI) exposed by Oracle Integration, which consumes and produces a binary type, can be used in any client (for example, a Swagger editor or a Postman client), and we can use this metadata to create a request.
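As a sketch of such a client call, assuming a hypothetical activated flow URL (the real one is shown on the integration's metadata page):

# Send a PDF as a raw byte stream to an integration with a binary REST trigger.
# Flow path is illustrative.
curl -u "$OIC_USER:$OIC_PASSWORD" -X POST \
  -H 'Content-Type: application/octet-stream' \
  --data-binary @invoice.pdf \
  -o response.bin \
  "https://<oic-host>/ic/api/integration/v1/flows/rest/BINARY_FLOW/1.0/documents"

Since the response can itself be a byte stream, -o writes it straight to a file instead of the terminal.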

Integration Pages - Progressive Web App UI Experience

Pre-requisites

From June 8 these pages will be part of the standard OIC UI.

What's New

The new UI is built using Oracle JavaScript Extension Toolkit (Oracle JET), utilizing the full benefits of JavaScript, CSS3 and HTML5 design and development principles. This new UI is compliant with the latest UX standards and offers a consistent user experience across the Integration designer pages. The following are the highlights of the new features and enhancements included in the new UI:
- Single configuration UI to view and configure dependent resources of an integration.
- Inline edit of connection resource configuration.
- Lookup editor enhancements for ease of editing large table sets using paging controls.
- Single-click callout library configuration.
- Pre-filtered results on the resource list page based on the logged-in username.
- Enhancements on the resource list page to filter by created and last-updated username.
- Progressive loading of UI contents.
- Continuous and smooth scrolling of resource list contents.
- Ease of accessing primary actions on the resource list page.

The Monitoring and Scheduling Integration pages in OIC have undergone similar changes. Go to Integration Monitoring and Scheduling pages - Progressive Web App UI Experience for more information.

Changes in the top-level menu structure

The top-level menu in the Suite UI has been restructured to make way for the new UI. When the logged-in user has access to designer pages, a click on the 'Integrations' menu now opens a sub-menu with links to designer landing pages instead of a page redirection. Clicking the sub-menu links displays the respective landing page. Let's see the behavior explained through screens.

Existing Oracle Integration home page menu structure: Clicking on the 'Integrations' menu redirects a user to the Integration pages through a complete browser refresh. A switch back to the home page is done by clicking on the 'Home' icon, which also does a full page refresh.

New menu structure: Integrations (designer pages), Monitoring, and Settings become top-level menus in the new Suite home page.

Integrations: All the designer pages are moved under the 'Integrations' menu. As mentioned above, the 'Integrations' menu item in the OIC Suite home has a drill-down option, as indicated by the arrow next to the menu name. Clicking on 'Integrations' slides out the existing menu and shows the respective Integration features. Clicking on features/pages loads the respective content on the right side without refreshing the page. To go back to the home page menu items, users can click on the left-pointing arrow in the header section.

Monitoring: Users can access all the Integration monitoring pages under the 'Integration' sub-menu of the top-level 'Monitoring' menu.

Settings: Users can access all the Integration-specific settings pages under the 'Integration' sub-menu of the top-level 'Settings' menu.

Designer UI Experience

Common list view experience: The designer pages follow the new common list view pattern, which combines the toolbar and table view to fulfill the complete functionality of the integration designer resources. The toolbar includes features like searching, sorting and filtering, along with showing applied filter and sorting details, a refresh option and summary text.
The table view has a number of unique features; one of them is to show only the required, minimal data on a row, with secondary or extra details shown in the detail view section of that row. On mouse-over, the table row shows an overlay that includes the primary action buttons, the action menu and the open/close detail view control.

Toolbar Features

Searching: On the toolbar, the search icon appears leftmost, and the search input text field opens on clicking the search icon. The input shows placeholder text depending on the type of resource list page. Typing a name and pressing the Enter key returns the list of resources, applying a search-by-contains criterion. A cross icon next to the search input field clears and closes the search input and returns the result without the search criterion.

Filtering: On the toolbar, the filter icon appears next to the search icon. The filter options are not shown on the toolbar; they all appear in a popup launched by clicking the filter icon. The filter popup has one or more filter options of different types, which makes it easy to choose the desired filtering and sorting to apply to the list.

Sorting: The sorting menu is not available separately on the toolbar; it is part of the filter popup explained in the section above. The sorting options are applied together with the filter options by the 'Apply' action on the filter popup.

Filter and Sorting Details: The filter and sorting detail section shows all the filter and sort options applied, rendered as key-value pairs of applied filters and sorters. Each key-value pair has a cross icon to remove the filter or sorter individually. A 'Clear' option at the end of all the filter and sorter details clears all of them.

Summary Detail: The summary detail section shows the total number of resources resulting from the applied search and filters, and how many resources are rendered and viewable in the list view port.

Default Filters: A default filter has been introduced on the Integrations, Connections, Lookups and Libraries designer listing pages: an updatedBy criterion with the logged-in username. This filter is applied on a new browser session and is preserved while navigating to different pages, and even on subsequent logins in the same browser session. Once the updatedBy filter is removed, the default filter is no longer applied, even on subsequent logins in the same browser session.

Accessing Actions: Actions are divided into primary and secondary actions, accessible on an overlay actions panel rendered on mouse-over of a row. On the actions panel there are at most two primary actions, based on the current status of the resource; the remaining, secondary actions are rendered as menu items launched from the hamburger icon on the actions panel.

Detail View Section: The minimal, required details of a resource are shown in the individual row, and the complete or extra details are shown in its details section, which can be opened via the open-details icon on the action overlay panel.
Editor Experience

Connection Editor: The connection editor page is used to configure connection properties and the security policy to be used in the orchestration canvas. In the new UI experience, the user can easily configure the endpoint information without going through multiple pop-ups.

Lookup Editor: The lookup editor page is used to store mappings of various values used by different applications, so they can be used in integrations for auto-mapping. Navigating across the mappings of a large table is fast using the paging control.

Library Editor: The library editor page is used to configure functions to be used in the orchestration canvas, and also to export/import metadata. In the new UI experience, we make it easier to configure a function with just a single click. Users can simply select the checkbox next to a function in the left panel and it automatically gets configured for both Orchestration and XPath types. Users can modify the param types and save. In the improved editor page, all the files/functions that are part of the registered library are exposed in the 'Functions' panel; click the checkbox next to a function and you are set to use it as an orchestration callout or in XPath.

Timezone Preference: By default, designer pages are displayed based on the browser's timezone. To override the default and set a preferred timezone, use the Preferences menu under the User menu. When the timezone preference is saved, the page refreshes to display contents based on the timezone set in preferences. The change in timezone is saved in cookies and persists across browser sessions.

Report Incident: While in the designer pages, clicking the Report Incident menu brings up the Report Incident dialog. Make the relevant entries and select Create to report an incident.

New Certificate Management: See the OIC Certificate Management page for more information on certificate management in OIC. Also, for more information on the use of user-defined PGP public and private certificates for encryption and decryption, see the blog: Using Stage File Read/Write operation to encrypt/decrypt files.

New Features

Integration Configuration: New functionality and a single UI to perform various operations for an integration:
- View all dependent resources for the integration.
- View the usage and status information of dependent resources.
- Configure dependent resources and persist the changes.
- Perform all the above operations during import, or later after import.

Calling JD Edwards Orchestrations from Oracle Integration: Quickly and Easily

Background

JD Edwards orchestrations empower an army of citizen developers and business analysts to design REST APIs for business applications without writing a single line of code. JD Edwards orchestrations expose business process steps, tied together graphically, through the robust semantics of REST standards. They are a great way to simplify, integrate and automate repeated tasks through digital technologies. JD Edwards orchestrations are executed on the AIS Server but are designed via a tool called Orchestrator Studio; with JD Edwards tools release 9.2.4.0, Orchestrator Studio is also part of the AIS Server, which further simplifies its deployment. Orchestrator Studio is a low-code/no-code tool that allows business analysts to leverage their knowledge of business applications, create a flow using a series of application tasks/steps, and expose it as a REST endpoint. As JD Edwards AIS/orchestrations gain traction and momentum, they have become the tool of choice for JD Edwards customers looking to integrate JD Edwards with Cloud SaaS applications, PaaS services or any other on-premises applications.

Sample Use Case

Consider that you want to return all sales orders that are at a particular status, let's say 540, i.e. Print Pick, in the JD Edwards Sales Order application. This means you need a REST endpoint that takes the sales order status as an input, with the default being 540, and returns an array of JD Edwards sales orders in the response. This information can be consumed to update any other third-party application with the status of the orders, or simply consumed from a process application to show the list of orders.

Basic Ingredients

- JD Edwards installation with the below components (the latest EnterpriseOne Tools release is recommended, the minimum being 9.2.1.x):
  - Orchestrator Studio
  - AIS Server
- OIC Agent (if JD Edwards is installed on-premises or is not directly accessible from Oracle Integration)
- JD Edwards artefacts: a JD Edwards orchestration to query orders at a particular status, as described above
- OIC instance

Steps

Creating a connection with the JD Edwards AIS Server:
1. Log in to the OIC instance.
2. Go to the Integrations page.
3. Go to the Connections page and click the "Create" button.
4. Search for the REST adapter and click Select.
5. Give the connection a meaningful name and identifier and click OK; take note of this name, you will need it later.
6. Provide information for the following fields:
   - Connection Type: select "REST API Base URL" from the drop-down.
   - Connection URL: enter the AIS Server host and port as per your configuration, http://<aisserver>:<port>/jderest
   - Select Basic Authentication as the security policy and enter the user name and password.
   - If required, click the "Configure Agents" button and select the respective agent.
7. Click Test to test the connection, then click Save to save it.

Creating the integration flow:
1. Go to the main Integrations page and click the "Create" button.
2. Select "App Driven Orchestration" as the integration style; you can select the "Scheduled Orchestration" style instead, based on your business requirements.
3. Give the integration a desired name.
4. For simplicity and demo purposes I am adding a REST trigger to this integration: click the "+" button and select "Sample REST Endpoint" as shown below.
5. Give your endpoint a meaningful name and click the Next button.
6. Define the endpoint's relative resource URI; note this is the OIC resource URI, so you can give any meaningful resource URI you desire.
7. Select the options to add/review request parameters as well as the response, as shown below.
8. Add the desired parameters for the OIC orchestration; here I am adding OrderStatus as discussed above, though this can be much more than order status.
9. Select a JSON payload response and click the enter sample JSON <<<inline>>> link. Add the response of the OIC orchestration to hold an array of orders with the fields order number, customer number and customer name.
10. Click the "Ok" button, then the "Next" button.
11. Click the "Done" button and save the endpoint; the summary should look like below.
12. Hover over the integration flow on the canvas and click the plus icon; you will be prompted to select the desired connection. Type the initials of the connection created earlier and the connections will be filtered based on the text. Select the connection you created earlier.
13. The "Configure REST Endpoint" wizard is shown; follow the wizard to configure the JD Edwards orchestration invoke through the REST adapter. Provide information for the following fields (refer to the picture below for more details):
    - What do you want to call your endpoint? "GetShippedSalesOrders".
    - What is the endpoint's relative resource URI? "/v3/orchestrator/JDEGetSalesOrders". Note that "v3" in this URI depends on your orchestration version; refer to this link for more information. In this URI, "/JDEGetSalesOrders" is an orchestration name; you can replace it with your own orchestration.
    - What action do you want to perform on the endpoint? Select "POST" from the drop-down.
    - Turn on "Configure a request payload for this endpoint".
    - Turn on "Configure this endpoint to receive the response".
14. Click the "Next" button.
15. Select JSON as the request payload format and click enter sample JSON <<<inline>>>; my sample payload is below. Enter your sample JSON request for the configured orchestration, as shown below. Click the "Ok" button, then the "Next" button.
16. In the response payload screen, likewise select JSON as the response payload format and click enter sample JSON <<<inline>>>. Paste your orchestration response here; you might get an error related to empty array notations, as shown below. A JD Edwards orchestration response typically includes empty array notations; replace those "[]" empty array notations with null, as shown below.
17. Click the "Next" button, then the "Done" button, and save the invoke operation.

Now that the JD Edwards orchestration has been added to the canvas, OIC has added two mappers: one to map parameters from the OIC flow to the JDE orchestration, and one to map the response of the JDE orchestration to the OIC flow response.

18. Click the edit icon of the request mapper added by OIC between the trigger and the JD Edwards REST orchestration invoke, as shown below. Map the parameters as shown below; note that here we are mapping a parameter from the trigger REST call to the JD Edwards orchestration.
19. Click the edit icon of the response mapping from the JDE orchestration to the OIC orchestration, by hovering over the mapper added by OIC between the JD Edwards REST orchestration invoke and the OIC flow terminal, as shown below. Here we map the response from the JD Edwards orchestration to the main integration response. Note the complexity of the structure of the JD Edwards orchestration response; this is because this particular orchestration is sending application grid information in its response. We will simplify the response structure using the Oracle Integration mapper, as the illustrative payload shapes below suggest.
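To make the shape change concrete, here is an illustrative before/after; all field and element names are hypothetical and will differ for your orchestration. The raw orchestration response wraps the grid rows in several layers:

{
  "fs_DATABROWSE_V4211A": {
    "data": {
      "gridData": {
        "rowset": [
          { "Order_Number": 12345, "Customer_Number": 4242, "Customer_Name": "ACME Corp" }
        ]
      }
    }
  }
}

while the simplified integration response modeled on the trigger is just:

{
  "orders": [
    { "orderNumber": "12345", "customerNumber": "4242", "customerName": "ACME Corp" }
  ]
}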
To perform that simplification, drop the desired nodes from the JD Edwards grid data rowset element onto the Oracle Integration response sub-elements as desired. This automatically includes a for-each node to generate the array of orders based on the array of the JD Edwards grid rowset element, transforming a complex JD Edwards business application REST response into a simple JSON structure without writing a single line of code.

20. Click the "Validate" button to validate the mapping, then click "Close", then "Save".
21. Now let's add a tracking field to the integration: click the hamburger icon at the top right of the integration canvas and click the Tracking menu option. Drop the field "OrderStatus" as the primary tracking field for this integration. Click "Save" and close the "Business Identifier For Tracking" dialog.
22. Save and close the integration, then activate it.
23. Test the integration. In my case, here is the response from the integration; note the response has been transformed into a simplified JSON structure containing an array of JD Edwards orders, without writing a single line of code.

Summary

JD Edwards Orchestrator opens up all JD Edwards business applications through a REST interface, including custom business applications; this greatly simplifies how JD Edwards applications can be integrated with Cloud SaaS or any third-party applications. The ability to invoke JD Edwards orchestrations from Oracle Integration enables OIC customers to transact, interact and integrate with JD Edwards business applications.

Benefits of calling JD Edwards orchestrations from OIC:
- Easily integrate JD Edwards applications with Oracle Cloud applications like ERP Cloud, HCM Cloud, Engagement Cloud, etc.
- Simplifies the creation of business workflows spanning JD Edwards and Oracle Cloud applications using process applications in OIC.
- Transforms responses from complex JD Edwards business application JSON into simpler JSON structures.

What's Next

I plan to write more blogs on leveraging JD Edwards orchestrations with OIC in due course, and I need your feedback on which one is most critical for you. I would appreciate it if you could drop your vote in the comments here on the blog, or tweet me directly @prakash_masand, on which of the below you would like to see first:
- Leveraging JD Edwards orchestrations for outbound integration with cloud applications, i.e. integrating with a cloud application on the occurrence of a business application event in JD Edwards, either through a table trigger event or through an interactive application update in JD Edwards.
- Leveraging a business process application built in Oracle Integration to meet workflow requirements in JD Edwards. As an example, triggering a sales order change approval process (workflow) designed in OIC interactively from JD Edwards business applications.

Integration Patterns - Publish/Subscribe (Part2)

In my previous blog post, I explained the publish/subscribe integration pattern and how easy it is to make use of its power with Oracle Integration Cloud – all without the need to set up a messaging infrastructure. The previous post covered the required steps to create the connection, the integration and the trigger (using Postman) to publish a message. This second part explains step-by-step how to consume those messages.

1. Create an Integration

We will select the type Subscribe To OIC. Provide a name and a description, and optionally associate this integration with a package. Then you need to select a publisher; these are available as soon as you configure a "Publish to OIC" integration. In my instance I have two active publishers.

Now we need to decide who the consumer of the message is. For this exercise I will simply write the message to a local file on my machine. To do that I need a File Adapter and a Connectivity Agent*. Setting up the File Adapter and the required Connectivity Agent is out of scope for this article – but you can find the required information for the File Adapter here and how to set up the agent here. In a real scenario we would use an application or technology adapter to propagate the message to the end application/system.

*The Oracle On-Premises Agent, i.e. the Connectivity Agent, is required for Oracle Integration Cloud to communicate with on-premises applications.

From the palette on the right side, drag the File Adapter connection onto the canvas. Once you do that, you get the wizard below:
- "What do you want to call your endpoint?": here we can give a name for the file operation.
- "Do you want to specify the structure for the contents file?" – Yes.
- "Which one of the following choices would be used to describe the structure of the file content?" – Sample JSON document.

Now we need to specify where to write the file and the pattern for its name:
- Specify an Output Directory: in this example I use a directory on my local machine.
- File Name Pattern: the file name should be concatenated with %SEQ%, an incremental variable used to avoid files having the same name. Hovering the mouse over the question mark provides more information on this.

The last step is the definition of the file structure. Since we selected a JSON format, I uploaded a very simple sample, as seen below.

This is the end flow with both source and target endpoints configured. We just need to do some mapping now! Clicking the Create Map icon brings up the screen below, where we can simply drag the attribute message from source onto target.

2. Activate the Integration

We are now ready to activate the integration. You can choose to enable tracing and payload for debugging. You can create more subscribers – I cloned the existing one and named it SubscribeMessage_App2, so that we have two consumers for the published message.

3. Run the Integration

Now we can use Postman to trigger the publish message – exactly the same as step 4 from the previous post (a curl equivalent is sketched at the end of this post).

4. Monitor the Results

When we check the Tracking option under Monitoring, we can see the PublishMessage instance and two SubscribeMessage instances – as expected. The final step is to verify that two files were created on my local machine.

Simple, yet powerful. For more information on Oracle Integration Cloud please check https://www.oracle.com/middleware/application-integration/products.html
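If you prefer the command line over Postman, the trigger call can be sketched with curl; the flow URL below is a placeholder for the endpoint shown on the activated publisher's metadata page:

# POST one message to the publisher's /message endpoint (illustrative URL).
curl -u "$OIC_USER:$OIC_PASSWORD" -X POST \
  -H 'Content-Type: application/json' \
  -d '{"message":"Hello, subscribers!"}' \
  "https://<oic-host>/ic/api/integration/v1/flows/rest/PUBLISH_MESSAGE/1.0/message"

One call, and every active subscriber – both SubscribeMessage instances in this example – receives the message.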

How to send email with attachments in OIC?

Have you ever encountered a scenario where the requirement was to send attachments along with an email notification in OIC, and you could not? Well, now it is possible. The new feature makes it really easy to configure the notification activity to add attachments to the email.

Prerequisite

Enable the feature flag oic.ics.console.notification-attachment. Click here to learn how to enable feature flags. The minimum Oracle Integration version required for the feature is 191020.0200.32001.

Note: the notification attachment functionality is currently supported only in OIC Gen 1.

Step By Step Guide to Send a Notification with an Attachment

There are multiple ways in OIC to work with files. Some of the options are: (i) configure a REST adapter to accept attachments as part of the request, (ii) use a Stage File activity to create a new file, or (iii) use an FTP adapter to download a file to OIC from a remote location for further processing. Any file reference(s) created by upstream operations can be used to configure attachments in the notification activity. Let us learn how to configure the notification action to send email with attachments in simple steps; a sample trigger call is sketched at the end of this post.

1. For this blog, we will clone the sample integration 'Hello World' that is available with OIC. Navigate to the integration landing page, clone the 'Hello World' integration and name it 'Hello World with Attachment'.
2. Navigate to the canvas page for the newly created 'Hello World with Attachment' integration.
3. Edit the configuration of the REST trigger: change the method type to POST and the media type to multipart/form-data, and configure the request payload parameters to accept attachments.
4. Add an FTP connection (DownloadToOIC in the image below) and select the 'download' operation to download the file to OIC. Here, I have configured the FTP connection to download a 'Sample-FTP.txt' file which is already created and present in the remote location.
5. Now, add a Stage File action to create a new file. Here, I have configured the stage file to write the request parameters – name, email and flow id – separated by commas, and named it 'StageWriteSample.txt'. Refer to the blog to learn more about how to use the Stage activity. This gives us multiple files to configure as attachments in the notification activity later. The updated flow now looks as shown below.
6. Edit the notification activity (named sendEmail in the sample integration); you should see a new "attachments" section next to the body section. Clicking the add button (plus sign) in the attachments section takes you to a new page to choose the attachment. We have three file references (highlighted in yellow) to choose from: the attachment from the REST connection, the file reference from the stage file write operation, and the file reference from the FTP download operation. We can select file references one at a time to send the files as attachments. The user can edit or delete an attachment once added. After configuration, the notification activity should have 3 attachments.
7. Save and activate the integration, and your integration is now ready to send emails with attachments. A sample email is shown below when the above flow is triggered.

The 'Hello World with Attachment' integration created here is attached for reference and can be used after configuring the FTP connection. The size limit on the email is 1 MB for OIC Gen 1 and 2 MB for OIC Gen 2; both the email body and the attachments count toward the total size. Hope you enjoyed reading the blog and this new feature helps in solving your business use cases!
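For completeness, here is roughly what the trigger call for the activated flow looks like from the command line; the flow path and form field names are illustrative and must match the request parameters configured on the REST trigger in step 3:

# Trigger the flow with multipart/form-data, including a file attachment.
# Field names and flow path are placeholders for your own configuration.
curl -u "$OIC_USER:$OIC_PASSWORD" -X POST \
  -F 'name=Jane' \
  -F 'email=jane@example.com' \
  -F 'attachment=@report.pdf' \
  "https://<oic-host>/ic/api/integration/v1/flows/rest/HELLO_WORLD_ATTACH/1.0/hello"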

Using the next generation Activity Stream

To debug an instance or check its payloads, users previously had to use the Audit Trail and Tracing on the Tracking Details screen. Since the information was scattered across two places, users had to keep switching between them to get a complete picture of the instance. With the new Activity Stream, we are combining the Audit Trail with the tracing information to show a more compact, easily readable Activity Stream.

Note: the next generation Activity Stream applies to orchestration integrations only.

Prerequisite for Activity Stream

The minimum Oracle Integration version required for the feature is 191030.0200.32180.

Step By Step Guide to View the Activity Stream

1. Enable Tracing (with payload, if required) for the integration. This lets you view detailed payload information during the development cycle. For production, it is recommended to keep Tracing turned off.
2. Run the integration to create an instance.
3. Navigate to the Monitoring → Tracking page and locate the instance whose Activity Stream you want to view.
4. Click the primary identifier link of the chosen instance to navigate to the Tracking Details page.
5. Click the View Activity Stream option in the hamburger menu to display the new Activity Stream panel.

NOTE: to view payloads, enable tracing with payload. Follow "How to enable and use tracing in less than 5 min" to enable tracing.

Features in the Activity Stream:
- Click on Message/Payload to view (lazy load) the payload for the action.
- Expandable loop sections show the flow execution inside For-Each/While loops (available only if tracing with payload is enabled).
- A red node indicates an error. Errored instances are displayed in descending execution sequence to show the error at the very top.
- Payloads can be expanded to full screen.
- Date and time are shown according to the user preferences.
- Each Message/Payload section has a Copy to Clipboard option that allows the user to copy the payload to the clipboard.
- Since payload information is derived from log files (which can rotate as newer data gets written), older instances might no longer display payload information in the Activity Stream.
- There are two levels of download: the download button at the top downloads the complete Activity Stream, and the download button inside a Message/Payload section downloads that specific message/payload.

REST API

To view the Activity Stream for a given instance ID:

curl -u <user-name>:<password> -k -v -X GET -H 'Content-Type:application/json' https://<host-name>/ic/api/integration/v1/monitoring/instances/<instance-id>/activityStream

How SOA Suite Adapter Can Help Leverage your On-premises Investments

The SOA Suite Adapter on Oracle Integration (OIC) enables you to take advantage of the latest feature-rich adapters on OIC while leveraging your existing investments in SOA Suite and Service Bus. It provides a rich design-time experience to create a single connection to SOA Suite / Service Bus, browse the services running on them, and create integrations. For runtime, it relies on the standard SOAP and REST adapters, with or without the Connectivity Agent, depending on how the SOA Suite / Service Bus is accessible over the network. The existing SOAP and REST adapters on OIC already provide integration with these services, but with this new adapter you can do away with the hassles of multiple connections or fetching service metadata manually.

The SOA Suite adapter supports connectivity to:
- Oracle SOA Suite and/or Oracle Service Bus hosted on-premises
- Oracle SOA Suite and/or Oracle Service Bus hosted on SOA Cloud Services

Configuring the SOA Suite Adapter to connect to a SOA Suite / Service Bus instance

In the connection palette, select the SOA Suite Adapter. Provide a meaningful name for the connection and click 'Create'. This opens the page where the connection details can be configured.

Configure connectivity: To determine what URL to provide here, examine the topology of the SOA Suite / Service Bus instance, i.e., whether the instance is accessible through:
- the load balancer URL,
- the OTD or cluster front-end URL, or
- just the managed server URL where the SOA Suite / Service Bus instance is running.

Configure security: Provide the SOA Suite / Service Bus user credentials here. If you are integrating with SOA Suite, make sure this user is part of the 'Operators' group and has the 'SOAOperator' role on that server. Likewise, if you are integrating with Service Bus, make sure this user is part of the 'Deployers' group on that server.

Configure Connectivity Agents: If the SOA Suite / Service Bus instance is not directly accessible from Oracle Integration, e.g. if deployed on-premises or behind a firewall, a Connectivity Agent needs to be configured for this connection. This can be done in the 'Configure Agents' section. The Connectivity Agent may not be required when the SOA Suite / Service Bus URL is publicly accessible, e.g. if deployed on SOA Cloud Service. To know more about the Connectivity Agent, check out: New Agent Simplifies Cloud to On-premises Integration, and The Power of High Availability Connectivity Agent.

Test and save the connection: A simple 'Test' of the connection on this page verifies that the SOA Suite / Service Bus is accessible through the connection details provided, that the version of the instance is supported by the adapter, and that the user is authenticated and authorised to access the instance.

How to configure a SOA Suite invoke endpoint in an orchestration flow

(This adapter can be configured only as an invoke activity for the services exposed by SOA Suite / Service Bus.)

Drag and drop a SOA Suite adapter connection into an orchestration flow. Name the endpoint and proceed to configure the invoke operation. If only a SOA Suite or a Service Bus instance is accessible through the URL provided on the connections page, it is shown as a read-only label; if both are accessible, they are shown as options. If the options are shown, select 'SOA' or 'Service Bus' to configure this endpoint to invoke SOA composites or Service Bus projects, respectively.
To configure this endpoint to invoke SOA composites (if both SOA and Service Bus are available as options, select 'SOA'):
- Select a partition to browse the composites in it.
- Select a composite to view the services it exposes.

To configure this endpoint to invoke Service Bus projects (if both SOA and Service Bus are available as options, select 'Service Bus'):
- Select a project to view the services it exposes.

Configuring service details: Select a service from the desired SOA composite or Service Bus project to integrate with.
- If the selected service is a SOAP web service, the operation, request/response objects and message exchange patterns are displayed. SOAP services with synchronous request-response or one-way notifications are supported. Asynchronous requests are supported as one-way notifications only; callbacks are currently not supported.
- If the selected service is a RESTful web service, proceed to the next page to complete further configuration of the resource, verb, request and response content types, query parameters, etc. REST services which have schemas defined (i.e., non-native REST services and non-end-to-end-JSON-based REST services) are supported. The following request and response content types are supported: application/xml and application/json.

Proceed to the next page to view the summary and complete the wizard. The newly created endpoint can now be seen in the orchestration flow, and the request and response objects of this invoke are available for mapping in the orchestration.

Runtime invocation from OIC to SOA composites / Service Bus projects

Once the request and response objects are mapped, this flow can be activated like any other flow on OIC. The activated flow is ready to send requests to running SOA composites / Service Bus projects via SOAP or REST invocations. You can use the OIC instance tracking page to monitor the runtime invocation after the flow is activated and invoked.

What this adapter needs on the SOA Suite / Service Bus side

Supported SOA Suite versions:
- Oracle SOA Suite 12.2.1.4 onwards
- Oracle SOA Suite 12.2.1.3 with these patches applied - SOA Suite: Patch 29952023; Service Bus: Patch 29963582

Supported OWSM policies:

For SOAP web services:
- oracle/http_basic_auth_over_ssl_service_policy
- oracle/wss_username_token_over_ssl_service_policy
- oracle/wss_http_token_over_ssl_service_policy
- oracle/wss_username_token_service_policy
- oracle/wss_http_token_service_policy
- no authentication policy configured
Services protected by multiple policies are not supported.

For RESTful web services:
- oracle/http_basic_auth_over_ssl_service_policy
- oracle/wss_http_token_service_policy
- no authentication policy configured
Services protected by multiple policies are not supported.

Delisting of unsupported HCM SOAP APIs

Introduction: Oracle HCM Cloud supports a set of SOAP services, listed at the link. The Oracle HCM Cloud Adapter, which is part of Oracle Integration Cloud, should list only the supported SOAP services and ignore any other HCM SOAP services. Support for displaying only the supported SOAP services was added in OIC release 19.3.3.0.0 and can be enabled via the feature flag oic.cloudadapter.adapters.hcmsoapapis-ignore.

Behavior of new integration flows: Once the feature flag is turned on, end users creating new integration flows will be able to access only the HCM services listed at the link mentioned in the introduction. The adapter wizard will list only the services documented as supported.

Behavior of old integration flows: End users will be able to edit, view and activate old integration flows as before. Old integration flows are not impacted by this change in the adapter. If a new adapter endpoint is added to an old integration flow, only the supported HCM services will be available for consumption.

Note: The end user experience is uniform across all Fusion Application adapters: the Oracle Engagement Cloud Adapter and Oracle ERP Cloud Adapter, along with the Oracle HCM Cloud Adapter. This support does not have any impact on REST resources accessible via the Fusion adapters.



Integration Patterns - Publish/Subscribe (Part1)

Broadcasting, Publish/Subscribe, Distributed Messaging, One-to-many: these are just some of the names referring to the same integration pattern, which is one of the most powerful available for connecting multiple systems. In a nutshell, this pattern is about:
- A source system publishes a message
- Target systems subscribe to receive that message
This enables the propagation of that message to all the target systems that subscribe to it, as illustrated in the picture below. https://docs.oracle.com/cd/E19509-01/820-5892/ref_jms/index.html
This pattern is not new; in fact, it's been around for decades. It powered distributed systems with its inherent loose coupling and independence. Publishers and subscribers are loosely coupled, which allows the systems to run independently of each other. In the traditional client-server architecture, a client cannot send a message to a server that is offline. In the Pub/Sub model, message delivery is not conditioned by server availability.
Topics vs. Queues
The difference between a Topic and a Queue is that all subscribers to a Topic receive the same message when the message is published, while only one subscriber to a Queue receives a message when the message is sent. This pattern is about Topics.
The hard way
From a vendor-neutral point of view, if an organization needs a messaging infrastructure, it will typically need to set up hardware, install the OS and the messaging software, and take care of configuration, creating and managing users, groups, roles, queues and topics… and this is only for the development environment. Then we have test and production, which may require an HA cluster… you can see the direction this is going; it adds complexity.
The easy way
Fortunately, OIC abstracts that complexity from the user. It's Oracle managed: the Topics are created and managed by Oracle. From an integration developer's point of view, the only requirement is to make use of the "ICS Messaging Service Adapter", as we will explain in a bit. This brings the benefits of messaging to those that did not require the full extent of capabilities that a messaging infrastructure provides and were typically put off by its complexity.
Use cases
There are plenty of use cases that would benefit from this solution pattern:
- A user changes address data in the HCM application
- A new contact/account is created in the Sales or Marketing applications
- ERP purchase orders need to be shared downstream
Oracle's OIC adapters support many of the SaaS business events. How to enable that has been described in another blog entry: https://blogs.oracle.com/imc/subscribe-to-business-events-in-fusion-based-saas-applications-from-oracle-integration-cloud-oic-part-2
Implement in 4 steps
For this use case, we will just use a REST request as the publisher.
1. Create a REST trigger. Go to Connections and create a new one. Select the REST Adapter. Provide a name and a description. The role should be Trigger. Press Create. Then you can save and close.
2. Create an integration. We will select the type Publish To OIC, which provides the required structure. Provide a name and a description, and optionally associate this integration with a package. Now we can drag the connection we created before from the palette on the right side into the Trigger area on the canvas (left side). The REST wizard pops up. We can add a name and a description. The endpoint URI is /message – that's the only parameter we need. We want to send a message; therefore the action is POST.
Select the checkbox for "Configure a request payload for this request". Leave everything else as default. The payload format we want is JSON, and we can insert a sample inline, as seen in the picture. That's all for the REST adapter configuration! You should also add a tracking identifier; the only element available is the message element.
3. Activate the integration. We are now ready to activate the integration. You can choose to enable tracing and payload logging for debugging. (Your activation window might look a bit different, as this one has the API Platform CS integrated for API publishing.)
4. Test the integration. After activation, you see a green banner at the top of your screen with the endpoint metadata. Here you can find the endpoint URL to test the REST trigger we just created. Using Postman (or any other equivalent product) we can send a REST request containing the message we wish to broadcast. And when we check the tracking instances under Monitoring… voilà, we see the instance of the integration we just created. And here we have the confirmation that the payload was sent to the Topic!
In Part 2 of this blog series we cover the subscribers!
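If you prefer the command line to Postman, the same test can be performed with curl. This is a minimal sketch: the host and integration path are placeholders (copy the actual endpoint URL from the green activation banner), and the payload assumes the inline JSON sample defines a single message element, as configured above.

# Hypothetical endpoint; copy the real URL from the activation banner.
curl -s -u "$OIC_USER:$OIC_PASS" \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, subscribers!"}' \
  "https://<oic-host>/ic/api/integration/v1/flows/rest/PUBLISH_MESSAGE/1.0/message"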


A Simple Guide to Asynchronous calls using Netsuite Adapter

The Oracle Netsuite Adapter provides a no-code approach for building integrations with Netsuite. The current Netsuite adapter in Oracle Integration Cloud already allowed the user to make synchronous CRUD calls to Netsuite and also provided extensive search capabilities. With a new update, we are now adding support to perform all the above operations as asynchronous calls against Netsuite. As we will see in this post, the user can configure a simple toggle during Netsuite invoke configuration and let the adapter internally handle all the intricacies of asynchronous processing – submitting the asynchronous job, checking its status and getting the results – without the user needing to configure separate invokes for each.
How to configure the Netsuite Adapter to make asynchronous calls
When configuring an invoke activity using a Netsuite connection on the orchestration canvas, the user can now toggle between Synchronous and Asynchronous calls on the Netsuite Operation Selection page by selecting the appropriate Processing Mode, as shown in the image below. This selection is valid for Basic (CRUD) and Search operation types. Note that Miscellaneous operations don't support asynchronous invocations. The user can also click on the Learn More About Processing Mode link next to the Processing Mode option in the configuration wizard to get inline help on the feature.
How to model an orchestration flow with a Netsuite invoke configured for asynchronous Basic (CRUD) operations
As mentioned above, the user can configure a particular Netsuite invoke to use the Asynchronous processing mode by selecting the appropriate radio button during the endpoint configuration. Once configured, the Netsuite endpoint thus created will automatically either submit a new asynchronous job or check the job status and get the results, based on certain variables being mapped properly. Below is a typical flow modeled to utilize a Netsuite invoke configured to make an asynchronous basic operation call. Let's look at the high-level steps involved in properly modeling an orchestration flow as shown above:
1. The integration flow can be either an App Driven Orchestration or a Scheduled Orchestration flow.
2. At the beginning of the flow, before invoking the Netsuite asynchronous operation, the user must create and initialize the following two variables: jobId and jobStatus. Care should be taken while initializing the values of these variables to ensure the condition defined in the next step is satisfied the first time; e.g., use -1 as the initial value. (This step is represented by initializeVariables in the flow diagram above.)
3. Create a While loop activity and provide the condition: $jobStatus != "finished" and $jobStatus != "finishedWithErrors" and $jobStatus != "failed"
4. In the request Map activity to the Netsuite invoke configured to make asynchronous calls, apart from the mappings required for the business use case, the user must map the jobId variable created in step 2 to the jobId defined in the Netsuite request schema under the AsyncJobParameters element, as shown in the image below.
5. After the Netsuite invoke activity, the user should use an Assign activity to assign the jobId and jobStatus variables created in step 2 with values from the response of the Netsuite invoke activity. (This is represented by ReAssignVariables in the flow diagram shown at the beginning of this section.)
6. The user can now configure a Switch activity with either the following condition or a variation of it based on the business needs. If we follow the condition above, this Switch activity results in two routes being created in the flow:
6.a. jobStatus is finished, finishedWithErrors or failed: the user can now get the results from the Netsuite invoke activity's response and process them based on the business needs. E.g., for an add customer asynchronous job, if the job finished successfully without errors, the user can get the internalIds of the created Customer records.
6.b. jobStatus is none of the above values: this means that the asynchronous job is still running. Hence, before we can get the job results, one can either perform certain other operations or wait, and then loop back to the While loop created in step 3.
Thus, as can be seen from this example, the Netsuite adapter will automatically either submit a new asynchronous job or check the job status and get the results, based on the jobId being passed in the request.
How to model an orchestration flow with a Netsuite invoke configured for asynchronous Search operations
This is quite similar to how we model asynchronous Basic (CRUD) operations; the only differences arise from the fact that the result returned is paginated. Below is a typical flow modeled to utilize a Netsuite invoke configured to make an asynchronous search operation call. Let's look at the high-level steps involved in properly modeling an orchestration flow as shown above:
1. The integration flow can be either an App Driven Orchestration or a Scheduled Orchestration flow.
2. At the beginning of the flow, before invoking the Netsuite asynchronous operation, the user must create and initialize the following three variables: jobId, pageIndex and totalPages. Care should be taken while initializing the values of these variables to ensure the condition defined in the next step is satisfied the first time; e.g., use -1 as the initial value. (This step is represented by InitializeVariables in the flow diagram above.)
3. Create a While loop activity and provide the condition: integer($pageIndex) <= integer($totalPages)
4. In the request Map activity to the Netsuite invoke configured to make asynchronous calls, apart from the mappings required for the business use case, the user must map the jobId and pageIndex variables created in step 2 to the jobId and pageIndex defined in the Netsuite request schema under the AsyncJobParameters element, as shown in the image below.
5. After the Netsuite invoke activity, the user should use an Assign activity to assign the jobId variable created in step 2 with values from the response of the Netsuite invoke activity. (This is represented by ReAssignVariables in the flow diagram shown at the beginning of this section.)
6. The user should now configure a Switch activity with a condition to check whether the status of the submitted job is finished. This Switch activity results in two routes being created in the flow:
6.a. status is finished: in this route, the user should create an Assign activity (represented by IncrementPageIndex in the flow diagram shown at the beginning of this section) which increments the pageIndex variable and assigns the totalPages variable with the actual value from the results of the asynchronous job performed in the Netsuite invoke. The two images below show the two assignments needed in this Assign activity: one for the pageIndex variable and one for the totalPages variable.
The user can now get the results from the Netsuite invoke activity's response and process them based on the business needs. E.g., for a search customer asynchronous job, if the job finished successfully without errors, the user can get the Customer records that were searched for.
6.b. status is anything other than finished: this is route 2 of the Switch activity introduced in step 6 above. This means that the asynchronous job is still running, finishedWithErrors or failed. The user should introduce another Switch activity in this route to deal with jobs which are finishedWithErrors or failed. The otherwise condition of this new Switch activity means that the job is still running, in which case control should loop back to the While loop created in step 3.
Thus, as can be seen from this example, the Netsuite adapter now allows the user to make use of its extensive Search capabilities in an asynchronous mode, with full support for retrieving the paginated result set.
How to request this feature
This feature is currently in controlled availability (Feature Flag: oic.cloudadapter.adapter.netsuite.AsyncSupport) and available on request. To learn more about features and "How to Request a Feature Flag", please refer to this blog post.
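To make the submit-then-poll control flow concrete, here is a shell-script analogy of the Basic (CRUD) loop from the steps above. It is only an illustration of the logic that the orchestration implements with Assign, While and Switch activities; invoke_netsuite_async is a hypothetical helper standing in for the Netsuite invoke activity and its request/response mappings.

# Step 2: initialize so the While condition holds on the first pass.
jobId="-1"
jobStatus="-1"
# Step 3: loop until the job reaches a terminal status.
while [ "$jobStatus" != "finished" ] && \
      [ "$jobStatus" != "finishedWithErrors" ] && \
      [ "$jobStatus" != "failed" ]; do
  # Steps 4-5: with jobId=-1 the invoke submits a new job; on later
  # iterations it checks the status and results of the returned jobId.
  response=$(invoke_netsuite_async "$jobId")          # hypothetical helper
  jobId=$(echo "$response" | jq -r '.jobId')          # ReAssignVariables
  jobStatus=$(echo "$response" | jq -r '.jobStatus')
done
# Step 6: process the results here; jobStatus is now finished,
# finishedWithErrors or failed.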


Introducing the Oracle OpenWorld Session "Compliance and Risk Management Realized with Analytics and Integration Services" - CAS2657

I am looking forward to seeing you all at Oracle Open World – we are less than a week out, and with so many great sessions I wanted to highlight CAS2657 - Compliance and Risk Management Realized with Analytics and Integration Services. I am excited to be presenting with two knowledgeable people: Conny Bjorling, Skanska Group, and Lonneke Dikmans, eProseed. Please join me, Simone Geib, Director of Integration at Oracle, Conny and Lonneke as we describe Skanska's common integration platform and the role Oracle Integration (OIC) plays as the central component of the platform. Conny and Lonneke will walk you through Skanska's "Sanctions" project, which integrates Oracle Fusion, Microsoft Dynamics, Salesforce and bespoke systems with Oracle Analytics Cloud through Oracle Integration, to ensure that none of the customers and suppliers that Skanska works with are on a sanctions list. We will also discuss a future part of the project, in which Skanska will introduce Integration Insight, a capability of Oracle Integration, to provide real-time visibility into the business process through business milestones and metrics that are easily mapped to the implementation and aggregated and visualized in business dashboards.
Conny Bjorling is Head of Enterprise Architecture at Skanska Group. He has 20+ years of experience in senior IT and finance roles in retail (FMCG), banking & finance, and construction & project development. He focuses on cloud adoption and agile architecture in the cloud, and is passionate about the business value of data.
Lonneke Dikmans is partner and CTO at eProseed. She has been working as a developer and architect with Oracle tools since 2001 and has hands-on experience with Oracle Fusion Middleware and Oracle PaaS products like Oracle Kubernetes Engine, Oracle Blockchain Cloud Service, Oracle Integration Cloud Service, Mobile Cloud Service and API Platform Cloud Service, and languages like Java, Python and JavaScript (JET, Node.js, etc.). Lonneke is a Groundbreaker Ambassador and Oracle ACE Director in Oracle Fusion Middleware. She publishes frequently online and shares her knowledge at conferences and other community events.


Aggregator Pattern in Oracle Integration Cloud via Parking Lot

Background
The parking lot pattern is a mature design for storing data in an intermediate stage before processing it from the intermediate stage to the end system at the required rate. The parking lot pattern can be implemented with a variety of storage technologies, but a database table is strongly recommended for simplicity. In this blog, we will use the parking lot pattern in Oracle Integration Cloud (OIC) to explore a solution for the Aggregator pattern.
Problem Definition
In OIC, an aggregator is often needed to collect and store individual messages until a complete set of correlated messages has been received; the aggregator then acts as a filter and publishes a single message derived from the complete set of correlated messages. The parking lot pattern provides a solution for this scenario. Individual messages are parked in the intermediate store, and the aggregator in this OIC parking lot pattern serves as the special filter that receives a stream of messages and identifies whether the messages are correlated. Once a complete set of messages has been received and marked as complete, an aggregated message collected from each correlated message is published as a single message to the output channel for further processing.
Design Solution
- Process the input data/messages in the order they come in
- Each message is parked in the storage for x minutes (the parking time) so the system has a chance to aggregate correlated messages
- The maximum number of parallel processes can be configured to throttle the outgoing calls
- A set of integrations, connections and database scripts use current out-of-the-box OIC features
- The solution is generic and can be used with various typed business integrations without modification to the provided integrations
- Error handling covers both system/network errors and bad requests
Database Schema
The key piece of the parking lot pattern is the database schema. The Message table is explained here:
- ID (NUMBER): the unique ID/key for the row in the table.
- STATUS (VARCHAR): used for state management and logical delete with the database adapter. This column holds one of three values: N - New (not processed); P - Processing (in-flight interaction with the slower system); C - Complete (the slower system responded to the interaction). The database adapter polls for 'N'ew rows and marks a row as 'P'rocessing when it hands it over to an OIC integration.
- PAYLOAD (CLOB): the message that would normally be associated with a component, stored as an XML clob.
Implementation Details
Integration details (sample integrations in the par file, with their connections):
- Business Front-end Integrations: receive the typed business payload and call the producer with an opaque interface. Samples: EmployeeServiceFront(1.0), OrderServiceFront(1.0). Invoke: EmployeeService, OrderService.
- Producer: receives a new record, creates a new row in the group table if it does not exist, and marks the group status as 'N'. Sample: RSQProducer(1.0). Trigger and Invoke: RSQProducer; Invoke: RSQ DB.
- Group Consumer: scheduled to run every x minutes; invokes a message consumer. Sample: RSQGroupConsumer(1.0). Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
- Message Consumer: marks the message status as 'P', invokes the Dispatcher, and deletes the message. Sample: RSQMessageConsumer(1.0). Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
Screen Shot of Actual Integration Steps
You can deploy this par file.
It has the following connections that need configuring and activating.
1. Import the par file into Packages in Oracle Integration.
2. Invoke and Trigger connections: initial unconnected status.
3. Configure and activate the database; the provided sample database is called RSQ DB. An Oracle Autonomous Transaction Processing (ATP) database is used in this scenario (ATP database information can be found here: deploying an ATP instance in Oracle Cloud).
4. Trigger and Invoke the Message Consumer; in the sample it is called RSQMessageConsumer, and it distributes the load of calls to the message consumer. It requires the connection URL and the corresponding admin authentication. It processes the active messages of a given group: it receives the group id/type from the group consumer and loads the active messages of the group ordered by sequence-id. The messages have to be at least # (parking time) old. It loops through the active messages: marks the message status as 'P', invokes the Dispatcher using its opaque interface, and deletes the message. It then calls a stored procedure to: mark the group status 'C' if there are no active messages; mark the group status 'N' if there are new active messages.
5. Trigger the manager interface; in the sample it is called RQSManager and is used to invoke the parking lot pattern interface in Oracle Integration. It currently supports three operations: get configs, update configs, and recover group.
6. The Producer integration, connected with the database, is used to invoke the producer interface; in the sample it is called RSQProducer. It is the entry point of the parking lot pattern: it receives the resequencing message, creates a new row in the group table if it's not already there, marks the status of the group as 'N', and creates a message in the message table.
7. The Dispatcher is a request/response integration which reconstructs the original payload and sends it to the real backend integration. It receives the message, converts the opaque payload to the original typed business payload, and uses the group id to find the business endpoint and invoke it synchronously.
8. Business Integrations are the real integrations that customers use to process the business messages. They have their own typed interfaces. I used two test servers to demonstrate some post data.
9. Error Handling: recovering from a system/network error. If the problem is caused by a system error such as a networking issue, after fixing the problem you can recover by resubmitting the failed message consumer instance.
10. Connections and Integrations sample after Triggers and Invokes.
Reference:
http://www.ateam-oracle.com/the-parking-lot-pattern
https://blogs.oracle.com/soacommunity/throttling-in-soa-suite-via-parking-lot-pattern-by-greg-mally
https://blogs.oracle.com/integration/ordering-delivery-with-oracle-integration
https://www.enterpriseintegrationpatterns.com/patterns/messaging/Aggregator.html
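For readers who want to experiment outside the provided par file, the Message table described in the Database Schema section above could be created along these lines. This is a minimal sketch: the table and column definitions are inferred from the column descriptions in this post, and the actual sample may use different names, sizes or additional columns (for example, the group and sequence ids used by the consumers).

# Assumes a SQL*Plus client and an ATP connection alias/wallet are already set up.
sqlplus -s "$DB_USER/$DB_PASS@$DB_ALIAS" <<'SQL'
CREATE TABLE MESSAGE (
  ID      NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  STATUS  VARCHAR2(1) DEFAULT 'N' NOT NULL, -- N=New, P=Processing, C=Complete
  PAYLOAD CLOB                              -- message stored as an XML clob
);
SQL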



Downstream Throttling in Oracle Integration Cloud via Parking Lot Pattern

Background
The parking lot pattern is a mature design for storing data in an intermediate stage before processing it from the intermediate stage to the end system at the required rate. The parking lot pattern can be implemented with a variety of storage technologies, but a database table is strongly recommended for simplicity. In this blog, we will use the parking lot pattern in Oracle Integration Cloud (OIC) to explore a solution for downstream throttling.
Problem Definition
In OIC, downstream throttling is often needed because an influx of data can overwhelm slower downstream systems. Some throttling can be accomplished with the tuning knobs within OIC and WebLogic Server, but the built-in tuning cannot always be adjusted enough to stop flooding the slower system. The parking lot pattern provides a solution for this scenario.
Design Solution
- Process the input data/messages in the order they come in
- Each message is parked in the storage for x minutes (the parking time) so the system has a chance to throttle the number of messages processed concurrently
- The maximum number of parallel processes can be configured to throttle the outgoing calls
- A set of integrations, connections and database scripts use current out-of-the-box OIC features
- The solution is generic and can be used with various typed business integrations without modification to the provided integrations
- Error handling covers both system/network errors and bad requests
Database Schema
The key piece of the parking lot pattern is the database schema. The Message table is explained here:
- ID (NUMBER): the unique ID/key for the row in the table.
- STATUS (VARCHAR): used for state management and logical delete with the database adapter. This column holds one of three values: N - New (not processed); P - Processing (in-flight interaction with the slower system); C - Complete (the slower system responded to the interaction). The database adapter polls for 'N'ew rows and marks a row as 'P'rocessing when it hands it over to an OIC integration.
- PAYLOAD (CLOB): the message that would normally be associated with a component, stored as an XML clob.
Implementation Details
Integration details (sample integrations in the par file, with their connections):
- Business Front-end Integrations: receive the typed business payload and call the producer with an opaque interface. Samples: EmployeeServiceFront(1.0), OrderServiceFront(1.0). Invoke: EmployeeService, OrderService.
- Producer: receives a new record, creates a new row in the group table if it does not exist, and marks the group status as 'N'. Sample: RSQProducer(1.0). Trigger and Invoke: RSQProducer; Invoke: RSQ DB.
- Group Consumer: scheduled to run every x minutes; invokes a message consumer. Sample: RSQGroupConsumer(1.0). Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
- Message Consumer: marks the message status as 'P', invokes the Dispatcher, and deletes the message. Sample: RSQMessageConsumer(1.0). Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
- Dispatcher: receives the message, converts the opaque payload to the original typed business payload, finds the business endpoint and invokes it. Sample: RSQDispatcher(1.0). Trigger and Invoke: RSQDispatcher, OrderService, TestService.
Screen Shot of Actual Integration Steps
You can deploy this par file. It has the following connections that need configuring and activating.
1. Import the par file into Packages in Oracle Integration.
2. Invoke and Trigger connections: initial unconnected status.
3. Configure and activate the database; the provided sample database is called RSQ DB. An Oracle Autonomous Transaction Processing (ATP) database is used in this scenario (ATP database information can be found here: deploying an ATP instance in Oracle Cloud).
4. Trigger and Invoke the Message Consumer; in the sample it is called RSQMessageConsumer, and it distributes the load of calls to the message consumer. It requires the connection URL and the corresponding admin authentication. It processes the active messages of a given group: it receives the group id/type from the group consumer and loads the active messages of the group ordered by sequence-id. The messages have to be at least # (parking time) old. It loops through the active messages: marks the message status as 'P', invokes the Dispatcher using its opaque interface, and deletes the message. It then calls a stored procedure to: mark the group status 'C' if there are no active messages; mark the group status 'N' if there are new active messages.
5. Trigger the manager interface; in the sample it is called RQSManager and is used to invoke the parking lot pattern interface in Oracle Integration. It currently supports three operations: get configs, update configs, and recover group.
6. The Producer integration, connected with the database, is used to invoke the producer interface; in the sample it is called RSQProducer. It is the entry point of the parking lot pattern: it receives the resequencing message, creates a new row in the group table if it's not already there, marks the status of the group as 'N', and creates a message in the message table.
7. The Dispatcher is a request/response integration which reconstructs the original payload and sends it to the real backend integration. It receives the message, converts the opaque payload to the original typed business payload, and uses the group id to find the business endpoint and invoke it synchronously.
8. Business Integrations are the real integrations that customers use to process the business messages. They have their own typed interfaces. I used two test servers to demonstrate some post data.
9. Error Handling: recovering from a system/network error. If the problem is caused by a system error such as a networking issue, after fixing the problem you can recover by resubmitting the failed message consumer instance.
10. Connections and Integrations sample after Triggers and Invokes.
The blog was inspired by the A-Team Chronicles writeup.
Reference:
http://www.ateam-oracle.com/the-parking-lot-pattern
https://blogs.oracle.com/soacommunity/throttling-in-soa-suite-via-parking-lot-pattern-by-greg-mally
https://blogs.oracle.com/integration/ordering-delivery-with-oracle-integration
https://www.enterpriseintegrationpatterns.com/patterns/messaging/Aggregator.html


One week to the start of Oracle Open World 2019 – Are You Ready?

One week from today, we will kick off Oracle Open World 2019 in San Francisco. We hope the information below will be helpful while you prepare your schedule for the week.
The Application Integration Program Guide lists the Oracle Integration sessions during #OOW19. A small selection is also listed below:
PRO5871 - Oracle Integration Strategy & Roadmap - Launch Your Digital Transformation
Monday, September 16, 12:15 PM - 01:00 PM | Moscone South - Room 156C
Join this session to hear about exciting innovations coming to Oracle Integration. See a live demonstration of next-generation machine learning (ML) integration enhancements and robotic process automation, all seamlessly connected into a hybrid SaaS and on-premises integration. Learn how customer Vertiv successfully delivered its digital transformation with Oracle Integration Cloud by connecting Microsoft Dynamics CRM, Oracle E-Business Suite, Oracle PRM, Oracle ERP Cloud, and Oracle HCM Cloud for real-time collaboration and faster business agility. Jumpstart your future today.
PRO5873 - Oracle Integration Roadmap
Wednesday, September 18, 12:30 PM - 01:15 PM | Moscone South - Room 156B
In this session, explore the product roadmap for Oracle Integration, including new and exciting initiatives such as AI/ML-based capabilities for recommending best next actions for integration and process, intelligent recommendations on mappings, a new UI with recipes for integration and process, anomaly detection, and enhanced connectivity to Oracle and third-party apps. This session also covers new process automation capabilities such as a robotic process automation adapter (RPA-UiPath) and ML-driven case management process execution.
CAS2657 - Compliance and Risk Management Realized with Analytics and Integration Services
Monday, September 16, 10:00 AM - 10:45 AM | Moscone South - Room 155B
In modern global companies it is important to make sure that customers, suppliers, and (prospective) employees are not on a sanctions list. Of course, this can be checked manually at onboarding, but what if a relation is added to a list after onboarding, while you are in business with them? Skanska and eProseed built a solution that fetches data from source systems and matches it with data from the Dow Jones Watchlist. This is done daily, minimizing the risk and increasing efficiency through automation. In this session, see the solution and learn about the benefits of this cloud solution for Skanska, as well as the lessons learned.
PRO5805 - Oracle Integration: Monitoring and Business Analytics on Autopilot
Wednesday, September 18, 04:45 PM - 05:30 PM | Moscone South - Room 156B
Ever wondered how the crucial connections and processes in your environment are behaving? In this session, learn to monitor and track your integrations and processes in real time. It is not enough to know things are running smoothly; business analytics are driving the evolution and growth of every industry as never before. Today's competitive markets demand that stakeholders be able to understand, monitor, and react to changing market conditions in real time, and business owners require more control and expect more visibility over their processes. This session showcases how operational monitoring empowers IT while Integration Insight empowers executives with real-time insight into their business processes, allowing IT and executives to take immediate action.
Hands on Labs
If you want to do more than listen to presentations and would like to get your hands on Oracle Integration, sign up for one of our two Hands on Labs:
HOL6041 and HOL6043 - Connect Applications, Automate Processes, Gain Insight with Oracle Integration
Monday, September 16, 08:30 AM - 09:30 AM | Moscone West - Room 3022A and Wednesday, September 18, 09:00 AM - 10:00 AM | Moscone West - Room 3022A
These two hands-on labs provide a unique opportunity to experience Oracle Integration. Learn how to enhance process and connectivity capabilities of CX, ERP, and HCM applications with Oracle Integration Cloud. Create a forms-driven process to extend existing applications, discover how to synchronize data by integrating with other applications, and learn how to collect business metrics from processes and integrations and collate them into a dashboard that can be presented to business owners and executives. This session explores the power of having a single integrated solution for process, integration, and real-time dashboards delivered by Oracle Integration. Discover how the solution provides business insight by collecting and collating business metrics from your processes and integrations and presenting them to your business owners and executives.
Demos
In between sessions, you should consider a stroll to The Exchange in Moscone South at the Exhibition Level (Hall ABC) and stop by our demo pods to see real-world examples of how Oracle Integration future-proofs your digital journey with pre-built application adapters, simple and complex integration recipes, and process templates. Stop by to discuss how integration is more than connecting applications; it is also about extending applications in a minimally invasive fashion. Visit us to see how to gain visual business insight from your integration flows. Oracle Integration Product Management and Engineering will be there to answer your questions and brainstorm about your integration use cases.
We have two demo pods at The Exchange:
INT-008 - Connect Applications, Automate Processes and RPA Digital Workforce, Gain Insight
INT-002 - Leverage Oracle Integration to Bring Operations and Business Insight Together
And an additional demo pod at Moscone West – Level 2 (Applications):
ADM-003 - Connect Cloud and on-prem Applications with Adapters, B2B, MFT, SOA and hybrid solutions
Oracle Integration and Digital Assistant Customer Summit
Last, but not least, there is still time to register for the Oracle Integration and Digital Assistant Customer Summit on 19-September-19 at the W Hotel San Francisco, followed by our Customer Appreciation Dinner. For more information, visit You're Invited: Oracle Integration and Digital Assistant Customer Summit at Oracle OpenWorld 2019.
We are all looking forward to seeing you at Oracle Open World 2019 in San Francisco!



Bulk Recovery of Fault Instances

One of the most common requirements of enterprise integration is error management. It is critical for customers to manage recoverable errors in a seamless and automated fashion.
What are recoverable fault errors?
All faulted instances in asynchronous flows in Oracle Integration Cloud Service are recoverable and can be resubmitted. Synchronous flows cannot be resubmitted. You can resubmit errors in the following ways:
- Single failed message resubmissions
- Bulk failed message resubmissions
Today an operator can manually resubmit failed messages individually from the integration console monitoring dashboard. In this blog we are going to focus on how to create an integration flow that can be used to automatically resubmit faulted instances in bulk.
Here are the high-level steps to create an integration flow that implements automated bulk recovery of errors. Note that we also provide a sample that is available for download.
STEP 1: Create a new Scheduled Orchestration flow.
STEP 2: Add schedule parameters. It is always good practice to parametrize variables so you can configure the flow based on business needs by overriding them. Here are the schedule parameters configured in this bulk resubmit fault instances integration sample:
- strErrorQueryFilter: the fault query filter parameter. This defines which error instances are to be selected for recovery. Valid values: timewindow: 1h, 6h, 1d, 2d, 3d, RETENTIONPERIOD (default is 1h); code: integration code; version: integration version; id: error id (instance id); primaryValue: value of the primary tracking variable; secondaryValue: value of the secondary tracking variable. See the API documentation.
- strMaxBatchSize: maximum number of error instances to resubmit per run (default 50). This limits the number of recovery requests to avoid overloading the system.
- strMinBatchSize: minimum number of error instances to resubmit per run (default 2). This defers running the recovery until the given number of errors have accumulated.
- strRetryCount: maximum number of retry attempts for an individual error instance (default 3). This prevents repeatedly resubmitting a failing instance.
- strMaxThershold: threshold number of errors at which to abort recovery and notify the user (default 500). This allows resubmission to be skipped if an excessive number of errors has been detected, indicating that some sort of user intervention may be required.
STEP 3: Update the query filter to include only recoverable errors: concat(concat("{",$strErrorQueryFilter,",recoverable:'true'"),"}")
STEP 4: Query all recoverable error instances in the system matching the query filter: GET /ic/api/integration/v1/monitoring/errors?q=strErrorQueryFilter
STEP 5: Determine the recovery action.
STEP 5a: If the total number of recoverable error instances found is more than the maximum threshold (totalResults > strMaxThershold), send a notification. In this case there may be too many errors, indicating a more serious problem; it is best practice to review manually and, once the issue is fixed, to temporarily override the strMaxThershold value to allow recovery of the failed instances.
STEP 5b: Else, if no recoverable error instances are found (totalResults <= 0), end the flow.
STEP 5c: Else, continue to resubmit up to strMaxBatchSize found errors in a single batch. NOTE: We limit the number of errors resubmitted in a single batch to avoid overloading the system; we suggest a limit of 50 instances.
STEP 6: Query the recovery errors (limited to the batch size): GET /ic/api/integration/v1/monitoring/errors?q=strErrorQueryFilter&limit=strMaxBatchSize&offset=0
STEP 7: Filter the results to avoid too many retries.
STEP 7a: If totalResults found < strMinBatchSize, skip the batch resubmit and stop the flow.
STEP 7b: Else, if totalResults > strMinBatchSize, invoke the REST API to submit the fault error IDs (Bulk Re-submit Error API). Here we can filter out the fault instances that were already retried but failed again, as shown below:
- Drag and drop a for-each over items
- Add the if function from the Mapper on top of items
- Add the <= condition element
- Add the left operand: retryCount from the source
- Add the right operand: strMaxRetryAttempt from the variables
retryCount <= $strMaxRetryAttempt
STEP 8: Resubmit the errors: POST /ic/api/integration/v1/monitoring/errors/resubmit
STEP 9: Check whether `resubmitRequested` is true or false.
STEP 10: Send an email notification with the recovery submit status details, as shown below (optional). Users can model the integration to invoke a process (using the OIC process capability for human interaction and long-running tasks) or take any action based on the resubmit response via a child flow or other third-party integration. This may be to post the resubmit information to some system for future analysis/review. One can utilize the local invoke feature to model the parent-to-child flow hand-off.
STEP 11: Activate the integration.
STEP 12: Schedule the integration to run every X period of time. One can also run it on demand with the Submit Now option.
Email Notification
Here is the email notification one would receive:
Case 1: When the bulk resubmit succeeds, an email is sent as below (sample).
Case 2: When there are too many fault instances, an alert email is sent as below (sample).
OK, by now you have completed development of the integration and scheduled it to run on your Integration Cloud instances.
How to customize your integration to run recovery for a specific integration or connection
Because different integrations or error types may have different recovery requirements, you may want to use different query parameters and/or scheduled intervals. For this you need to clone the above integration and override the schedule parameters to query only the specific fault instances for a given integration or connection type, so you can keep a separate instance running for a specific business use case. Here is how you do it:
STEP 1: Clone the above integration.
STEP 2: Update the schedule parameter strErrorQueryFilter, for example:
timewindow : '3d', code : 'SC2RNSYNC', version : '01.00.0000'
code : 'SC2RNSYNC', version : '01.00.0000', connection : 'EH_RN'
timewindow : '3d', primaryValue : 'TestValue'
You may also want to modify other parameters, or even modify the integration to take alternative actions.
STEP 3: Schedule it to run.
This gives you the ability to configure the bulk resubmit for a given set of integrations or connections.
Sample Integration (IAR) - Download Here
Summary
This blog explained how to automatically resubmit errored instances, allowing control of the rate of recovery and the types of errors to recover, and showed how to customize the recovery integration by cloning it and modifying its parameters. We hope that you find this a useful pattern for your integrations. Thank you!
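As a companion to the flow above, the two monitoring calls can also be exercised directly from the command line. This is a minimal sketch: the host, credentials and filter value are placeholders, and the JSON body of the resubmit call is an assumption; check the Bulk Re-submit Error API documentation referenced in step 7b for the exact request schema.

# Query recoverable errors from the last hour (filter value is an example).
curl -s -u "$OIC_USER:$OIC_PASS" \
  "https://<oic-host>/ic/api/integration/v1/monitoring/errors?q=%7Btimewindow:'1h',recoverable:'true'%7D&limit=50&offset=0"
# Resubmit a batch of error instance ids returned by the query
# (body shape is an assumption; verify against the Bulk Re-submit Error API).
curl -s -u "$OIC_USER:$OIC_PASS" -X POST \
  -H "Content-Type: application/json" \
  -d '{"ids": ["<error-instance-id-1>", "<error-instance-id-2>"]}' \
  "https://<oic-host>/ic/api/integration/v1/monitoring/errors/resubmit"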



CICD Implementation for OIC

This blog shares information on the CICD implementation for OIC and the instructions to set up and run the CICD scripts in a Linux machine environment. In this implementation, bash shell scripts are used to support backing up integration artifacts from the source environment (OIC POD) to the repository (Bitbucket). Shell scripts are also used to retrieve the saved integration artifacts from the repository and deploy the integrations to a target environment (another OIC POD). The following features are currently supported in this implementation:
1) Export integrations and save the artifacts (IARs, connection json files) to the remote repository:
- Allows the user to export either all integrations or only one or more integrations from the source OIC environment.
- Commits/pushes the exported integration artifacts to the repository.
- Provides summary reports.
2) Pull integration artifacts from the repository and either import or deploy the integrations to the target OIC environment:
- Allows the user to select one or more integrations to either import only, or deploy to a target environment. (To deploy an integration means to import the IAR, update the connections and activate the integration.)
Pre-requisites
The following are the required setups on your Linux machine:
1) JDK 1.8 installation. Make sure to update your JDK to version 1.8 or higher.
2) Jenkins installation. Ensure your Linux machine has access to a Jenkins instance. Install the following Jenkins plugins, which are required by the CICD scripts: Parameterized Trigger plugin, Delivery Pipeline plugin (version 1.3.2), HTML Publisher plugin.
3) Git client. Make sure to use Git client 2.7.4 or later on your Linux machine.
4) Bitbucket/Github (repository). Do the following to gain access to the remote repository: set up SSH access to the remote repositories (Bitbucket/Github). A Bitbucket server administrator can enable SSH access to the Git repositories in Bitbucket server for you. This allows your Bitbucket user to perform secure Git operations between your Linux machine and the Bitbucket server instance. Note: a Bitbucket repository was used with this implementation. Then create the local repository by cloning the remote repository; you can run the below commands from your <bitbucket_home>, where <bitbucket_home> is where you want your local repository to reside (i.e. /scratch/bitbucket):
cd <bitbucket_home>
git clone <bitbucket_ssh_based_url>
5) Jq. You can download jq (a JSON parser) from: https://stedolan.github.io/jq/download/. Once downloaded, run the following commands: rename 'jq-linux64' to 'jq'; chmod +x jq; copy the 'jq' file to /usr/bin using sudo.
Note: At a minimum, the Git client and the jq utility must be installed on the same server where you are running the scripts. Jenkins and the Bitbucket repository can be on remote servers.
Scripts setup
Perform the following setup on your Linux machine to run the bash shell scripts:
- Create a new <cicd_home> directory on your local Linux machine (i.e. /scratch/cicd_home). Note: <cicd_home> is where all the CICD-related files will reside.
- Download the oic_cicd_files.zip file to your <cicd_home> directory.
- Run unzip to extract the directories and files.
Once unzipped, you should see the below file structure under your <cicd_home> directory. From <cicd_home>, run the below command to ensure that you are using Git version 2.21.0 or later:
> git --version
For CI (Continuous Integration)
Two shell scripts are provided for the CI process:
export_integrations.sh
This script exports integrations (IARs along with the corresponding connection json files) from the source OIC environment. The script allows the user to either export ALL integrations, or to export one or more specified integrations. For exporting one or more integrations, under the <cicd_home>/01_export_integrations directory, edit the config.json file and update it to include the integration identifier (code) and the version number that you want to back up, one integration per line, in the below json format:
[
  { "code": "<integration1_Id>", "version": "<integration1_version>" }
  { "code": "<integration2_Id>", "version": "<integration2_version>" }
  ..
]
For example:
[ { "code": "SAMPL_INCID_DETAI_FROM_SERVI_CLO", "version": "01.00.0000" } ]
Note: The above steps are not required if you want to export ALL integrations; in that case the config.json file is created automatically by the script.
push_to_repository.sh
This script utilizes the Git utility to commit and push integration artifacts to the remote repository. This allows the developer to save the current working integrations and to fall back to previous versions as needed.
For CD (Continuous Delivery)
Two shell scripts are provided for the CD process:
pull_from_repository.sh
This script pulls the integration artifacts from the remote repository and stores them in a local location.
deploy_integrations.sh
This script deploys integration(s) to the target OIC environment. The user has the option to only import the integrations, or to deploy them (import the IARs, update the connections and activate the integrations). Perform the following steps to either import or deploy integrations:
1) Under the <cicd_home>/04_Deploy_Integrations/config directory, edit the integrations.json file to include the below information for the integrations to be imported/deployed: the integration identifier (code) and the integration version number, plus the connection identifiers (codes) of the related connections used by the integration. For example:
{ "integrations":
  [
    {
      "code":"SAMPL_INCID_DETAI_FROM_SERVI_CLO",
      "version":"01.00.0000",
      "connections": [
        { "code":"MY_REST_ENDPOINT_INTERFAC" },
        { "code":"SAMPLE_SERVICE_CLOUD" }
      ]
    }
  ]
}
2) Prior to deploying the integration, update the corresponding <connection_id>.json file to contain the expected values for the connection (i.e. WSDL URL, username, password, etc.).
For example, SAMPLE_SERVICE_CLOUD.json contains:
{
  "connectionProperties":[
    {
      "propertyGroup":"CONNECTION_PROPS",
      "propertyName":"targetWSDLURL",
      "propertyType":"WSDL_URL",
      "propertyValue":"<WSDL_URL_Value>"
    }
  ],
  "securityPolicy":"USERNAME_PASSWORD_TOKEN",
  "securityProperties":[
    {
      "propertyGroup":"CREDENTIALS",
      "propertyName":"username",
      "propertyType":"STRING",
      "propertyValue":"<user_name>"
    },
    {
      "propertyGroup":"CREDENTIALS",
      "propertyName":"password",
      "propertyType":"PASSWORD",
      "propertyValue":"<user_password>"
    }
  ]
}
Import Jenkins Jobs
While you can create the Jenkins jobs manually, you have the option to import them by using the jenkins_jobs.zip file.
To import the Jenkins jobs:
1) Download and unzip the jenkins_jobs.zip file to your <jenkins_home>/.job directory, where <jenkins_home> is the location where your Jenkins instance is installed.
2) Restart the Jenkins server.
3) Once the Jenkins server is restarted, log in to Jenkins (UI) and update all parameters for the below four jobs as per your environment:
01_Export_Integrations_and_Push
02_Pull_Integrations_and_Deploy
02a_Pull_Integrations_from_Repository
02b_Deploy_Integrations_to_Target
For example: GIT_INSTALL_PATH: /user/local/git
Update the 'Run_Location' parameter in all other child jobs as per your environment (where the script used by each child job is located). For example, in the configuration of the Export_Integrations job: RUN_LOCATION: <cicd_home>/01_export_integrations, where <cicd_home>/01_export_integrations is the full path to where the corresponding script (export_integrations.sh) resides. Note: make sure the path does not contain a trailing '/'. For the other repository-related child jobs (i.e. Pull_from_Repository, Push_to_Repository, etc.), also update the GIT_INSTALL_PATH parameter to where your Git is being run from.
NOTE: If there is no need to update the connection information for the integrations, then the job 02_Pull_Integrations_and_Deploy can be used to pull integration artifacts from the repository and also deploy the integrations. If the connection information needs to be updated (i.e. user name, user password, WSDL URL, etc.), then:
- First run 02a_Pull_Integrations_from_Repository to pull the integration artifacts from the repository
- Update the connection json files to contain the relevant information
- Call 02b_Deploy_Integrations_to_Target to deploy the integrations
Create Jenkins Pipeline Views
To create a Pipeline View, ensure the Delivery Pipeline plugin is installed as mentioned earlier. Perform the following steps:
1) Log in to Jenkins.
2) From the Jenkins main screen, click on '+' to add a new view. Enter the view name: 01_OIC_CD_Pipeline_View. Go under Pipelines and click on Add to add components:
Component Name: OIC_CI_Pipeline (or any relevant name for the view)
Initial Job: 01_Export_Integrations_and_Push (this is the root job in the pipeline)
Click Apply, then OK.
Select the following options:
- Enable start of new pipeline build
- Enable rebuild
- Show total build time
- Theme (select 'Contrast')
Keep the default values for all other options. The following view will be available for your pipeline job. (Create the CD view using the same steps above.)
Reports
Reports are available for the Export_Integrations, Push_to_Repository and Deploy_Integrations jobs. For a report to be displayed properly, we need to relax the Content Security Policy rule so that the style codes in the HTML file can be executed.
Relax the Content Security Policy rule
To relax this rule, from Jenkins, do the following:
1. Go to Manage Jenkins / Manage Nodes.
2. Click settings (gear icon).
3. Click Script Console on the left and type in the following command:
System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "")
Click Run. If you see the 'Result:' output below the "Result" header, then the protection was disabled successfully. Otherwise, click 'Run' again.
4. Restart the Jenkins server.
To view the report for the Export_Integrations job, for example, click on the OIC Export Integrations Report link after the job is run.
Steps to create a report
1) From the selected job screen (i.e. Export_Integrations), click on Configure.
2) Under Post-build Actions, add Publish HTML reports for the job.
3) Use the following parameters (as an example):
HTML directory to archive: ./
Index page[s]: <the created html file>
Report title: <enter a proper title for the report>
4) Click Apply, then Save.
Execute Jenkins Jobs
To run the CICD scripts, execute the below Jenkins pipeline jobs:
For CI: 01_Export_Integrations_and_Push. Run this job to execute the CI scripts. Wait for the parent job and its downstream jobs to complete, then click on the child jobs, Export_Integrations or Push_to_Repository, and the Report link to see the results.
For CD: If there is no need to update connections, then run 02_Pull_Integrations_and_Deploy. The report is available under the Jenkins job Deploy_Integrations screen. If connection information needs to be updated prior to deploying the integrations, then first pull the integration artifacts from the repository with 02a_Pull_Integrations_from_Repository, update the connection json file(s) as needed, then deploy the integrations to the target OIC environment by running 02b_Deploy_Integrations_to_Target. The report is available under the Jenkins job Deploy_Integrations_to_Target screen.
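To make the CI half more concrete, here is a minimal sketch of the kind of export-and-push sequence that export_integrations.sh and push_to_repository.sh wrap. It is an illustration, not the actual script contents: the host, credentials and repository paths are placeholders, and the /archive export endpoint is assumed from the OIC REST API, so verify it against the API documentation for your OIC version.

# Read the single-entry config.json example shown earlier (requires jq).
CODE=$(jq -r '.[0].code' config.json)
VERSION=$(jq -r '.[0].version' config.json)
# Export the IAR; the integration id in the path is "<code>|<version>",
# with the | encoded as %7C.
curl -s -u "$OIC_USER:$OIC_PASS" \
  -o "${CODE}_${VERSION}.iar" \
  "https://<oic-host>/ic/api/integration/v1/integrations/${CODE}%7C${VERSION}/archive"
# Commit and push the artifact to the local clone of the repository.
cp "${CODE}_${VERSION}.iar" "<bitbucket_home>/<repo>/"
cd "<bitbucket_home>/<repo>"
git add . && git commit -m "Backup ${CODE} ${VERSION}" && git push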


Update library in continuous manner even after being consumed by integration

The update library feature provides much-awaited update functionality for registered libraries. As part of a library update, the user can add new functions, remove unused functions or modify the logic of existing functions.
Feature Flag
This feature is available when the oic.ics.console.library.update feature flag is enabled.
Here is how a library update works
Let's consider a simple Math library that defines a basic add function that takes two parameters and is used in an integration. Suppose you want to add other math functions like subtract, multiply and divide, and change the way the add operation is executed. You may update the registered library with a new JS file that has these new functions, using the Update menu on the library list page. Upload the new JS file using the update library dialog. When attempting to update the library with new code, the system validates the new library file and ensures it meets the following conditions:
- Function signatures in the new file being uploaded must match the signatures of functions in the existing library that are used in integrations.
- You may add new functions, but removing a function used in integrations results in rejection of the new file.
If the new library file adheres to these conditions, the library is updated and the library edit page is displayed for further configuration changes. Please note that if the returns parameter of a function used in an integration was changed in the updated library, the system does not flag an error, but it invalidates the downstream mapping in integrations, which should then be re-mapped. Deactivate and activate the integration for the changes to take effect in integrations that use the updated library.
The following is an example where the validation conditions are satisfied and the system accepts the uploaded library file. The add function in math.js is used in integrations, so the signature of this function in the updated library file is unchanged even though the add function definition is changed. As mentioned above, the containing library file name is also part of a function signature, so the file name is unchanged in the updated library. The updated library may contain other function definitions.
The example mentioned below is an illustration where validation fails and the uploaded library file is rejected: as the signatures of functions in the new library file do not match those of the library in the system, the new library file is rejected.


Migrating from ICS4SaaS to OIC4SaaS

Introduction

Oracle Integration Cloud Service for Oracle SaaS (aka ICS4SaaS) is a version of Oracle’s Integration Cloud Service (ICS) targeted for use with Oracle SaaS products. The ICS4SaaS service has been sold with Oracle SaaS products and has appeared on the SaaS price list. As this service is not available on Oracle’s OCI infrastructure, Oracle provides a migration path for ICS4SaaS customers to migrate their workloads to OCI. Oracle introduced a new offering called Oracle Integration for Oracle SaaS (aka OIC4SaaS). This offering is based on the Oracle Integration (OIC) service, which runs exclusively on the OCI infrastructure. The migration path is similar (but not identical) to the migration path for the corresponding tech (PaaS) SKUs, namely the migration of ICS to OIC.

SKUs for ICS for SaaS

The SKUs for ICS for SaaS, along with list prices, are given in the table below:

| SKU | Monthly subscription price | Metric | Service includes per month | Part # |
|---|---|---|---|---|
| Oracle Integration Cloud service for Oracle SaaS | 850 | Hosted Environment | 1 Hosted Env, 10GB in and out per day | B87181 |
| Additional non-Oracle SaaS connection | 1000 | Hosted connection | 1 connection of choice | B87182 |
| Additional 10GB per day | 1000 | Each | 10GB in and out per day | B87183 |
| Oracle Integration Cloud Service for Oracle SaaS Midsize | 585 | Hosted Environment | - | B87609 |
| Additional non-Oracle SaaS Midsize connections | 650 | Hosted Connection | - | B87610 |
| Additional 10GB per day Midsize | 900 | Each | - | B87611 |

Note that for the purposes of migration to OIC for SaaS, the midsize SKUs above (last 3 rows) behave the same as their corresponding ICS4SaaS SKUs (first 3 rows).

SKUs for OIC for SaaS

The SKUs for OIC for SaaS, along with list prices, are given in the table below:

| SKU | Monthly subscription price | Metric | Part # |
|---|---|---|---|
| Oracle Integration Cloud Service for Oracle SaaS – Standard | 600 | 1 Million messages / month | B91109 |
| Oracle Integration Cloud Service for Oracle SaaS – Enterprise | 1200 | 1 Million messages / month | B91110 |

Note that for both ICS for SaaS and OIC for SaaS, the same restriction applies: each integration must have an endpoint in an Oracle Cloud SaaS application. For further details on OIC for SaaS, refer to the Oracle Fusion Cloud Service Description document.

Migration paths

Oracle allows you to choose whether to migrate from all flavors of ICS to either the OIC subscription offering (OIC for SaaS) or to OIC under Universal Credits. In fact, all 4 paths below are supported:

| Source | Target | Comments |
|---|---|---|
| ICS | OIC | See migration documentation here |
| ICS for SaaS | OIC | Migration procedures are the same as ICS -> OIC above |
| ICS for SaaS | OIC for SaaS | This migration path is the focus of this document |
| ICS | OIC for SaaS | Migration procedures are the same as ICS for SaaS -> OIC for SaaS |

Why are migration procedures different for OIC for SaaS?

When migrating ICS to OIC, you need to create and use OCI cloud storage. This storage is used to store the exported metadata from ICS, providing a secure mechanism to hold the metadata of the entire ICS instance, which includes security credentials to your SaaS and other applications and systems. An OIC for SaaS account is dedicated to OIC: customers pay a subscription price for OIC, and other services that are part of Universal Credits (including object storage) cannot be provisioned in it. Therefore, the migration procedures are different. Oracle provides, and has tested, two options for migrating ICS to OIC for SaaS: piece-meal migration and wholesale migration. The preferred option is wholesale migration.
Migration Option #1: Piece-Meal Migration

In this option, you migrate your integrations one by one via the export and import features of ICS/OIC. Oracle provides the ability to export and import individual integrations (and lookups); see exporting and importing components in the Oracle documentation. Using this capability, you can export each of your integrations from ICS4SaaS and then import them into OIC4SaaS. Note that the export does not include security information such as the login/password to your end applications, so after the import you must re-add the security information. This option obviates the need for OCI cloud storage, as the export can be saved to a local file. However, you will be required to export/import each integration individually, and you will be required to re-add security credentials. Consider this option only when you have a relatively small number (<10) of integrations to migrate and you do not want to obtain a Universal Credit account.

Migration Option #2: Wholesale Migration

In this option, you migrate all your integrations along with all metadata and security information in a single bulk operation. This option does depend on the availability of OCI Cloud Storage. Therefore, you will need to separately procure a Universal Credit account and make the Cloud Storage in this account available for the migration process. This Universal Credit account is in addition to your OIC4SaaS environment. The rest of the migration path is the same as the migration from ICS to OIC. If you already have a Universal Credit account, you can use that. If not, you can obtain one; in fact, Oracle offers free 30-day trials which can be leveraged for this purpose. Even if you choose a paid account, cloud storage is relatively inexpensive and only required for the duration of the migration. After the migration is complete, you can delete the Cloud storage. If you use a 30-day trial for the migration, you can even delete the account afterward, though we hope you will decide to keep it and take advantage of the rich services and capabilities available there.

NOTE: Wholesale migration is the preferred option. Consider gaining access to a Universal Credit account (including a 30-day trial) to enable wholesale migration.

What if I have multiple ICS4SaaS instances?

Chances are that you have a Stage and a Production instance, and perhaps other instances. As with the ICS to OIC migration, Oracle recommends a 1-for-1 migration when you have multiple ICS4SaaS instances. That is, each ICS4SaaS instance gets migrated to its own OIC4SaaS instance.

What if I have multiple ICS4SaaS accounts?

You can request multiple OIC4SaaS accounts to match your ICS4SaaS environment. It is also possible to consolidate your OIC4SaaS instances into a single account. Note that each instance then typically shares the same user base, as they all share the same IDCS tenancy. However, Oracle is rolling out the ability for OIC (and OIC4SaaS) to leverage secondary IDCS stripes, which allows each of your instances to have a unique set of OIC service administrators and instance administrators.

When migrating to OIC4SaaS, should I select Standard or Enterprise?

If your integrations are all Cloud to Cloud, the Standard version should suffice. If you require integration using one of the on-premises adapters, or if you want to take advantage of Process Automation (which is not available in ICS4SaaS), then you should choose the Enterprise version.
Both versions use the same pricing model, based on the number of 1-million-messages-per-month units you require.

How does pricing compare?

ICS4SaaS and OIC4SaaS are offered in two completely different pricing models. ICS4SaaS was sold per connection, whereas OIC4SaaS is sold by message volume (one unit = 1M messages per month). OIC4SaaS generally has more favorable pricing than ICS4SaaS for most customers, though this depends on specific customers’ integration requirements and usage patterns. For migration scenarios, Oracle will work with customers to ensure they pay the same price (or less) for equivalent functionality in OIC4SaaS. Note that the ICS4SaaS additional non-Oracle SaaS connections SKU (B87182) does not have an equivalent SKU in OIC for SaaS: the base SKUs for OIC for SaaS (B91109, B91110) include non-Oracle SaaS connections, and no separate purchase is required.

I am ready to migrate. How do I get started?

Contact your Oracle sales representative to help guide you through the process. Migration to OCI includes a commercial migration component, so you will start paying for OIC4SaaS while no longer paying for ICS4SaaS. Oracle provides a 4-month window for migrations, during which your ICS4SaaS instances remain available (at no charge) to give you ample time to perform the migration and associated testing. Oracle continues to invest in enhancements to the migration processes, so please be sure to ask your Oracle sales representative about the latest available tooling that can be applied to your specific environment.


How to use File Reference in Stage File

The Stage File action can read, write, zip, unzip, and list files in a staged location known to Oracle Integration. A file reference can be used to utilize the local file processing capabilities provided by the Stage File action.

Advantages:
- Any upstream operation that provides a file reference can be processed directly, which simplifies orchestration. For example, a REST connection allows downloading an attachment into the OIC/.attachment folder; it provides a file reference but does not provide a file name or directory.

OIC/ICS operations that provide references:
- Attachment Reference (REST Adapter: attachments)
- Stream Reference (REST invoke response)
- MTOM (SOAP invoke response)
- FileReference (FTP)
- Base64FileReference (encodeString) function

This is how the Stage File action’s Configure Operation page looks for the following operations:

Read Entire File operation: the 'Configure File Reference' option is available and defaults to No. On selecting Yes, the 'Specify the File Name' and 'Specify the Directory to read from' fields are replaced with the 'Specify the File Reference' field.

Read File in Segments operation: the 'Configure File Reference' option is available and defaults to No. On selecting Yes, the 'Specify the File Name' and 'Specify the Directory to read from' fields are replaced with the 'Specify the File Reference' field.

Unzip File operation: the 'Configure File Reference' option is available and defaults to No. On selecting Yes, the 'Specify the Zip File Name' and 'Specify the Zip File Directory' fields are replaced with the 'Specify the File Reference' field.

To specify the file reference, you can click the Expression Builder icon to build an expression.

This is how a file reference can be used to utilize the local file processing capabilities provided by the Stage File action.
