
The Integration blog covers the latest in product updates, best practices, customer stories, and more.

Recent Posts

Integration

Conditional Mappings in Oracle Integration

Sometimes, when modeling integrations, we need to map data dynamically depending on other data. In this blog, we will look at creating conditional mappings using Oracle Integration.

Use case: Consider this pseudo code sample of the mapping logic. How can we achieve this in the mapper UI?

if PER03 == 'TE' { Contact.Phone = PER04 }
if PER05 == 'TE' { Contact.Phone = PER06 }
if PER07 == 'TE' { Contact.Phone = PER08 }

Solution:

First, enable "Advanced" mode and open the Components palette. This exposes the XSLT statements we need to create conditional mappings.

Locate the phone element in the target tree. This is the element where we want the conditional mappings. If phone appears in a lighter color and italicized, the element does not yet exist in the mapper's output. Right-click and select Create Target Node. Without this step, we will not be able to insert conditions around phone.

Drag and drop the choose element as a child of phone. The cursor position around phone indicates whether choose will be inserted as a child (bottom left) or as a parent (upper right). In this case, we insert it as a child.

Now that choose is in the tree, drag and drop when as a child of choose three times to create placeholders for our three conditions. Note that you could also drop a when statement as a sibling before or after another when.

Each condition also needs a corresponding mapping value, so drag and drop value-of as a child of each when. We now have the tree structure needed to create our conditional expressions and mapping expressions.

Let's create the expressions for the first condition and mapping:

if PER03 == 'TE' { Contact.Phone = PER04 }

To create the condition, select the first when in the target tree. Drag and drop PER03 from the source tree into the expression builder, complete the expression by typing = "TE", and click the checkmark to save it.

To create the mapping, select the value-of under the first when and drag and drop PER04 into it.

The first conditional mapping is complete. Repeat these steps for the second and third conditional mappings to achieve the desired logic, then save the map and the integration:

if PER05 == 'TE' { Contact.Phone = PER06 }
if PER07 == 'TE' { Contact.Phone = PER08 }

The finished product will look like this. Congratulations! We have just used Oracle Integration to create conditional mappings.
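For reference, the choose/when/value-of structure built above corresponds roughly to the XSLT sketch below. The element names (Phone, PER03, etc.) are taken from the pseudo code; $src is only a placeholder for the actual source path, and the real namespaces and paths in your map will differ.

<Phone>
  <xsl:choose>
    <!-- if PER03 == 'TE' then Contact.Phone = PER04 -->
    <xsl:when test="$src/PER03 = 'TE'">
      <xsl:value-of select="$src/PER04"/>
    </xsl:when>
    <!-- if PER05 == 'TE' then Contact.Phone = PER06 -->
    <xsl:when test="$src/PER05 = 'TE'">
      <xsl:value-of select="$src/PER06"/>
    </xsl:when>
    <!-- if PER07 == 'TE' then Contact.Phone = PER08 -->
    <xsl:when test="$src/PER07 = 'TE'">
      <xsl:value-of select="$src/PER08"/>
    </xsl:when>
  </xsl:choose>
</Phone>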


Integration

Kafka Adapter for OIC

The Kafka adapter for Oracle Integration Cloud came out earlier this month, and it was one of the most anticipated releases.

So what is Kafka? You can find out all about it on https://kafka.apache.org/, but in a nutshell: Apache Kafka is a distributed streaming platform with three main capabilities:

Publish and subscribe to streams of records.
Store streams of records in a fault-tolerant, durable way.
Process streams of records as they occur.

Kafka runs as a cluster on one or more servers that can span multiple data centres. The Kafka cluster stores streams of records in categories called topics, and each record consists of a key, a value, and a timestamp.

Kafka Adapter Capabilities

The Apache Kafka Adapter enables you to create an integration in Oracle Integration that connects to an Apache Kafka messaging system to publish and consume messages from a Kafka topic. These are some of the Apache Kafka Adapter benefits:

Consumes messages from a Kafka topic and produces messages to a Kafka topic.
Enables you to browse the available metadata using the Adapter Endpoint Configuration Wizard (that is, the topics and partitions to which messages are published and from which they are consumed).
Supports consumer groups.
Supports headers.
Supports the following message structures: XML schema (XSD) and schema archive upload, sample XML, and sample JSON.
Supports the following security policies: Simple Authentication and Security Layer Plain (SASL/PLAIN), and SASL Plain over SSL, TLS, or Mutual TLS.

More details are on the documentation page: https://docs.oracle.com/en/cloud/paas/integration-cloud/apache-kafka-adapter/kafka-adapter-capabilities.html

How to set up everything?

I installed Kafka on an Oracle Cloud VM running Oracle Linux. This was quite straightforward. If you are new to Kafka, there are plenty of online resources with step-by-step installation instructions. You need ZooKeeper and Apache Kafka (Java is a prerequisite in the OS). I have a very simple configuration with one broker/node only, running on localhost.

From an OIC standpoint, you must satisfy the following prerequisites to create a connection with the Apache Kafka Adapter:

Know the host and port of the bootstrap server to use to connect to a list of Kafka brokers.
For security, have a username and password (unless you choose no security policy).
For SASL over SSL, TLS, or Mutual TLS, have the required certificates.
The OIC connectivity agent needs to be up and running. I installed the connectivity agent on the same machine as Kafka, but it can be installed anywhere as long as it is in the same network.

How to create a connection?

Choose Apache Kafka as the desired adapter. Name your connection and provide an optional description.

Bootstrap Server: I used localhost:9092* because the actual connectivity is handled by the agent, so in reality we are connecting to the Kafka server as if we were inside the machine where it runs. You can also use the private IP of the machine instead of localhost.
*9092 is the default Kafka port, but you can verify the one you are using in <Kafka_Home>/config/server.properties

Security: I chose no security policy, but in a real-life scenario this needs to be considered. More on this can be found in the official documentation!

Agent Group: Select the group to which your agent belongs.

Finally, test and verify that the connection is successful.

Create an Integration (Consume Messages)

Now we can create a Scheduled Integration and drag the Kafka Adapter from the Palette onto the canvas.
We can Produce or Consume Messages. Let's look at Consume Messages first.

We have two options for consuming messages: with or without an offset. Part of what makes Kafka unique (compared with JMS) is the client's ability to select from where to read the messages – offset reading. If we choose offset reading, we need to specify the offset, and message consumption will start from there, as seen in the picture below. (Picture from https://kafka.apache.org)

Select a Topic: My Kafka server only has one topic available – DTopic.

Specify the Partition: Kafka topics are divided into several partitions. Each one can be placed on a separate machine so that multiple consumers can read from a topic at the same time. In our case there is only one partition. We can choose the one to read from, or give Kafka control of the choice – if we do not select a specific partition and use the Default selection, Kafka considers all available partitions and decides which one to use.

Consumer Group: Kafka consumers are part of a consumer group. Those consumers read from the same topic, and each consumer in the group receives messages from different partitions in the topic. (Picture from O'Reilly – Kafka: The Definitive Guide) The main way to scale data consumption from a Kafka topic is by adding more consumers to a consumer group. (Picture from O'Reilly – Kafka: The Definitive Guide) I added this Integration to a consumer group called "test-consumer-group", which only has one consumer.

Specify Option for consuming messages:
Read latest: Reads the latest messages, starting at the time the integration was activated.
Read from beginning: Select to read messages from the beginning.

Message structure and headers: I chose not to define the message structure, and the same for headers.

This is what the Integration looks like. It does not implement any specific use case; it is a pure showcase of the Kafka adapter capabilities. Note that we are not mapping any data in the mapping activity.

Now, going back to the Kafka server, we can produce some messages. Using the ./kafka-console-producer.sh script we can produce messages in the console:

>message for oic

When you run the Integration, that message is read by OIC, as shown here in the Activity Stream payload. The option to consume messages was Read latest; otherwise we would get more messages in the output.

Easy and straightforward – which is the main benefit of adapters: remove all the client complexity and focus on the use case implementation.

Create an Integration (Produce Messages)

Lastly, how can we produce messages? I created a new topic – DTopic2 – to receive the messages. Yes, I know, not very imaginative naming! I select the desired topic, leave the partition as default, and do not specify a message structure or headers.

We need to map data, which translates to: what is the data we want to produce in the topic? To keep it simple, we hard-code the attribute Content with the following message:

Now we start up the console consumer to track the new messages being sent to the topic. I run the Integration, and we can see the message being consumed in the Kafka console! And if we go to OIC monitoring, we see exactly the same!

This shows how easy it is to create and manage Kafka clients, both producer and consumer, from within OIC. For more information please check: https://docs.oracle.com/en/cloud/paas/integration-cloud/apache-kafka-adapter

Sources: kafka.apache.org / O'Reilly Kafka: The Definitive Guide / Oracle Documentation
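To reproduce the console steps above on your own broker, the stock Kafka console clients are enough. This is a minimal sketch assuming the single-broker localhost setup and the topic names used in this post; adjust the broker address and topics for your environment.

$ ./kafka-console-producer.sh --broker-list localhost:9092 --topic DTopic
>message for oic

$ ./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic DTopic2 --from-beginning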


Integration

Boost Your WebForm Productivity with our New Expression Builder Features

We're introducing several new Oracle Integration improvements we hope will markedly boost your web form expression productivity. These enhancements are an example of our ongoing efforts to address your feedback!

Expression Editor Redesign

We've redesigned the form expression editor to make it easier to build and keep track of event logic. The expression editor content is now cleaner, more compact, and easier to understand. Many of the changes were made in response to feedback from customers and the User Assistance team. For example, function variables are now aligned, and expression summaries are now clearly differentiated from input fields.

Before example:
After example:

To see the new expression editor in action, simply follow these steps:
1. Open a Form.
2. Add an Input Text control to the Form canvas.
3. Click on the Input Text.
4. Add any event from the General Properties panel.
5. Click the Edit icon next to the Event to open the expression editor.
6. Start exploring! Try adding different types of blocks such as Actions, Ifs, Loops, Connectors, Filters, and Reusable Snippets.

Reusable Snippets

Reusable Snippets allow users to extract and name a group of blocks (Actions, Ifs, Loops, Connectors, and Filters) and use that group of blocks in other Events in their Form Presentation. This feature saves users from having to recreate the same event logic over and over again. Instead, users can create a reusable snippet and use it wherever they want to implement the same logic. Moreover, users can manage their reusable snippets in a central location. If users want to make changes to a reusable snippet, they can modify the master copy and their changes will be reflected wherever the reusable snippet is used.

To learn how to use reusable snippets, follow the steps below for extracting, using, and managing reusable snippets.

Extracting Reusable Snippets

Let's pick up right after step 6 from the Expression Editor Redesign steps. If you haven't followed those steps already, go ahead and follow them now.
7. Make sure you have added at least one block to your expression editor. Click the Extract Snippet button in the top right-hand corner.
8. Give your Reusable Snippet a name if you want; a default name is already provided.
9. Select the blocks you want to extract by using the toggles on the right-hand side of the blocks.
10. Click the OK button in the top right corner to finish extracting your Reusable Snippet.
11. Click the OK button in the bottom right corner to save and close the dialog.

Next, let's learn how to use Reusable Snippets.
12. Add a new event. You can add it to the same control if you wish.
13. Open the expression editor for the new event.
14. Click the + Reusable Snippet button in the bottom toolbar of the expression editor dialog.
15. In the Reusable Snippet dropdown, you will see all the Reusable Snippets you have created. Select one to use in your event, and you will see its event logic in read-only mode.
16. Click OK and close the dialog.

Note: You can also detach a Reusable Snippet if you no longer want changes to the master copy of the Reusable Snippet to be reflected wherever the snippet is used. To detach, click the Detach icon (the leftmost of the four icons in the top right of the Reusable Snippet block). If you detach a Reusable Snippet, the blocks inside become like any other blocks in your event, and you will be able to edit them.

Managing Reusable Snippets

Finally, let's learn how to manage Reusable Snippets.
17. Go to the Presentation properties panel. You will see a list of all the Reusable Snippets in your Presentation. From here you can create new Reusable Snippets, edit existing Reusable Snippets, and "delete" Reusable Snippets. Deleting will detach all instances of a Reusable Snippet in your events.
18. Click the Edit icon next to the Reusable Snippet name to open the Configure Reusable Snippet dialog. In this dialog, you can rename your Reusable Snippet and edit its event logic. Remember that updating a reusable snippet here updates it wherever it is used in your events.

Congratulations! You now know how to work with reusable snippets in your process application forms! See https://docs.oracle.com/en/cloud/paas/integration-cloud/user-processes/create-web-forms.html for more info!

Credits: Kalyn Chang, Nicolas Laplume, and Carolina Arce Terceros


Netsuite Custom Field Discovery

Prerequisite: Before using an existing NetSuite connection, a metadata refresh needs to be performed on it. Make sure the last refresh status is Complete for the connection.

This feature exposes custom fields for standard objects as named fields in the mapper, and during NetSuite endpoint creation for the advanced search and saved search operations. The feature applies to all basic operations (except delete) and all search operations of NetSuite, and to both sync and async processing modes.

For basic CRUD operations, custom fields are exposed in the mapper as named fields. The custom field name is derived from the name given to the custom field in NetSuite. This makes it easier to map without needing to know the internalId and scriptId of a particular custom field for a standard object. For example, here is the mapping done for a NetSuite update operation. The image below shows a request mapping from REST (trigger) to a NetSuite Update operation on the Customer standard object.

You can see that two fields have been mapped for the NetSuite update operation: ICSEmailId and AdvertisingPreferences.

ICSEmailId is a simple-type custom field; no further work is required on the part of the integration developer. Just use it like any other simple-type field.

AdvertisingPreferences is a complex-type custom field. It correlates to a multi-select custom field in NetSuite. For complex-type custom fields, listItemId correlates to the internalId of the list item. For the invoke request to the NetSuite update operation to succeed, the integration developer needs to ensure the listItemId value is mapped. To map more than one list item, just repeat the ListItem element and do the required mapping. To get the internalId of a list item of a complex-type custom field in the NetSuite UI, go to Customization -> Lists, Records, & Fields -> Lists -> and open the particular custom field in question.

The image below provides an example of response-side mapping for the Customer standard object for a NetSuite get operation.

For search operations, the approach to be taken by the integration developer for each operation type is as follows.

1) Basic Search: On the request side, the mapper surfaces all discoverable custom fields under the customFieldList element under the standard object. The customer does not need to provide the internalId or scriptId for the custom field; however, it is required to provide the custom-field-specific search values as well as the operator values. For select and multi-select custom fields, the searchValue element under the named custom field needs to be used. If more than one list item needs to be specified, the searchValue element needs to be cloned. The internalId for the list item can be obtained from the NetSuite UI as discussed earlier in the blog. The response side for basic search is the same as described above for the get operation response: the response payload from NetSuite will contain named custom fields.

Diagram showing request mapping for a basic-search custom field for a standard object.
Diagram showing a custom field search for AdvertisingPreferences, which is a multi-select custom field, specifying two list members (by cloning the searchValue element).

2) Joined Search: On the request side, the mapping to be done remains more or less the same as for basic search. The only difference is that we can also make use of discovered custom fields in the mapper for the joined business object. On the response side, the response is the same as for basic search and get operations. One can make use of the for-each operation shown below to extract the discovered custom field.

3) Advanced Search and Saved Search: The request-side mapping remains the same as for basic and joined search. In addition, we can also select discovered custom fields for the response sub-object during endpoint creation. The diagram below shows that.

Please note that customers can still use the old way of mapping custom fields for all operations, by using internalId and scriptId as before.
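As a rough illustration of the basic-search request shape described above, the mapped fragment for the multi-select example ends up looking something like the sketch below. The element names and the operator value are only indicative (the actual names come from the schema generated for your connection), and the internalId values are the list-item IDs obtained from the NetSuite UI.

<customFieldList>
  <AdvertisingPreferences>
    <operator>anyOf</operator>          <!-- assumption: an "any of" style operator for a multi-select search -->
    <searchValue internalId="1"/>
    <searchValue internalId="3"/>       <!-- cloned searchValue for a second list member -->
  </AdvertisingPreferences>
</customFieldList>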


Integration

See How Easily You Can Access Integration's metadata

Many times we may want to use the name of the integration or its version inside the OIC integration flow without hardcoding the values. We may also want to access dynamic values like the runtime instance ID, invoked-by, and so on, inside the integration flow. All of this is now possible with the introduction of a new feature called 'Integration Metadata Access', which allows access to most of the commonly useful metadata. In this blog, we will see which metadata we can access and how we can use it in the integration flow.

The minimum Oracle Integration version required for the feature is 20.34310.

List of exposed metadata:

Integration
  Name
  Identifier
  Version
Runtime data
  Instance ID
  Invoked by name
Environment data
  Service instance name
  Base URL

All of these metadata are read-only fields and can be used in any orchestration action such as Assign, Log, Notification, etc.

Step by Step Guide:

1. Create a new integration or edit an existing integration flow.
2. Add a new Log action.
3. Edit the log message; in the source tree you can see the list of metadata.
4. Drag and drop the required metadata into the expression builder.
5. Save and activate the integration.
6. Trigger the integration flow using the endpoint.
7. Go to the Monitoring > Tracking page.
8. Open the particular run and click 'View Activity Stream'; you should see the Log message which logs the integration name.
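For example, the log message built in step 4 of the guide above could combine several of these fields into one expression. The sketch below is only indicative; in practice you drag the nodes from the metadata source tree rather than typing them, and the element paths shown here are placeholders, not the actual names in the tree.

concat('Running integration ', $metadata/Integration/Name,
       ' version ', $metadata/Integration/Version,
       ', instance ', $metadata/RuntimeData/InstanceId)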


Invoke a Co-located Integration from a Parent Integration

The capability to 'Invoke an Integration from another Integration' is now GA – in other words, the ability to easily implement modular design is now GA. This topic was already covered some time ago here, but now that the feature is GA and available to every OIC user, it is worth a refresh!

What did it really change? Before this feature, we could achieve the same result, but that required exposing the desired Integration with a REST trigger, and we needed to create a Connection to enable calls to that Integration. Now we can simply call the Integration and avoid having to handle the Connection and the endpoint changes across different environments. There is no need to configure the Connection in the Integration, where we would need to define request/response payloads, headers, parameters, and many other settings available in the REST connector. It is much more practical!

This is how it looks: we simply drag the Integration icon from the Actions Palette onto the canvas.

Why is this important? Having the ability to call other Integrations allows us to divide parts of our work into smaller, manageable blocks, which gives flexibility and offers decoupling. Reusability is one of the most fundamental concepts in development. Let's see what Wikipedia has to say about modular design:

Modular design, or modularity in design, is an approach that subdivides a system into smaller parts called modules which can be independently created, modified, replaced or exchanged between different systems. A decoupled system allows changes to be made to any one system without having an effect on any other system.

Sounds about right to me!

Example Use Case

Recently I came across a use case that would benefit tremendously from this approach. The customer I talked with had a consolidated CRM (Oracle Engagement Cloud) that was synchronised with several legacy and on-premises systems across several locations. Let's look at the synchronisation of Accounts. The activity CreateAccount can be complex, far beyond the actual API request CreateAccount. One may need to verify whether the account already exists, and then decide to update the existing one or create a new one. From a data modelling perspective, the entity Account can have many relationships – Contacts, Addresses, Child Accounts, etc. If we factor in all of them, this can lead to a more complex set of steps to create or update an account. It is not important to dwell on all the details, just to understand that the activity CreateAccount has the potential to be complex. Because the customer's on-premises CRMs had different interfaces, we had to create two different Integration workflows within OIC. Best practice says that we should reuse the CreateAccount activity.

How to implement this? For this use case, we need to implement an App-Driven Orchestration Integration called CreateAccount, with a REST trigger (so that it can be called from another integration). As you can see below, in this Integration called CreateAccount_Demo we start by verifying whether the account already exists in Engagement Cloud and then decide whether to create a new one or update the existing account.

Now we can create an App-Driven Orchestration (or Scheduled) Integration that will connect to the on-premises CRMs. One will use the SAP adapter and the other the technology SOAP adapter, and both will synchronise the accounts back to Oracle Engagement Cloud. This diagram explains the result, where CreateAccount is used by both Integrations.

The ability to invoke an integration directly from within an integration is made possible with the Action "Call Integration", as seen below. Drag that icon onto the canvas and you will get a configuration wizard. Choose an appropriate name and provide a description. All your active Integrations of type Trigger are displayed. The CreateAccount_Demo Integration only implements a POST operation.

In the end you get something similar to the Integrations below, where the CreateAccount_Demo Integration is reused by both Integrations with just a simple drag and drop.

This is just an example use case of how this improved functionality could bring added value to your OIC implementation. The documentation gives more information on what can be done! If you are interested in the new features already released, please check the What's New page!


Announcing Early Access of SOA Suite for Kubernetes

The SOA Suite team is excited to announce the early access availability of Oracle SOA Suite on containers and Kubernetes. This program will lead to certification of SOA Suite deployment using containers on Kubernetes in production environments.

Scope

The scope of the eventual deliverable is as follows:
Provide container images for Oracle SOA Suite, including Oracle Service Bus.
Certify these container images for deployment on Kubernetes for production workloads.
In later phases, we will expand certification to additional components based on feedback received from the Early Access program.

Objective

With the growing adoption of containers and Kubernetes in data centers, this effort targets:
Supporting Oracle SOA Suite and Oracle Service Bus containers in production environments.
Enabling data center consolidation and modernization efforts.
Enabling SOA Suite's co-existence with cloud native applications.

Features

The following are the salient features of this release:
Container images created using the Oracle SOA Suite 12.2.1.3 release.
Certified deployment using WebLogic Kubernetes Operator (2.4) to deploy and manage Oracle SOA Suite and Oracle Service Bus with ease.
Support for searching and analyzing logs with ease using the ELK Stack.
Integration with a powerful metrics collection and alerting system based on Prometheus and Grafana.
Support for multiple load balancers such as Traefik, Voyager, and NGINX.

As part of the release, we are making supporting files, deployment scripts, and samples available on GitHub. You can access them here:
WebLogic Kubernetes Operator Documentation - https://oracle.github.io/weblogic-kubernetes-operator/
Oracle SOA Suite and Oracle Service Bus Quick Start Guide - https://github.com/oracle/weblogic-kubernetes-operator/tree/master/kubernetes/samples/scripts/create-soa-domain

Requirements

The requirements for the Operator are as follows:
Oracle Linux 7 (UL5+) or Red Hat Enterprise Linux 7 (UL4+)
Kubernetes 1.13.5+, 1.14.3+, or 1.15.2+ (check with kubectl version)
Flannel networking v0.11.0-amd64 (check with docker images | grep flannel)
Docker 18.9.1 (check with docker version)
Helm 2.14.3+ (check with helm version)
Oracle FMW Infrastructure Container Image 12.2.1.3 (download from OCR: fmw-infrastructure:12.2.1.3-200109)
Oracle SOA Suite 12.2.1.3.0 + latest CPU (30638100: 12.2.1.3 Bundle Patch 191208 (12.2.1.3.191208.1658.0113))
Oracle Database 12c or above

Limitations

Please note that you may encounter the following limitations: https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-fmw-domains/soa-suite/#limitations

Feedback

We are actively soliciting your feedback. You can provide feedback using:
Slack: #soa-k8s on oracle-weblogic.slack.com
GitHub Repo: https://github.com/oracle/weblogic-kubernetes-operator/tree/master/kubernetes/samples/scripts/create-soa-domain
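A quick way to confirm the prerequisite versions listed in the Requirements section above is to run the checks the list itself mentions on your Kubernetes nodes:

$ kubectl version                    # Kubernetes 1.13.5+, 1.14.3+, or 1.15.2+
$ docker version                     # Docker 18.9.1
$ docker images | grep flannel       # Flannel v0.11.0-amd64
$ helm version                       # Helm 2.14.3+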


Integration

Use Global Variables and Data Stitch to log request payloads

In this blog, we will look at two new Integration features: Global Variables and Data Stitch. Data Stitch allows us to make assignments to complex-type variables. We will show how these features can be leveraged to log invoke request payloads in case of a fault.

Prerequisite

Enable the following features:
oic.ics.console.integration.stitch-action
oic.ics.console.integration.complex-variables
To enable feature flags, refer to the blog Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 200113.1400.33493.

Use case: When an invoke fails, we want to log the request payload. Currently, request payloads are visible after the invoke, but not visible inside the fault handlers.

Solution: We will create a Global Variable based on the request payload. Global Variables are visible anywhere in the integration, including fault handlers.

Click on the (X) icon on the right-hand toolbar to access Global Variables.
Give the global variable a name. For type, select Object, indicating a complex type.
Selecting Object opens the schema tree. There, select an appropriate element, indicating that our global variable will inherit this element's type.

Next, we will use Data Stitch to assign a value to this variable.

Add a Data Stitch action prior to the invoke to copy the request payload to the Global Variable.
Name the Data Stitch action and click Configure.
In the Variable field (To), type "payload" to use the quick search to find and select the previously created Global Variable. Or, click Browse All Data to open the schema tree; under Sources, drag and drop the Global Variable root element into the Variable field.
With the Variable field populated, the Operation field defaults to Assign, and the Value field appears. Accept the Operation default of Assign. In the Value field (From), use the schema tree to select the desired request payload.
Optionally, click the tool icon to the right to toggle the Variable and Value fields into developer mode to look at the full XPath expression.
We have completed this Stitch action. Click "X" to close the editor and apply the changes.

Next, we will create the logger action to log the Global Variable in the fault handler.

Go to the appropriate fault handler for the invoke.
Add a Logger action.
In the Logger editor, the Global Variable will be shown as an available source. Drag and drop it into the expression. Make sure to also use an appropriate function such as getContentAsString in order to print the text properly.
Save and activate the integration.
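The resulting logger expression looks roughly like the sketch below, where $payload stands for the Global Variable created earlier. The exact function name and prefix shown in your expression builder may differ, so it is safest to pick getContentAsString from the functions palette rather than typing it.

oraext:get-content-as-string($payload)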
Test it against an invoke failure scenario, and you will see the request payload in the ics-flow.log. Example:

[2020-03-09T21:05:15.982-07:00] [integration_server1] [NOTIFICATION] [] [oracle.ics.trace.soa.bpel] [tid: 154] [userId: icsadmin] [ecid: 8b44b372-eefe-4f03-ae9f-ecea6d4d020d-000ee5c4,0] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] [FlowId: 0000N32bZJS9d_HpIsT4if1ULFFC0002bU]  [ICS User Logging]: [Code:CREA_ORDE_BY_REGI_STIT_LOG_PAYL][Version:01.00.0000][Instance:21][Operation:Logger][ActionID:lg0][ActionName:logPayload]: <Account xmlns="http://xmlns.stark.com"><ns23:AccountDetails xmlns:ns23="http://xmlns.stark.com"><ns23:ExternalSystemId>a</ns23:ExternalSystemId><ns23:SystemRowId>a</ns23:SystemRowId><ns23:ExternalSourceSystem>a</ns23:ExternalSourceSystem><ns23:AccountStatus>a</ns23:AccountStatus><ns23:AccountStatusChangeReason>a</ns23:AccountStatusChangeReason><ns23:AccountTypeCode>a</ns23:AccountTypeCode><ns23:AccountNumber>a</ns23:AccountNumber><ns23:ContactId>a</ns23:ContactId><ns23:CurrencyCode>a</ns23:CurrencyCode><ns23:PrimaryOrganization>APAC</ns23:PrimaryOrganization></ns23:AccountDetails></Account>

Congratulations, we have just used Global Variables and Data Stitch to log payloads in case of a fault!


Integration

Use Data Stitch to simplify integrations

In this blog, we will look at a new integration feature, Data Stitch, and show how it can simplify integrations and help us reduce maintenance costs. Data Stitch allows us to make assignments to complex-type variables.

Prerequisite

Enable the following features:
oic.ics.console.integration.stitch-action
oic.ics.console.integration.complex-variables
To enable feature flags, refer to the blog Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 200113.1400.33493.

Use case: We have deployed three instances of the application service in three regions (APAC, EMEA, AMER). Three separate REST connections are used to handle the three separate endpoint URIs and credentials for each region. We have an integration with three invokes to the CreateOrder REST API using the three connections. The payloads to these invokes are identical; the only difference is the connection used.

Maps can be heavy. They may handle cardinality differences between source and target, or involve loops, functions, etc. If we need to build, test, and maintain three maps, it can be costly and error prone. Ideally, we only want to do this mapping once, to improve developer productivity.

Solution: We can reuse the first map to the APAC CreateOrder request, and then leverage Data Stitch to copy this request to the EMEA and AMER CreateOrder requests.

Move the map to APAC CreateOrder outside of the APAC Region condition.
Delete the maps to EMEA CreateOrder and AMER CreateOrder.
Add a Data Stitch action before EMEA CreateOrder to assign the APAC CreateOrder request payload, which is already mapped, to the EMEA CreateOrder request payload.
Give the Data Stitch action a name and click Configure.
In the Variable field (To), click Browse All Data to open the schema tree. Under Sources, select the EMEA_CreateOrder_Request/execute/Account variable and drag and drop it into the Variable field.
With the Variable field populated, the Operation field defaults to Assign, and the Value field appears. Accept the Operation default of Assign. In the Value field (From), use the schema tree to select APAC_CreateOrder_Request/execute/Account.

Optionally, imagine the EMEA request payload is not precisely the same as the APAC request. Suppose the request is identical except that one field needs to be overridden in the case of EMEA. To achieve this, I can add a second Stitch statement to override the specific field inside the Account object. Stitch statements are executed sequentially at runtime.

To do so, I click the "+" icon to add another Stitch statement.
In the Variable field (To), select EMEA_CreateOrder_Request/execute/Account/AccountDetails/ExternalSystemId. Since AccountDetails is an array, we may need to qualify which element of the array is to be used. Click the tool icon to the right to toggle the Expression Box to developer mode for advanced expression editing. In this case, we qualify the expression with AccountDetails[1] to pick up the first record in the account details array.
Accept the Operation default of Assign. In the Value field (From), you could use the schema tree to select the desired value, or just type a string, such as "ABC123".
We have completed this Stitch action. Click "X" to close the editor and apply the changes.

Repeat the steps to create another Stitch action to assign the APAC CreateOrder variable to the AMER CreateOrder variable in the AMER branch. When complete, the integration will look like this. Now, our integration only requires one map, for our first invoke.
We used Data Stitch to copy the payload to our subsequent invokes.  We may have also used Data Stitch to override specific fields for some of the invokes.  Our maintenance costs for this integration have been reduced because in the future we only need to maintain one map instead of 3.  Data Stitch is a valuable tool which can simplify our integrations!
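Conceptually, the two Stitch statements for the EMEA branch boil down to the following (pseudo code summarising the steps above, not literal mapper syntax; the AMER branch gets the same first assignment against its own request variable):

EMEA_CreateOrder_Request/execute/Account = APAC_CreateOrder_Request/execute/Account
EMEA_CreateOrder_Request/execute/Account/AccountDetails[1]/ExternalSystemId = "ABC123"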


Persisting SOA Adapter Customizations

With inputs from Vivek Raj.

The lifetime of any customization done in a file on a server pod is limited to the lifetime of that pod; the changes are not persisted once the pod goes down or is restarted. For example, the configuration below updates `DbAdapter.rar` to create a new connection instance and creates a DataSource CoffeeShop in the Administration Console for it with JNDI name jdbc/CoffeeShopDS.

File location: /u01/oracle/soa/soa/connectors/DbAdapter.rar

<connection-instance>
  <jndi-name>eis/DB/CoffeeShop</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>XADataSourceName</name>
        <value>jdbc/CoffeeShopDS</value>
      </property>
      <property>
        <name>DataSourceName</name>
        <value></value>
      </property>
      <property>
        <name>PlatformClassName</name>
        <value>org.eclipse.persistence.platform.database.Oracle10Platform</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>

If you need to persist the customizations for any of the adapter files under the SOA Oracle home in the server pod, follow one of the methods below.

Method 1: Customize the adapter file using the Administration Console

Log in to the WebLogic Administration Console: Deployments -> ABC.rar -> Configuration -> Outbound Connection Pools.
Create the new connection that is required: New -> provide the connection name -> Finish.
Go back to this new connection, update the required properties under it, and save.
Go back to Deployments, select ABC.rar, and click Update. This step asks for the `Plan.xml` location. By default this location is under `${MW_HOME}/soa/soa`, which is not on the Persistent Volume. Hence, when you specify the location, provide the domain's PV location, such as `{DOMAIN_HOME}/soainfra/servers`. The `Plan.xml` will then be persisted under this location for each Managed Server.

Method 2: Customize the adapter file on the worker node

Copy the `ABC.rar` from the server pod to a PV path:
Command: kubectl cp <namespace>/<SOA Managed Server pod name>:<full path of .rar file> <destination path inside PV>
Example: kubectl cp soans/soainfra-soa-server1:/u01/oracle/soa/soa/connectors/ABC.rar ${DockerVolume}/domains/soainfra/servers/ABC.rar
Unrar the ABC.rar and update the new connection details in the `weblogic-ra.xml` file under META-INF.
In the WebLogic Administration Console, under Deployments, select `ABC.rar` and click Update. Here, select the `ABC.rar` path as the new location, which is `${DOMAIN_HOME}/user_projects/domains/soainfra/servers/ABC.rar`, and update.
Verify that the `plan.xml` or the updated .rar is persisted in the PV.


Expose T3 protocol for managed servers in SOA Domain on Kubernetes

T3 ports for Managed Servers in Oracle SOA deployed in a WebLogic Kubernetes Operator environment are not available by default. This document provides the steps to create a T3 channel and the corresponding Kubernetes Service to expose the T3 protocol for Managed Servers in a SOA domain.

Exposing SOA Managed Server T3 Ports

With the following steps you will create a T3 port at 30014 on all Managed Servers for soa_cluster with the details below:

Name: T3Channel_MS
Listen Port: 30014
External Listen Address: <Master IP Address>
External Listen Port: 30014

Note: In case you are using a different NodePort to expose T3 for the Managed Servers externally, use the same value for "External Listen Port".

Step 1: Create T3 channels for Managed Servers

WebLogic Server supports several ways to configure a T3 channel. The steps below describe how to create the T3 channel using the WebLogic Server Administration Console or using a WLST script.

Method 1: Using the WebLogic Server Administration Console

1. Log in to the WebLogic Server Administration Console and obtain the configuration lock by clicking Lock & Edit.
2. In the left pane of the Console, expand Environment and select Servers.
3. On the Servers page, click soa_server1 and go to the Protocols page.
4. Select Channels and then click "New".
5. Enter the Network Channel Name, say "T3Channel_MS", select Protocol as "t3", and click "Next".
6. Enter Listen Port as "30014", External Listen Address as "<Master IP>", and External Listen Port as "30014". Leave "Listen Address" empty. Click "Finish" to create the Network Channel for soa_server1.
7. Perform steps 3 to 6 for all Managed Servers in soa_cluster. When creating the Network Channel for the other Managed Servers, make sure to use the same values for all parameters, including "Network Channel Name".
8. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.

These changes do not require any server restarts. Once the T3 channels are created with the port, say 30014, proceed with creating the Kubernetes Service to access this port externally.

Method 2: Using a WLST script

The following steps create a custom T3 channel named T3Channel_MS for all Managed Servers, with a listen port listen_port and a paired public port public_port.
Create t3config_ms.py with the content below:

host = sys.argv[1]
port = sys.argv[2]
user_name = sys.argv[3]
password = sys.argv[4]
listen_port = sys.argv[5]
public_port = sys.argv[6]
public_address = sys.argv[7]
managedNameBase = sys.argv[8]
ms_count = sys.argv[9]

print('custom host : [%s]' % host)
print('custom port : [%s]' % port)
print('custom user_name : [%s]' % user_name)
print('custom password : ********')
print('public address : [%s]' % public_address)
print('channel listen port : [%s]' % listen_port)
print('channel public listen port : [%s]' % public_port)

connect(user_name, password, 't3://' + host + ':' + port)
edit()
startEdit()
for index in range(0, int(ms_count)):
    msIndex = index + 1
    cd('/')
    name = '%s%s' % (managedNameBase, msIndex)
    cd('Servers/%s/' % name)
    create('T3Channel_MS', 'NetworkAccessPoint')
    cd('NetworkAccessPoints/T3Channel_MS')
    set('Protocol', 't3')
    set('ListenPort', int(listen_port))
    set('PublicPort', int(public_port))
    set('PublicAddress', public_address)
    print('Channel T3Channel_MS added ...for ' + name)
activate()
disconnect()

Copy t3config_ms.py into the Domain Home (e.g., /u01/oracle/user_projects/domains/soainfra) of the Administration Server pod (e.g., soainfra-adminserver in the soans namespace):

$ kubectl cp t3config_ms.py soans/soainfra-adminserver:/u01/oracle/user_projects/domains/soainfra

Execute wlst.sh t3config_ms.py by exec'ing into the Administration Server pod with the parameters below:

host: <Master IP Address>
port: 30012                   # Administration Server T3 port
user_name: weblogic
password: Welcome1            # weblogic password
listen_port: 30014            # New T3 port for Managed Servers
public_port: 30014            # Kubernetes NodePort which will be used to expose the T3 port externally
public_address: <Master IP Address>
managedNameBase: soa_server   # Managed Server base name. For osb_cluster this will be osb_server
ms_count: 5                   # Number of configured Managed Servers

Command:
$ kubectl exec -it <Administration Server pod> -n <namespace> -- /u01/oracle/oracle_common/common/bin/wlst.sh <domain_home>/t3config_ms.py <master_ip> <t3 port on Administration Server> weblogic <password for weblogic> <t3 port on Managed Server> <t3 nodeport> <master_ip> <managedNameBase> <ms_count>

Sample Command:
$ kubectl exec -it soainfra-adminserver -n soans -- /u01/oracle/oracle_common/common/bin/wlst.sh /u01/oracle/user_projects/domains/soainfra/t3config_ms.py xxx.xxx.xxx.xxx 30012 weblogic Welcome1 30014 30014 xxx.xxx.xxx.xxx soa_server 5

Step 2: Create a Kubernetes Service to expose T3 port 30014 as a NodePort Service

Create t3_ms_svc.yaml with the contents below to expose T3 at Managed Server port 30014 for domainName and domainUID "soainfra" and cluster "soa_cluster":

apiVersion: v1
kind: Service
metadata:
  name: soainfra-soa-cluster-t3-external
  namespace: soans
  labels:
    weblogic.clusterName: soa_cluster
    weblogic.domainName: soainfra
    weblogic.domainUID: soainfra
spec:
  type: NodePort
  selector:
    weblogic.clusterName: soa_cluster
    weblogic.domainName: soainfra
    weblogic.domainUID: soainfra
  ports:
  - name: t3port
    protocol: TCP
    port: 30014
    targetPort: 30014
    nodePort: 30014

Create the NodePort Service for port 30014 with the command:

$ kubectl create -f t3_ms_svc.yaml

Now you can access T3 for the Managed Servers with the URL t3://<master_ip>:30014
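To sanity-check the setup, you can confirm the NodePort Service exists and attempt a WLST connection over the new channel, reusing the sample names, credentials, and port from this post:

$ kubectl get service soainfra-soa-cluster-t3-external -n soans
$ /u01/oracle/oracle_common/common/bin/wlst.sh
wls:/offline> connect('weblogic', 'Welcome1', 't3://<master_ip>:30014')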


Integration

Introducing the Box Adapter in Oracle Integration

As many of you might already know, at Oracle OpenWorld (OOW) 2019 a few months ago, we announced our partnership with Box to empower our customers to connect their cloud and on-premises Oracle and third-party applications with Box via Oracle Integration (OIC). Read the announcement from OOW here.

Box offers enterprises content management as a cloud service, enabling organizations to share files, collaborate between team members, and manage the lifecycle of content securely. As a cloud service, Box can scale as its customers' needs grow in size and depth of complexity, including attaching custom metadata to content and watermarking content for review.

Today, we are pleased to announce the availability of the Box Adapter (in preview mode), which offers both inbound and outbound integration with Box on the Oracle Integration platform. Integration designers can now use this adapter in conjunction with the vast array of other adapters that provide connectivity to various technologies, cloud services, and on-premises applications.

The Box adapter supports outbound operations to Box that can manage files and folders as well as work with metadata for these artifacts. For managing files and folders, the adapter supports working with folders and their contents, working with files and uploading or downloading files, acquiring and updating shared links, and managing watermarks on files. For managing metadata, the adapter supports lifecycle management (creating, retrieving, updating, deleting) of metadata for files and folders. The adapter can also read the custom metadata templates that a Box user may have created for their enterprise use.

The Box adapter also supports inbound notifications from Box in the form of webhook notifications. During configuration, the adapter allows the Oracle Integration developer to choose the events they would like to receive for a particular file or folder.

The Box adapter uses Box's OAuth 2.0 authentication and authorization framework to obtain the proper access token to execute the API securely. For more information about OAuth 2.0 support in Box and the steps to register an app to enable this authentication route, please visit: https://developer.box.com/en/guides/authentication/.

Of particular note, the Box adapter secures webhook notifications by verifying them against the signature keys that are used to sign every notification from Box, ensuring that each notification is authentic. To ensure no downtime, Box employs a two-key approach for signing webhook notifications: verifying either signature is sufficient to confirm the authenticity of the message. Thus, one key can be regenerated and changed in Oracle Integration while the remaining key still works, and the other key can then be updated once the first has been, ensuring there is no downtime in notification verification. As usual, these credentials are saved securely in Oracle Integration.

Full details are in our Oracle Integration technical docs here.


Integration

A Simple Guide to Connect to a Private FTP Server using FTP adapter

You can now integrate with an FTP server even when it is in a private network and not accessible publicly. This is made possible by the latest feature where a Connectivity Agent can be configured for use with the FTP adapter. The FTP adapter supports connectivity to:

FTP/SFTP hosted on-premises - through a connectivity agent
FTP/SFTP hosted in the cloud - without a connectivity agent, as before

Connection Properties: Provide the connection property values:
Enter the FTP/SFTP host address and port.
If using a secure FTP server, select Yes for 'SFTP Connection'; otherwise select No.

Security: Select one of the security policies:
FTP Server Access Policy: for username/password authentication.
FTP Public Key Authentication: as the name suggests, for public key authentication.
FTP Multi Level Authentication: to authenticate using both username/password and a public key.

Configure Connectivity Agents: If the FTP server is not directly accessible from Oracle Integration, for example if it is on-premises or behind a firewall, a Connectivity Agent needs to be configured for this connection. This can be done using the 'Configure Agents' section. However, a Connectivity Agent may not be required when the FTP server is publicly accessible. To learn more about the Connectivity Agent, check out these posts: New Agent Simplifies Cloud to On-premises Integration, and The Power of High Availability Connectivity Agent.

File size support in the FTP adapter with the agent:
1) With schema - if using a schema for transformation, the file size limit is 10 MB.
2) Without schema - the file size limit is 1 GB. For example, the Download File operation does not support a schema and can handle a file up to 1 GB. This may take time considering the network latency between the Connectivity Agent and OIC.

Limitations when the FTP adapter is configured with the connectivity agent:
PGP encryption/decryption.
Unzip during the Download File operation.
You could use the Stage File activity for these purposes if needed.
The FTP adapter is not supported as a trigger in the Basic Integration template.


Deploying SOA Composites from Oracle JDeveloper to Oracle SOA in WebLogic Kubernetes Operator Environment

Inputs provided by Ashageeta Rao and Vivek Raj.

This post provides the steps to deploy Oracle SOA composites/applications from Oracle JDeveloper (running outside the Kubernetes network) to a SOA instance in a WebLogic Kubernetes Operator environment.

Prerequisites

Note: Replace entries inside <xxxx> with values specific to your environment.

Get the Kubernetes cluster master address and verify the T3 port which will be used for creating application server connections. You can use the kubectl command below to get the T3 port:
kubectl get service <domainUID>-<AdministrationServerName>-external -n <namespace> -o jsonpath='{.spec.ports[0].nodePort}'

JDeveloper needs to access the Managed Servers during deployment. In a WebLogic Kubernetes Operator environment, each Managed Server is a pod and cannot be accessed directly by JDeveloper. Hence we need to configure the Managed Servers' reachability:

Decide on the external IP address to be used to configure access to the Managed Servers (the SOA cluster). The master or a worker node IP address can be used. If you decide to use some other external IP address, it needs to be accessible from the Kubernetes cluster. Here we will be using the Kubernetes cluster master IP.

Get the pod names of the Administration and Managed Servers (i.e. "<domainUID>-<server name>"), which will be mapped in /etc/hosts.

Update /etc/hosts (or, on Windows, C:\Windows\System32\Drivers\etc\hosts) on the host where JDeveloper is running with the entries below:
<Master IP> <Administration Server pod name>
<Master IP> <Managed Server1 pod name>
<Master IP> <Managed Server2 pod name>

Get the Kubernetes service name of the SOA cluster so that we can expose it externally with the master IP (or external IP):
$ kubectl get service <domainUID>-cluster-<soa-cluster> -n <namespace>

Create a Kubernetes service to expose the SOA cluster service ("<domainUID>-cluster-<soa-cluster>") externally, with the same port as the Managed Server:
$ kubectl expose service <domainUID>-cluster-<soa-cluster> --name <domainUID>-<soa-cluster>-ext --external-ip=<Master IP> -n <namespace>

NOTE: The Managed Server T3 port is not exposed by default, and opening it carries a security risk, as the authentication method here is based on a userid/password. It is not recommended to do this on production instances.

To deploy SOA composites/applications from Oracle JDeveloper, the Administration Server should have been configured to expose a T3 channel using the exposeAdminT3Channel setting when creating the domain; the matching T3 service can then be used to connect. By default, when exposeAdminT3Channel is set, the WebLogic Kubernetes Operator environment exposes a NodePort for the T3 channel of the NetworkAccessPoint at 30012 (use t3ChannelPort to configure a different value).

Create an Application Server Connection in JDeveloper

Create a new application server connection in JDeveloper.
In the configuration page, provide the WebLogic Hostname as the Kubernetes master address.
Update the Port to the T3 port (default is 30012) obtained in prerequisite step 1.
Enter the WebLogic Domain, i.e. the domainUID.
Test the connection; it should succeed without any errors.

Deployment of SOA Composites to SOA Using JDeveloper

In JDeveloper, right-click the SOA project you want to deploy and select the Deploy menu. This invokes the deployment wizard.
In the deployment wizard, select the application server connection that was created earlier.
If the prerequisites have been configured correctly, the next step looks up the SOA servers and shows the Managed Servers for deploying the composite. Using the application server connection, the Managed Servers (SOA cluster) are discovered and listed on the Select Servers page. Select the SOA cluster and click Next. On the Summary page, click Finish to start deploying the composites to the SOA cluster. Once deployment is successful, verify with the soa-infra URL to confirm the composites are deployed on both servers.
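For example, with the sample soainfra domain used elsewhere in this series, the /etc/hosts entries from the prerequisites might look like the lines below. The pod names are illustrative; use the names returned by kubectl get pods in your namespace.

<Master IP>  soainfra-adminserver
<Master IP>  soainfra-soa-server1
<Master IP>  soainfra-soa-server2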


One Stop Solution for OIC Certificate Management

Oracle Integration Certificate Management empowers administrators to manage all their certificates and PGP keys in one place. The PGP keys are used in Stage File for encryption and decryption.

Prerequisite

Enable the following feature: oic.suite.settings.certificate (suite-level certificate landing page)
To enable feature flags, refer to the blog on Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 190924.1600.31522.

Simplified and Progressive User Experience

OIC provides the user with an easy tool for managing the life cycle of certificates, through the Certificates page under the Settings menu.

Sorting and filter capabilities:
Sort by: allows sorting by expiry date in ascending or descending order (Expiring Soon, Expiring Later).
Filter by: Status, Type, Category, and Installed By. By default, the table is loaded with the Installed by User filter.
Progressive UI in the Certificates page.
Certificate details with better grouping of information.

Key Functionalities

All functionalities on the page are displayed in a list view, along with seamless interaction with the drawer.

Types of certificates:
X.509 (TLS) – An SSL/TLS X.509 certificate is a digital file usable for Secure Sockets Layer (SSL) or Transport Layer Security (TLS). The certificate can assist with authenticating and verifying the identity of a host or site, and thus enables Oracle Integration to connect with an external service.
  Identity (e.g., .jks) - An identity certificate is a keystore which can contain various certificates with passwords.
  Trust (e.g., .crt or .cert)
SAML - SAML refers to the XML variant language used to encode information. It is a Message Protection certificate which has SAML token support.
PGP - Pretty Good Privacy (PGP) is used for signing, encrypting, and decrypting texts.
  Private - Content can be decrypted with a private PGP key.
  Public - Content can be encrypted with a public PGP key.

Certificate Upload - Step by Step Guide:
Click Upload in the top-right corner. A drawer opens up with the details to fill in.
Enter an alias name which identifies the certificate.
Give a brief description (optional) of the certificate you are uploading.
Select the type of certificate you want to upload. You can choose from the list: X.509, SAML, and PGP.
Choose the category of the certificate: for X.509 → Trust or Identity, for SAML → Message Protection, and for PGP → Public or Private.
Choose a file from your local system to upload. Please note this can be left blank in the case of PGP, which will create a draft certificate. For PGP upload and usage refer to: https://blogs.oracle.com/integration/using-stage-file-readwrite-operation-to-encryptdecrypt-files
For an Identity certificate (refer to the screenshot below): In the Alias Name field, enter a key name; in the case of multiple keys, enter them as comma-separated strings. For Key Passwords, give the corresponding set of comma-separated passwords for the keys mentioned in the alias name, in the same order. Provide the password for the uploaded keystore.

Certificate Table:
Name: Alias name provided for the certificate. In the case of an Identity certificate, it is the key name.
Type: Type of the certificate uploaded (X.509, SAML, PGP).
Category: Uploaded certificate category (Trust, Identity, Message Protection, Public, Private).
Status: Status of the certificate; it can be either Draft or Configured.
Certificate Expiry Tag: Displays the time in which the certificate will expire. Expired certificates are highlighted in red.


Configurators - One stop solution for all your dependency configuration needs

For those of you already familiar with the blog on Integration dependency configuration, we have something better to offer. The previous blog talks about replacing a connection dependency in an integration with another connection resource of the same role using REST APIs. We know how tedious handling REST APIs can get, so we have now come up with a snazzy UI to do the same operation. This feature is available at the integration level and has also been extended to work at the package level. Let's take a look at both the Integration Configurator and the Package Configurator in detail in the upcoming sections.

Integration Configurator

Prerequisite
Enable the following feature: oic.ics.featureflag.spa.designer (Integration designer pages). To enable feature flags, refer to the blog on Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 191110.1400.32380.

Key Functionalities
As mentioned earlier, the Integration Configurator is a friendly UI built on top of the existing REST APIs (mentioned in the previous blog) that helps users replace the dependencies in an integration, with some added functionality that elevates the user experience. The functionalities are:
View all the dependent resources of an integration on one page. Dependent resources include Connections, Lookups, Libraries, and PGP keys.
Edit all the dependent resources of an integration from one page. Clicking Edit takes you to the corresponding edit page of the resource, for example the Connection edit page or the Lookup edit page. The Edit action is supported for all resources, i.e. Connections, Lookups, Libraries, and PGP keys.
Replace resources in the integration. A dependent resource can only be replaced by another resource of the same type whose status is 'Configured'; replacing with a Draft resource is not allowed. The Replace action is supported for Connections and PGP keys.

Accessing the Integration Configurator
There are two ways to launch the Integration Configurator:
During import of the integration: while importing an integration, if you wish to configure the resources used by the integration, click the 'Import and Configure' button in the Import Integration popup.
From the Actions menu: click the 'Configure' action from the actions menu on the integration landing page, as shown below.

The Configure Integration page comes up with information about all the dependent resources. A detailed explanation of the numbered items in the image:
1. Name of the connection/lookup/library resource used in the integration.
2. If it is a connection, you can replace it with any other configured connection in the system that has the same connection role (Trigger/Invoke/Trigger and Invoke) as the existing connection. In the case of a PGP key, it has to be the same type of PGP key; for example, a public PGP key can only be replaced by another public PGP key, not a private PGP key.
3. Other integrations using this resource in the entire instance.
4. Edit action for the resource; clicking it navigates to the edit page of the resource, so you can configure it manually instead of replacing it.

Package Configurator
If you have understood the Integration Configurator, mastering the Package Configurator will be a walk in the park. The Package Configurator has the same features as the Integration Configurator, in addition to the package structure.
Prerequisite
Enable the following features: oic.intg.uiapi.package.configurator (enables the package configurator feature in the UI and REST API) and oic.ics.featureflag.spa.designer (Integration designer pages). To enable feature flags, refer to the blog on Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 200113.1400.33491.

Key Functionalities
With the help of the Package Configurator the user will be able to:
View the dependent resources present in any and all of the integrations of a package. Dependent resources include Connections, Lookups, Libraries, and PGP keys.
View the related information of each dependent resource, which includes status and usage information. Status is the status of the resource (Draft, Configured, etc.); the usage information tells us how many integrations within the package are using this particular resource.
Configure the dependent resources present in any and all of the integrations of a package. Configuring a dependent resource implies two actions: editing the resource and replacing the resource with another resource.
Edit: clicking Edit takes you to the corresponding edit page of the resource, for example the Connection edit page or the Lookup edit page. The Edit action is supported for all resources, i.e. Connections, Lookups, Libraries, and PGP keys.
Replace: replacing a resource replaces it across all the integrations in the package. A dependent resource can only be replaced by another resource of the same type whose status is 'Configured'; replacing with a Draft resource is not allowed. The Replace action is supported only for Connections and PGP keys.

Accessing the Package Configurator
You can access the Package Configurator either:
At the time of import: in the Import Package popup, the user now has an 'Import and Configure' option. Clicking it imports the package and then redirects you to the Package Configurator page.
From the actions menu: from the package landing page, the user can open the actions menu and select the Configure option. This redirects the user to the Package Configurator page.

Let's take a look at the example below and understand the details of using the Package Configurator. The Package Configurator can be broadly described by these three unique functionalities:

List of dependent resources: all the resources used in the package samples.oracle.ups.package are listed below. We can see that there are 5 Connections, 3 Lookups, and 1 Library being used. We can also see the status of each resource.

User actions supported for each row: on hover of each resource, the user is presented with two options, Edit and Replace (for connections and certificates; only Edit for lookups and libraries). Clicking Edit takes the user to the corresponding resource's edit page. Clicking Replace on a connection row shows a popup containing connections that have the same role as the original connection. The user can choose the connection to replace it with. Upon choosing, the change is kept in memory so the user can move on to making other changes; the changes are saved only when the user clicks the Done button at the top right.

Details section for each row: you can get more information about any of these dependent resources by clicking the expand button corresponding to the row of the dependent resource.
It will reveal the details section, which contains the information captured in the screenshot. If you have opted to replace a resource, that information is also persisted in the details section under the header 'The connection for the integrations in this package was changed from'. The user also has the option to revert the replaced resource to the original by clicking the 'Revert' link below that header. This has been a brief explanation to help you use the two Configurators. Hope you enjoy using them as much as we enjoyed building them!
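For teams that still prefer scripting what the Configurator does in the UI, the sketch below shows the general shape of such a call against the Integrations REST API. The response field "dependencies" and the exact payload semantics are illustrative assumptions, not a documented contract; check the REST API referenced in the previous blog for the real resource names.

```python
# Illustrative sketch only: retrieving an integration and printing whatever
# dependency information the API exposes for it. The identifier format, the
# "dependencies" field and the host are placeholders/assumptions; consult the
# OIC REST API docs referenced in the previous dependency-configuration blog.
import base64
import json
import urllib.request
from urllib.parse import quote

HOST = "https://myoic.example.com"          # placeholder OIC host
INTEGRATION_ID = "HELLO_WORLD|01.00.0000"   # placeholder identifier|version
USER, PASSWORD = "oic.user", "secret"

def oic_get(path):
    req = urllib.request.Request(HOST + path)
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

integration = oic_get("/ic/api/integration/v1/integrations/" + quote(INTEGRATION_ID, safe=""))
print(integration.get("name"), integration.get("status"))
print(json.dumps(integration.get("dependencies", {}), indent=2))
```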


Accelerate API Integration With the Power of OpenAPI

The OpenAPI Specification defines a standard, programming language-agnostic interface description for REST APIs. We are pleased to announce OpenAPI support in the Oracle Integration Cloud REST adapter. This means that every OIC integration flow with a REST trigger will publish an OpenAPI document to describe its metadata. This machine-readable description of the API allows interactive API explorers, as well as developers, to consume OIC integration flows with ease. On activation of an integration flow, the metadata link will show a new option to display the OpenAPI document in addition to the Swagger URL. We are also introducing an interactive OpenAPI explorer in the Oracle Integration Cloud REST adapter wizard that helps integration developers explore and consume APIs described in OpenAPI format with a few clicks. This option can be selected to provide an OpenAPI 1.0/2.0 (a.k.a. Swagger) as well as an OpenAPI 3.0 spec.

Steps to consume an API described in OpenAPI:
Create a REST Adapter connection with the Invoke role, select the new option, and provide the link to the OpenAPI document in the 'Connection URL'.
Use this connection within an integration flow. The REST adapter wizard will show an API explorer based on the OpenAPI description provided in the connection.
Browse and select the required path and operation to complete the wizard.

* This feature is currently in controlled availability. If you are eager to try it out, enable the feature with id oic.cloudadapter.adapter.rest.openapiurl using this blog. OpenAPI is a rich specification, and currently some of its constructs cannot be consumed. Please consult the Oracle documentation for information about the constraints.
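As a quick sanity check before pointing the REST adapter at an OpenAPI document, you can download it and list its paths and operations. A minimal sketch, assuming the document is reachable with basic authentication; the URL shown is a placeholder for the OpenAPI link exposed by your activated integration or target API.

```python
# Sketch: download an OpenAPI (2.0 or 3.0) document and list its paths/operations.
# OPENAPI_URL is a placeholder for the OpenAPI link you will later paste into the
# REST Adapter connection's 'Connection URL'.
import base64
import json
import urllib.request

OPENAPI_URL = "https://api.example.com/openapi.json"  # placeholder
USER, PASSWORD = "api.user", "secret"

req = urllib.request.Request(OPENAPI_URL)
req.add_header("Authorization",
               "Basic " + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode())
with urllib.request.urlopen(req) as resp:
    spec = json.load(resp)

print("Title  :", spec.get("info", {}).get("title"))
print("Version:", spec.get("openapi") or spec.get("swagger"))
for path, operations in spec.get("paths", {}).items():
    for method, details in operations.items():
        # Skip path-level keys such as "parameters" that are not HTTP operations.
        if method.lower() in {"get", "post", "put", "patch", "delete", "head", "options"}:
            print(f"{method.upper():7} {path} - {details.get('summary', '')}")
```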


Integration

Testing REST trigger-based Integrations in OIC Console

Test Integration feature allows users to test a non-scheduled integration with REST endpoint by invoking it directly from OIC console without relying on any third party software. Prerequisite for Test Integration Feature Enable feature flag: oic.ics.featureflag.spa.designer oic.ics.console.integration.invoke-integration-support The minimum Oracle Integration version required for the feature is 191110.1400.32380 How it works Activate the Integration. Click on "How to Run" link. A popup will be displayed as below. Click on Test link to go to Test Integration page. Test Integration page will have 3 sections: Operation, Request, and Response. Operation and Request section will have the endpoint's metadata populated. Operation Operation section contains Operation option(if Integration is configured to have multiple operations) along with HTTP method and relative URI (for the selected operation in case of multiple operations). User can choose any of the available operations. Request Request section will have following fields: URI Parameters, Headers, Body, & cURL URI Parameters field will have the list of expected path(or template)  and  query parameters. Headers field shows all the custom headers including Accept & Content-Type based on the Integration configuration. Input body can be provided in the Body field which will have a placeholder describing the expected body type. User can copy the equivalent curl command from the cURL field. Curl command will be generated based on the endpoint's metadata and input provided by the user. User can click on the Test button to invoke the integration and check the Response section for response details. Banner message will be displayed once the Integration(endpoint) is invoked. Then, user can check the response details in the Response section. Response Response section will have the response body(if any) and Headers displayed in Body, Headers field respectively along with the Http Status, and Instance Id(if any). User can click on the Instance Id to go to Tracking details page for monitoring the progress of generated instance.  
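The cURL field already gives you a ready-made command, but the same request is easy to reproduce from a script once the integration is activated. A minimal sketch, assuming a POST-based REST trigger; the endpoint path, query parameter, and body below are placeholders for whatever the Operation and Request sections show for your integration.

```python
# Sketch: invoke an activated REST-trigger integration outside the console,
# mirroring what the Test Integration page (or its generated cURL command) does.
# URL, query parameter and body are placeholders; copy the real values from the
# Operation and Request sections of the Test Integration page.
import base64
import json
import urllib.request

URL = "https://myoic.example.com/ic/api/integration/v1/flows/rest/MY_FLOW/1.0/orders?status=shipped"
USER, PASSWORD = "oic.user", "secret"

body = json.dumps({"orderId": "12345"}).encode()
req = urllib.request.Request(URL, data=body, method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Authorization",
               "Basic " + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode())

with urllib.request.urlopen(req) as resp:
    print("HTTP status:", resp.status)
    print("Response   :", resp.read().decode())
```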


Why and How to Integrate Oracle Policy Automation with Oracle Integration

Oracle Integration has recently introduced new functionality to extend its connection capabilities. I’m especially talking about the enhanced Oracle Policy Automation adapter, which is often used in integration projects when SaaS applications need to be extended. The Oracle Policy Automation (OPA) adapter is available in Oracle Integration to address different scenarios, allowing OPA decisions to be invoked at any point in an integration flow. For example, when a Service Cloud incident is created for a medical device manufacturer (for instance, a hearing aid), an integration instance can be triggered and OPA used to find out what to do with this particular type of incident and to route the managed information properly. The integration layer (in this case OIC) performs all the necessary actions: saving the decision somewhere, invoking other processes or web services, or pushing data to multiple cloud and/or on-premises applications. By using Oracle Integration we can easily integrate the OPA decision logic into any enterprise application without the need to build a custom connector. Some use cases where OPA is used in an integration scenario:
Auto-triage incidents to ensure service-level agreements are met.
Calculate benefit payments using data stored in a legacy system.
Recalculate leave entitlements when regulations change.
Calculate complex sales commissions.

The Oracle Integration adapter today enables bidirectional communication, allowing inbound and outbound communication patterns. An Oracle Policy Automation (OPA) adapter is now available in Oracle Integration (OIC) to:
Allow OPA web interviews to trigger Oracle Integration integrations as the endpoints for data operations.
Allow OPA decision assessments to be invoked at any point in an integration.
After accessing your Oracle Integration instance, you can select the OPA adapter from the palette and configure it with the role type required by your project (invoke, trigger, or both). Once the connection is created, you can reuse it in every integration flow.


Integration

How to Encrypt/Decrypt Files in OIC

Encrypt/Decrypt capabilities in Stage File
You may have a scenario where the requirement is to retrieve an encrypted file from an sFTP server and send it to an external REST endpoint in encrypted or unencrypted mode, with additional capabilities such as processing in the middle. The Stage File action in the integration canvas supports various file operations (list/read/write/zip/unzip). The existing OIC feature (oic.ics.stagefile.pgp.key.support) enables a decrypt option while reading an entire file and an encrypt option while writing a file. That feature is useful for processing files up to 10 MB in size and does not support decryption when reading a file in segments. For more details, see the blog: Using Stage File Read/Write operation to encrypt/decrypt files. This blog explains the new feature (oic.ics.stagefile.firstclass.encrypt-decrypt), which allows an OIC user to encrypt or decrypt a file of up to 1 GB in size.

Prerequisite
Enable the following features:
oic.suite.settings.certificate (allows the user to manage the certificate life cycle in OIC)
oic.ics.stagefile.firstclass.encrypt-decrypt (allows the user to encrypt or decrypt a large file in the Stage File action)
To enable feature flags, refer to the blog on Enabling Feature Flags in Oracle Integration. The minimum Oracle Integration version required for the feature is 191216.1400.33050.

Step By Step Guide
Upload PGP public/private keys: refer to the "To upload PGP Keys" section in the blog mentioned above (Using Stage File Read/Write operation to encrypt/decrypt files).

To configure the Stage File Encrypt File operation with a PGP key to encrypt a file:
Drag and drop the Stage File action. A popup wizard opens where you provide a value for the field "What do you want to call your action?".
Click Next and select "Encrypt File" as the Stage File operation.
Specify the File Reference - click the Expression Builder icon to build an expression to specify the file reference.
Specify the File Name - click the Expression Builder icon to build an expression to specify the file name.
Specify the Output Directory - click the Expression Builder icon to build an expression to specify the output directory.
Select the PGP key to encrypt the file - select the PGP public key to encrypt the file. This is the PGP public key you uploaded at the beginning.
Click Next to display the summary page, then click Done.

To configure the Stage File Decrypt File operation with a PGP key to decrypt a file:
Drag and drop the Stage File action. A popup wizard opens where you provide a value for the field "What do you want to call your action?".
Click Next and select "Decrypt File" as the Stage File operation.
Specify the File Reference - click the Expression Builder icon to build an expression to specify the file reference.
Specify the File Name - click the Expression Builder icon to build an expression to specify the file name.
Specify the Output Directory - click the Expression Builder icon to build an expression to specify the output directory.
Select the PGP key to decrypt the file - select the PGP private key to decrypt the file.
Click Next to display the summary page, then click Done.

Samples
Stage Encrypt File Integration to encrypt a file (IAR). This integration:
Encrypts and writes the file to the stage location using the Stage Encrypt File operation with the PGP public key.
Writes the encrypted file to the output directory from the stage location.
Stage Decrypt File Integration to decrypt an encrypted file (IAR). This integration:
Reads and decrypts the downloaded file using the Stage Decrypt File operation with the PGP private key.
Writes the decrypted file to the output directory from the stage location.
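To produce realistic test files for these flows, you can encrypt and decrypt locally with GnuPG before involving the integration at all. A minimal sketch driving the gpg command line from Python; it assumes gpg is installed and that a key pair with the user id 'oic-demo' already exists in your local keyring, and the file names are placeholders.

```python
# Sketch: create an encrypted test file with GnuPG so the Stage File
# Decrypt operation has something realistic to work on.
# Assumes the "gpg" CLI is installed and a key pair with user id "oic-demo"
# (with an encryption subkey) already exists in the local keyring.
import subprocess

PLAIN = "orders.csv"          # placeholder input file
ENCRYPTED = "orders.csv.gpg"  # what you would place in the sFTP input directory

# Encrypt with the public key (mirrors the Stage File "Encrypt File" operation).
subprocess.run(
    ["gpg", "--batch", "--yes", "--trust-model", "always",
     "--recipient", "oic-demo", "--output", ENCRYPTED, "--encrypt", PLAIN],
    check=True,
)

# Decrypt again locally to verify the round trip
# (mirrors the Stage File "Decrypt File" operation).
subprocess.run(
    ["gpg", "--batch", "--yes", "--output", "orders-roundtrip.csv",
     "--decrypt", ENCRYPTED],
    check=True,
)
print("Round trip complete")
```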


Using Stage File Read/Write operation to encrypt/decrypt files

You may have a scenario where the requirement is to retrieve an encrypted file from sFTP server and send that to external REST endpoint in encrypted/unencrypted mode with additional capabilities such as processing in the middle.  The new feature makes it easy to configure PGP keys in Stage File Read/Write operation to decrypt/encrypt file up to 10 MB in size.   Prerequisite Enable following features: oic.suite.settings.certificate  (It will allow user to manage certificate life cycle in OIC) oic.ics.stagefile.pgp.key.support (It will allow user to upload and delete PGP keys in stage file) To enable feature flags - Refer to Blog on Enabling Feature Flags in Oracle Integration The minimum Oracle Integration version required for the feature is 190904.0200.31130   Step By Step Guide Public Key is used for Encryption and Private Key for decryption. In order to use encrypt/decrypt files we have to upload PGP keys in OIC. To upload PGP Keys: From OIC Home page → Settings → Certificates page Click Upload at the top of the page. In the Upload Certificate dialog box, select the certificate type. Each certificate type enables Oracle Integration Cloud to connect with external services. PGP: Use this option for bringing PGP Certificates. Public Key: Enter Alias Name and Description Select Type as PGP Select Category as Public Select PGP File, Click Browse and select the public key file to be uploaded Select ASCII-Armor Encryption Format Select Cipher Algorithm Click Upload. Private Key: Enter Alias Name and Description Select Type as PGP Select Category as Private Select PGP File, Click Browse and select the private key file to be uploaded Enter the PGP Private Key Password of the private key being imported. Click Upload.   You can download the encrypted file to staged location using FTP Download File operation. To configure FTP Adapter Download File operation: Select Download File. Specify the input directory and download directory path(this path will be the input directory for stage read file).   You can then use Stage File action Read File operation to decrypt the encrypted file so it can be read and transformed. To configure Stage Read Entire File operation with PGP Key to decrypt file: Select Read Entire File Configure File Reference - Select Yes Specify the File Reference - Click the Expression Builder icon to build an expression to specify the file reference. Decrypt - Check this option to decrypt the file (Use Decrypt Check Box to enable PGP selection) Select PGP Key - Select the PGP Private Key to decrypt the file   After the transformation, you can use Stage File action Write File operation to re-encrypt it. To configure Stage Write file operation with PGP Key to encrypt file: Select Write File Specify the File Name - Click the Expression Builder icon to build an expression to specify the file name. Specify the Output Directory - Click the Expression Builder icon to build an expression to specify the output directory. Encrypt - Check this option to encrypt the file (Use Encrypt Check Box to enable PGP selection) Select PGP Key - Select the PGP Public Key to encrypt the file   Encrypted file can be sent to an external endpoint or sFTP server. To configure FTP Adapter Write File operation: Select Write File. Specify the directory path to which to transfer files Select the pattern name for files to transfer.   
Samples
Stage Write File Integration to encrypt a file (IAR). This integration:
Downloads the input file from the input directory to the stage location.
Reads the downloaded file using the Stage Read Entire File operation with a file reference.
Encrypts and writes the file to the stage location using the Stage Write File operation with the Encrypt option and the PGP public key.
Writes the encrypted file to the output directory from the stage location.

Stage Read File Integration to decrypt an encrypted file (IAR). This integration:
Downloads the stage-encrypted file from the input directory to the stage location.
Reads and decrypts the downloaded file using the Stage Read Entire File operation with a file reference, the Decrypt option, and the PGP private key.
Writes the decrypted file to the stage location using the Stage Write File operation.
Writes the decrypted file to the output directory from the stage location.
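If you do not already have a PGP key pair to upload, you can generate and export one locally with GnuPG. A minimal sketch, assuming the gpg CLI is available; 'oic-demo', the passphrase, and the file names are placeholders, and the exported ASCII-armored files are what you would upload on the Certificates page as the Public and Private PGP keys. Depending on your gpg version, exporting the secret key may additionally prompt for the passphrase via pinentry.

```python
# Sketch: generate a PGP key pair and export it in ASCII-armored form, ready to
# upload on the OIC Certificates page (Type: PGP, Category: Public / Private).
# Assumes the "gpg" CLI is installed; names and passphrase are placeholders.
import subprocess

# Unattended key generation using gpg's documented batch parameter format;
# the RSA subkey provides the encryption capability used by Stage File.
batch_params = """
Key-Type: RSA
Key-Length: 2048
Subkey-Type: RSA
Subkey-Length: 2048
Name-Real: oic-demo
Name-Email: oic-demo@example.com
Expire-Date: 1y
Passphrase: changeme
%commit
"""
subprocess.run(["gpg", "--batch", "--gen-key"], input=batch_params.encode(), check=True)

# Export the public key (upload as Category: Public).
with open("oic-demo-public.asc", "wb") as pub:
    subprocess.run(["gpg", "--armor", "--export", "oic-demo"], stdout=pub, check=True)

# Export the private key (upload as Category: Private, together with its passphrase).
with open("oic-demo-private.asc", "wb") as priv:
    subprocess.run(["gpg", "--armor", "--export-secret-keys", "oic-demo"],
                   stdout=priv, check=True)
```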


How To Configure an Integration flow with Binary Content Using Rest Adapter as Trigger in Oracle Integration

Binary Content Type Support in the Oracle Integration REST Adapter Trigger

Introduction: The Oracle Integration REST Adapter now supports application/octet-stream in the trigger request and response. With this new capability it is possible to invoke an integration using binary content over REST, and similarly an integration flow can return binary content in response to a request over REST. Before we do a deep dive on the feature, let us start by understanding what application/octet-stream is.

About "application/octet-stream" MIME attachments: A MIME attachment with the content type "application/octet-stream" is a binary file. Typically, it will be an application or a document that must be opened in an application, such as a spreadsheet or word processor. If the attachment has a filename extension associated with it, you may be able to tell what kind of file it is. A .exe extension, for example, indicates it is a Windows or DOS program (executable), while a file ending in .doc is probably meant to be opened in Microsoft Word. No matter what kind of file it is, an application/octet-stream attachment is rarely viewable in an email or web client. If you are using a workstation-based client, such as Thunderbird or Outlook, the application should be able to extract and download the attachment automatically. After downloading an attachment, you must then open it in the appropriate application to view its contents.

How is this leveraged in the Oracle Integration REST Adapter: This feature allows a client application to send and receive structured or unstructured content as raw data (a stream of bytes) to and from an Oracle Integration flow. The customer can send files such as PDF, CSV, and JPG to Oracle Integration for processing, and integration flows can be modeled to send files such as PDF, CSV, and JPG back to client applications in the response. There is no restriction on the content type: the request or response is not parsed, it is simply written as a byte stream along with the appropriate header. The user has to select the binary option in the REST Adapter trigger wizard on the request side to consume a binary file sent by the client application; similarly, the user has to select the binary option in the REST Adapter wizard on the response side to produce binary content for a client application.

Oracle Integration REST Adapter trigger configuration and mapping:

Media types supported in the new feature: Oracle Integration now has the capability to expose a REST endpoint which can consume and produce octet-stream media types. The following image shows the media types supported in the REST configuration wizard.

Other media type selection: If the user wants to select a type that is not available in the drop-down, they can choose the other media type option in the drop-down and provide the media type name, as shown in the following screen.

Mapping elements for octet-stream: Once the trigger and invoke are configured as binary using the Oracle Integration REST connection, the mapping activity comes with a single element (Stream Reference). This stream reference accepts a file without any schema, because the request and response can carry any kind of data in any format.
Mapping is a "StreamReference" attribute which will take the binary content reference to process the binary data from and to the client application, the following image shows the mapping elements in both request and and response of a rest trigger.   Publishing a Rest endpoint of Type Swagger and Open API which has Octet-Stream in Rest Trigger                                     The Oracle Integration exposed rest endpoint metadata of swagger and openapi type which consumes and produces a binary type can we used in any client(for example, a swagger editor or a postman client) and we can use this metadata to create a request.


Integration Designer Pages - Progressive Web App UI Experience

Pre-requisites
Enable the oic.ics.featureflag.spa.designer feature flag to see the new Integration designer pages.

What's New
The new UI is built using Oracle JavaScript Extension Toolkit (Oracle JET), utilizing the full benefits of JavaScript, CSS3, and HTML5 design and development principles. This new UI is compliant with the latest UX standards and offers a consistent user experience across the Integration designer pages. The following are the highlights of the new features and enhancements included in the new UI:
Single configuration UI to view and configure dependent resources of an integration.
Inline edit of connection resource configuration.
Lookup editor enhancements for ease of editing large table sets using paging controls.
Single-click callout library configuration.
Pre-filtered results on the resource list page based on the logged-in username.
Enhancements on the resource list page to filter by created and last-updated username.
Progressive loading of UI contents.
Continuous and smooth scrolling of resource list contents.
Ease of accessing primary actions on the resource list page.

Changes in the top-level menu structure
The top-level menu in the Suite UI has been restructured to make way for the new UI. When the logged-in user has access to designer pages, a click on the 'Integrations' menu now opens a sub-menu with links to designer landing pages instead of a page redirection. Clicking the sub-menu links displays the respective landing page. Let's see the behavior explained through screens.

Existing Oracle Integration home page menu structure: clicking on the 'Integrations' menu redirects the user to the Integration pages through a complete browser refresh. Switching back to the home page is done by clicking on the 'Home' icon, which also does a full page refresh.

New menu structure: Integrations (designer pages), Monitoring, and Settings become top-level menus on the suite home page.

Integrations: all the designer pages are moved under the 'Integrations' menu. As mentioned above, after the SPA Designer feature is enabled, the 'Integrations' menu item in the OIC suite home has a drill-down option, indicated by the arrow next to the name. Clicking on 'Integrations' slides out the existing menu and shows the respective Integration features. Clicking on features/pages loads the respective content on the right side without refreshing the page. To go back to the home page menu items, users can click on the arrow pointing to the left in the header section.

Monitoring: users can access all the Integration monitoring pages under the 'Integration' sub-menu of the top-level 'Monitoring' menu.

Settings: users can access all the Integration-specific settings pages under the 'Integration' sub-menu of the top-level 'Settings' menu.

Designer UI Experience
Common list view experience: the designer pages follow the new common list view pattern, which combines the toolbar and table view to fulfill the complete functionality of the integration designer resources. The toolbar includes features like searching, sorting, and filtering, along with the applied filter and sorting details, a refresh option, and summary text. The table view has a number of unique features; one of them is to show only the required, minimal data on a row, with the secondary or extra details shown in the detail view section of that row.
The table row will show an overlay on mouse over will include the primary action buttons, action menu and open/close detail view. Toolbar Features Searching On the toolbar, the search icon will appear on left most and the search input text field will open when click on the search icon. The input text will show the placeholder text depends on the type of resource list page. Type the name and press <Enter> key will give the list of resources by applying the search by contains criteria.  There will be a cross icon next to search input field, click on the cross icon will clear and close the search input field and get the result without search input criteria. Filtering On the toolbar, the filter icon appears next to the search icon. The filter options are not shown on toolbar but all the filter options are appears on a popup which can be launched by clicking on the filter icon. The filter popup will have one or more filter options of different types which makes it easier to choose the desired filtering and sorting user wants to apply on the list.  Sorting The sorting menu is not available separately on the toolbar but this will be available as part of filter popup which we have explained in the above section. The sorting options will be applied with filter options by 'Apply' action on filter popup. Filter and Sorting Details The filter and sorting detail section will show all the filter and sort options applied. This section will render the details with the key value pair of applied filters and sorters. Each key value pair detail section has a cross icon to remove the filter or sorter individually. There will be 'Clear' option at the end of all the filter and sorter details to clear all the filters and sorter options. Summary Detail The summary detail section will show the number of total resources as a result after applying the search and filters, and how many resources are rendered and can be viewed in the list view port.  Default Filters The default filter has been introduced on Integrations, Connections, Lookups and Libraries designer listing pages. The default filter of updatedBy criteria with the logged in username. This filter will be applied on new browser session and this filter is preserved while navigating to different pages and even the subsequent login on the same browser session. Once the updatedBy filter is removed, then this default filter will not be applied even on subsequent login on the same browser session. Accessing Actions The actions are referred as primary actions and secondary actions which are accessible on overlay actions panel. This overlay actions panel will be rendered on mouse over of that row. On the actions panel, there would be maximum two primary actions based on the current status of that resource and the remaining actions are referred as secondary actions which will be rendered as menu items and that can be launched by hamburger icon on actions panel. Detail View Section The minimum and required details of any resource will be shown in the individual row and complete or extra details will be shown in its details section. The details section can be opened by the open details icon on the action overlay panel. Editor Experience Connection Editor Connection editor page is used to configure connection properties and security policy to be used in the orchestration canvas. In the new UI experience user can easily configure the endpoint information without going through multiple pop-ups.  
Lookup Editor Lookup editor page is used to store mapping of various values used by different applications, so it can be used in the integration for auto mapping. Navigating across various mapping for a large table is fast using the paging control.  Library Editor Library editor page is used to configure functions to be used in the orchestration canvas and also to export/import metadata. In the new UI experience we make it easier to configure the function with just a single click. Users can just select the checkbox next to the functions in the left panel and it automatically gets configured for both Orchestration and XPath types. Users can modify the param types and Save it. Here is the improved UI experience on the editor page where all the files/functions part of the registered library are exposed in the 'Functions' panel. Clicking on the checkbox next to the function and you are set to go to use the function as orchestration callout or XPath. Timezone Preference By default Designer pages are displayed based on the browser's timezone. To override the default and to set a preferred timezone use Preferences menu under the User Menu. When timezone preference is saved the page refreshes to display contents based on timezone set in preferences. The change in timezone is saved in cookies and persists across browser sessions. Report Incident While in the designer pages click on Report Incident menu brings up the Report Incident dialog. Make relevant entries and select Create to report an incident. New Features Integration Configuration New functionality and Single UI to perform various operations for an integration: View all dependent resources for the integration,  View the usage and status information of dependent resource  Configure dependent resource and persist the changes Perform all the above operation during import or after import at a later stage.


Calling JD Edwards Orchestrations from Oracle Integration: Quickly and Easily

Background
JD Edwards orchestrations empower an army of citizen developers and business analysts to design REST APIs for business applications without writing a single line of code. JD Edwards orchestrations expose business process steps tied together graphically through the robust semantics of REST standards, and they are a great way to simplify, integrate, and automate repeated tasks through digital technologies. JD Edwards orchestrations are executed on the AIS Server, but they are designed in a tool called Orchestrator Studio; with JD Edwards tools release 9.2.4.0, Orchestrator Studio is also part of the AIS Server, which further simplifies its deployment. Orchestrator Studio is a low-code/no-code tool that allows business analysts to leverage their knowledge of business applications, create a flow as a series of application tasks/steps, and expose it as a REST endpoint. As JD Edwards AIS/orchestrations gain traction and momentum, they have become the tool of choice for JD Edwards customers who are looking to integrate JD Edwards with cloud SaaS applications, PaaS services, or other on-premises applications.

Sample Use Case
Consider that you want to return all sales orders that are at a particular status, let's say 540, i.e. Print Pick, in the JD Edwards Sales Order application. This means you need a REST endpoint that takes the sales order status as an input (with 540 as the default) and returns an array of JD Edwards sales orders in the response. This information can be consumed to update any other third-party application with the state of the orders, or simply consumed from a process application to show the list of orders.

Basic Ingredients
A JD Edwards installation with the following components (the latest and greatest EnterpriseOne Tools release is recommended, with 9.2.1.x as the minimum): Orchestrator Studio and AIS Server.
OIC Agent (if JD Edwards is installed on-premises or not accessible directly from Oracle Integration).
JD Edwards artifacts: a JD Edwards orchestration to query orders at a particular status, as described above.
An OIC instance.

Steps

Creating a Connection with the JD Edwards AIS Server
Log in to the OIC instance and go to the Integrations page.
Go to the Connections page and click the "Create" button.
Search for the REST adapter and click Select.
Give the connection a meaningful name and identifier and click OK; please take note of this name, you will need it later.
Provide information for the following fields:
Connection Type: select "REST API Base URL" from the drop-down.
Connection URL: enter the AIS Server host and port as per your configuration, http://<aisserver>:<port>/jderest
Select Basic Authentication as the Security Policy and enter the user name and password.
If required, click the "Configure Agents" button and select the respective agent.
Click Test to test the connection, then click Save to save it.

Creating the Integration Flow
Go to the main Integrations page and click the "Create" button.
Select "App Driven Orchestration" as the integration style; you can select the "Scheduled Orchestration" style instead based on your business requirements.
Give the integration a desired name.
For simplicity and demo purposes I am adding a REST trigger to this integration: click the "+" button and select "Sample REST Endpoint" as shown below.
Provide a meaningful name for your endpoint and click the Next button.
Define the endpoint's relative resource URI; please note this is the OIC resource URI, so you can give any meaningful resource URI as desired.
Select the options to add/review request parameters as well as the response, as shown below. Add the desired parameters for the OIC orchestration; this time I am adding OrderStatus as discussed above, but this could be much more than just the order status. Select a JSON payload response and click the enter sample JSON <<<inline>>> link. Add the response of the OIC orchestration as an array of orders with the fields order number, customer number, and customer name. Click the "Ok" button, then the "Next" button, then the "Done" button, and save the endpoint; the summary should look like the one below.
Hover over the integration flow on the canvas and click the plus icon; you will be prompted to select the desired connection. Type the initials of the connection created earlier in the process and the connections will be filtered based on the text. Select the connection you created earlier. The "Configure REST Endpoint" wizard will be shown; follow the wizard to configure the JD Edwards orchestration invoke through the REST adapter. Provide information in the following fields (please refer to the picture below for more details):
What do you want to call your endpoint? "GetShippedSalesOrders".
What is the endpoint's relative resource URI? "/v3/orchestrator/JDEGetSalesOrders". Please note that "v3" in this URI depends on your orchestration version; refer to this link for more information. In this URI, "JDEGetSalesOrders" is the orchestration name; you can replace it with your own orchestration.
What action do you want to perform on the endpoint? Select "POST" from the drop-down.
Turn on "Configure a request payload for this endpoint".
Turn on "Configure this endpoint to receive the response".
Click the "Next" button. Select JSON as the request payload format and click enter sample JSON <<<inline>>>; please find my sample payload below. Enter your sample JSON request for the configured orchestration, as shown below. Click the "Ok" button, then the "Next" button. In the Response payload screen, likewise select JSON as the response payload format and click enter sample JSON <<<inline>>>. Paste your orchestration response here; you might get an error related to empty array notations as shown below. A JD Edwards orchestration response typically includes empty array notations; please replace those "[]" empty array notations with null as shown below. Click the "Next" button, then the "Done" button, and save the invoke operation.
Now that the JD Edwards orchestration has been added to the canvas, OIC has added two mappers to the canvas: one to map parameters from the OIC flow to the JDE orchestration, and one to map the response of the JDE orchestration to the OIC flow response. Click the edit icon of the request mapper added by OIC between the trigger and the JD Edwards REST orchestration invoke, as shown below. Map the parameters as shown below; note that here we are mapping the parameter from the trigger REST call to the JD Edwards orchestration. Then click the edit icon of the response mapping from the JDE orchestration to the OIC orchestration, by hovering over the mapper added by OIC between the JD Edwards REST orchestration invoke and the OIC flow terminal, as shown below. Here we will map the response from the JD Edwards orchestration to the main integration response. Please note the complexity of the structure of the JD Edwards orchestration response; this is because this particular orchestration is sending application grid information in the response. We will simplify the response structure using the Oracle Integration mapper.
To do that, drop the desired nodes from the JD Edwards grid data row set element onto the Oracle Integration response sub-element as desired. This automatically includes a for-each node to generate the array of orders based on the array of the JD Edwards grid rowset element. This greatly transforms the complex JD Edwards business application REST response into a simple JSON structure without writing a single line of code. Click the "Validate" button to validate the mapping, then click the "Close" button and the "Save" button.
Now let's add a tracking field to the integration: click the hamburger icon at the top right-hand side of the integration canvas and click the Tracking menu option. Drop the field "OrderStatus" as the primary tracking field for this integration. Click the "Save" button and close the "Business Identifier For Tracking" dialog. Save and close the integration, then activate it. Test the integration; in my case, here is the response from the integration. Note that the response has been transformed into the simplified JSON structure containing an array of JD Edwards orders, without writing a single line of code.

Summary
JD Edwards Orchestrator opens up all JD Edwards business applications, including custom business applications, through a REST interface; this greatly simplifies how JD Edwards applications can be integrated with cloud SaaS or any third-party applications. Having the ability to invoke JD Edwards orchestrations from Oracle Integration enables OIC customers to transact, interact, and integrate with JD Edwards business applications.

Benefits of calling JD Edwards orchestrations from OIC:
Easily integrate JD Edwards applications with Oracle Cloud applications like ERP Cloud, HCM Cloud, Engagement Cloud, etc.
Simplifies the creation of business workflows spanning JD Edwards and Oracle Cloud applications, using process applications in OIC.
Transforms the response from complex JD Edwards business application JSON into simpler JSON structures.

What's Next
I plan to write more blogs on leveraging JD Edwards orchestrations with OIC in due course, and for that I need your feedback on which topic is most critical for you. I would appreciate it if you could drop your vote through the comments here on the blog, or tweet me directly @prakash_masand, on which one you would like to see first from the list below:
Leveraging JD Edwards orchestrations for outbound integration with cloud applications, i.e. integrating with a cloud application when a business application event occurs in JD Edwards – through a table trigger event in JD Edwards, or through an interactive application update in JD Edwards.
Leveraging a business process application built in Oracle Integration to meet workflow requirements in JD Edwards – for example, triggering a sales order change approval process (workflow) designed in OIC interactively from JD Edwards business applications.
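If you want to sanity-check the orchestration itself before or after wiring it into OIC, you can call the AIS endpoint directly. A minimal sketch using the base URL and relative URI from the example above; the host name, credentials, and the input field name are placeholders that must match your own orchestration's inputs.

```python
# Sketch: call the JD Edwards orchestration directly on the AIS Server, using the
# same POST /jderest/v3/orchestrator/JDEGetSalesOrders endpoint configured in the
# OIC invoke above. Host, credentials and the input field name are placeholders;
# the request body must match the inputs defined in your orchestration.
import base64
import json
import urllib.request

AIS_BASE = "http://aisserver.example.com:7777/jderest"
USER, PASSWORD = "jde.user", "secret"

body = json.dumps({"OrderStatus": "540"}).encode()
req = urllib.request.Request(AIS_BASE + "/v3/orchestrator/JDEGetSalesOrders",
                             data=body, method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Authorization",
               "Basic " + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode())

with urllib.request.urlopen(req) as resp:
    orders = json.load(resp)

# The raw orchestration response is nested (grid data row sets); print it as-is
# and let the OIC mapper do the flattening described in the blog.
print(json.dumps(orders, indent=2))
```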


Integration

Integration Patterns - Publish/Subscribe (Part2)

In my previous blog post, I explained the publish/subscribe integration pattern and how easy it is to make use of its power with Oracle Integration Cloud – all without the need to set up a messaging infrastructure. The previous post covered the required steps to create the connection, the integration, and the trigger (using Postman) to publish a message. This second part explains step by step how to consume those messages.

1. Create an Integration
We will select the type Subscribe To OIC. Provide a name and a description, and optionally associate this integration with a package. Then you need to select a Publisher. These are available as soon as you configure a “Publish to OIC” integration. In my instance I have 2 active Publishers.

Now we need to decide who the consumer of the message is. For this exercise I will simply write the message to a local file on my machine. In order to do that I need a File Adapter and a Connectivity Agent*. Setting up a File Adapter and the required Connectivity Agent is out of scope for this article – but you can find the required information for the File Adapter here and how to set up the agent here. In a real scenario we would use an application or technology adapter to propagate that message to the end application/system.
*The Oracle On-Premises Agent, i.e. the Connectivity Agent, is required for Oracle Integration Cloud to communicate with on-premises applications.

On the right-side palette, you need to drag the File Adapter connection onto the canvas. Once you do that, you get the wizard below.
“What do you want to call your endpoint?”: here we can give a name for the file operation.
“Do you want to specify the structure for the contents of the file?” – Yes.
“Which one of the following choices would be used to describe the structure of the file content?” – Sample JSON document.

Now we need to specify where to write the file and the pattern for the name.
Specify an Output Directory: in this example I use a directory on my local machine.
File Name Pattern: the name of the file should be concatenated with %SEQ%, an incremental variable that is used to avoid files having the same name. Hovering over the question mark provides more information on this.

The last step is the definition of the file structure. Since we selected a JSON format, I uploaded a very simple sample, as seen below. This is the end flow with both source and target endpoints configured. We just need to do some mapping now! By clicking the Create Map icon we get the screen below, where we can simply drag the attribute message from source onto target.

2. Activate the Integration
We are now ready to activate the integration. You can choose to enable tracing and payload for debugging. You can create more subscribers – I cloned the existing one and named it SubscribeMessage_App2, so that we have two consumers for the published message.

3. Run the Integration
Now we can use Postman to trigger the publish message – exactly the same as step 4 from the previous post.

4. Monitor the Results
When we check the Tracking option under Monitoring, we can see the PublishMessage instance and two SubscribeMessage instances – as expected. The final step is to verify that two files were created on my local machine. Simple, yet powerful. For more information on Oracle Integration Cloud please check https://www.oracle.com/middleware/application-integration/products.html
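For reference, triggering the publisher from a script instead of Postman only requires a small JSON body carrying the message attribute that gets mapped into the file. A minimal sketch; the publish endpoint URL is a placeholder for whatever the "Publish to OIC" integration from part 1 exposes.

```python
# Sketch: publish a message to the "Publish to OIC" integration from part 1,
# instead of using Postman. The endpoint URL is a placeholder; the JSON body
# just carries the "message" attribute that the subscribers map to the file.
import base64
import json
import urllib.request

PUBLISH_URL = "https://myoic.example.com/ic/api/integration/v1/flows/rest/PUBLISHMESSAGE/1.0/publish"
USER, PASSWORD = "oic.user", "secret"

body = json.dumps({"message": "Hello from the publisher"}).encode()
req = urllib.request.Request(PUBLISH_URL, data=body, method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Authorization",
               "Basic " + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode())

with urllib.request.urlopen(req) as resp:
    print("HTTP status:", resp.status)
# Each active subscriber (SubscribeMessage, SubscribeMessage_App2) should now
# write its own copy of the message to the configured local directory.
```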


Integration

How to send email with attachments in OIC?

Have you ever encountered a scenario where the requirement was to send attachments along with email notification in OIC and you could not? Well, now it is possible. The new feature makes it really easy to configure notification activity to add attachments along with the email. Prerequisite Enable feature flag:  oic.ics.console.notification-attachment Click here to learn on how to enable feature flag. The minimum Oracle Integration version required for the feature is 191020.0200.32001. Note - The notification attachment functionality is currently supported only in OIC Gen 1. Step By Step Guide to Send Notification with Attachment There are multiple ways in OIC to work with files. Some of the options are i) configure REST adapter to accept attachments as part of the request, ii) use Stage file activity to create a new file or iii) use FTP adapter to download the file to OIC from remote location for further processing. Any file reference(s) created by upstream operations can be used to configure attachments in the notification activity. Let us learn how we can configure notification action to send email with attachments in simple steps:  For this blog, we will clone the sample integration ‘Hello World’ that is available with OIC. Navigate to the integration landing page, clone the 'Hello World' integration and name it 'Hello World with Attachment' Navigate to the canvas page for the newly created 'Hello World with Attachment' integration Edit the configuration for the REST trigger and change the method type to POST, media type to multipart/form-data and configure request payload parameters to accept attachments. We will add a FTP connection (DownloadToOIC in the image below) and select 'download' operation to download the file to OIC. Here, I have configured FTP connection to download a 'Sample-FTP.txt' file which is already created and present in the remote location. Now, let us add a Stage File action to create a new file. Here, I have configured stage file to write the request parameters - name, email and flow id separated by a comma and name it as 'StageWriteSample.txt'. Refer to the blog to learn more about how to use Stage activity. This will allow us to configure multiple files as attachments in the notification activity later. The updated flow now looks as shown below Edit the notification activity (named as sendEmail in the sample integration) and we should see a new section "attachments" next to the body section. Clicking on the add button (plus sign) in the attachments section will take us to a new page to choose the attachment. We have three file references (highlighted in yellow) available to choose from - attachment from REST connection, file reference from the stage file write operation and file reference from the FTP download operation. We can select file reference(s) each at a time to send the files as attachments. User can edit or delete the attachment once added. The notification activity after configuration should have 3 attachments. Save and activate the integration and now your integration is ready to send emails with attachments. Sample email is shown below when the above flow is triggered. Hello World with Attachment integration created is attached for reference and can be used after configuring the FTP connection. The size limit on the email is 1 MB for OIC Gen 1 and 2 MB for OIC Gen 2. Both email body and attachment are considered in calculating the total size. Hope you enjoyed reading the blog and this new feature helps in solving your business use-cases!
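Since the cloned trigger now accepts multipart/form-data, a client can exercise the flow by posting a small form payload plus a file. A minimal sketch using the third-party requests library; the endpoint URL, form field names, and file are placeholders that must match how you configured the trigger's request parameters and attachment part.

```python
# Sketch: invoke the "Hello World with Attachment" REST trigger with a
# multipart/form-data request carrying an attachment. Uses the third-party
# "requests" library (pip install requests). URL, field names and the file
# are placeholders; they must match the trigger's configured parameters.
import requests

URL = "https://myoic.example.com/ic/api/integration/v1/flows/rest/HELLO_WORLD_WITH_ATTACHMENT/1.0/names"
AUTH = ("oic.user", "secret")

form_fields = {"name": "Jane", "email": "jane@example.com", "flowid": "42"}
with open("report.pdf", "rb") as attachment:
    files = {"attachment": ("report.pdf", attachment, "application/pdf")}
    resp = requests.post(URL, auth=AUTH, data=form_fields, files=files)

print(resp.status_code, resp.text)
# Remember the size limits mentioned above: 1 MB (Gen 1) / 2 MB (Gen 2)
# for body plus attachments combined.
```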


Integration

Using the next generation Activity Stream

To debug an instance or check its payload, the user previously had to use the Audit Trail and Tracing on the Tracking Details screen. Since the information was scattered across two places, the user had to keep switching between them to get the complete picture of the instance. With the new Activity Stream, we are combining the Audit Trail with the Tracing information into a more compact, easily readable Activity Stream. Note: the next generation Activity Stream is applicable to orchestration integrations only.

Prerequisite for Activity Stream
Enable feature flag: oic.ics.console.monitoring.tracking.improved-activity-stream. The minimum Oracle Integration version required for the feature is 191030.0200.32180.

Step By Step Guide to View the Activity Stream
Enable Tracing (with payload if required) for the integration. This is to view detailed payload information during the development cycle; for production, it is recommended to keep Tracing turned off.
Run the integration to create an instance.
Navigate to the Monitoring → Tracking page and find the instance for which you want to view the Activity Stream.
Click on the primary identifier link of the chosen instance to navigate to the Tracking Details page.
Click View Activity Stream in the hamburger menu to display the new Activity Stream panel.
NOTE: to view the payload, enable Tracing with payload. Follow "How to enable and use tracing in less than 5 min" to enable tracing.

Features in the Activity Stream:
Click on Message/Payload to view (lazy load) the payload for the action.
Expandable loop section to view flow execution inside a For-Each/While loop (available only if tracing with payload is enabled).
Red node to indicate an error. Errored instances are displayed in descending execution sequence to show the error at the very top.
Expand payload to full screen.
Date and time are shown according to User Preferences.
Each Message/Payload section has a Copy to Clipboard option that allows the user to copy the payload to the clipboard.
Since payload information is derived from log files (which can rotate as and when newer data gets written), older instances might no longer display the payload information in the Activity Stream.
There are two levels of download: the Download button at the top, which downloads the complete Activity Stream, and the Download button inside a Message/Payload section, which downloads that specific Message/Payload.

REST API: to view the Activity Stream for a given instance ID:
curl -u <user-name>:<password> -k -v -X GET -H 'Content-Type:application/json' https://<host-name>/ic/api/integration/v1/monitoring/instances/<instance-id>/activityStream
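The same endpoint is easy to call from a script when you want to archive or grep the activity stream. A minimal sketch, equivalent to the curl command above; host, credentials, and instance id are placeholders for your own environment.

```python
# Sketch: fetch the Activity Stream for one instance over the monitoring REST API,
# equivalent to the curl command above. Host, credentials and instance id are
# placeholders.
import base64
import json
import urllib.request

HOST = "https://myoic.example.com"
INSTANCE_ID = "1234567"
USER, PASSWORD = "oic.user", "secret"

url = f"{HOST}/ic/api/integration/v1/monitoring/instances/{INSTANCE_ID}/activityStream"
req = urllib.request.Request(url)
req.add_header("Content-Type", "application/json")
req.add_header("Authorization",
               "Basic " + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode())

with urllib.request.urlopen(req) as resp:
    activity_stream = json.load(resp)

# Pretty-print so the combined audit/tracing entries are easy to read or archive.
print(json.dumps(activity_stream, indent=2))
```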


How SOA Suite Adapter Can Help Leverage your On-premises Investments

The SOA Suite Adapter on Oracle Integration (OIC) enables you to take advantage of the latest feature-rich adapters on OIC while leveraging your existing investments in SOA Suite and Service Bus. It provides a rich design-time experience to create a single connection to SOA Suite / Service Bus, browse the services running on them, and create integrations. At runtime, it relies on the standard SOAP and REST Adapters, with or without the Connectivity Agent, depending on how the SOA Suite / Service Bus is accessible over the network. The current SOAP and REST adapters on OIC already provide integration to these services, but with this new adapter you can do away with the hassles of multiple connections or fetching service metadata manually.
The SOA Suite adapter supports connectivity to:
Oracle SOA Suite and/or Oracle Service Bus hosted on-premises
Oracle SOA Suite and/or Oracle Service Bus hosted on SOA Cloud Services

Configuring the SOA Suite Adapter to connect to a SOA Suite / Service Bus instance
In the connection palette, select the SOA Suite Adapter. Provide a meaningful name for this connection and click 'Create'. This opens the page where the connection details can be configured.
Configure connectivity: To determine what URL to provide here, examine the topology of the SOA Suite / Service Bus instance, i.e. whether the instance is accessible through:
the Load Balancer URL,
the OTD or Cluster Frontend URL, or
just the Managed Server URL where the SOA Suite / Service Bus instance is running.
Configure security: Provide the SOA Suite / Service Bus user credentials here. If you are integrating with SOA Suite, make sure this user is part of the 'Operators' group and has the 'SOAOperator' role on that server. Likewise, if you are integrating with Service Bus, make sure this user is part of the 'Deployers' group on that server.
Configure Connectivity Agents: If the SOA Suite / Service Bus instance is not directly accessible from Oracle Integration, e.g. if it is deployed on-premises or behind a firewall, a Connectivity Agent needs to be configured for this connection. This can be done using the 'Configure Agents' section. A Connectivity Agent may not be required when the SOA Suite / Service Bus URL is publicly accessible, e.g. if deployed on SOA Cloud Service. To learn more about the Connectivity Agent, check out these posts:
New Agent Simplifies Cloud to On-premises Integration
The Power of High Availability Connectivity Agent
Test and save the connection: A simple 'Test' of the connection on this page verifies that the SOA Suite / Service Bus is accessible through the connection details provided, that the version of this instance is supported by the adapter, and that the user is authenticated and authorised to access this instance.

How to configure a SOA Suite invoke endpoint in an Orchestration Flow
(This adapter can be configured only as an invoke activity for the services exposed by SOA Suite / Service Bus.)
Drag and drop a SOA Suite adapter connection into an orchestration flow. Name the endpoint and proceed to configure the invoke operation. If only a SOA Suite or a Service Bus instance is accessible through the URL provided on the connections page, it is shown as a read-only label. If both are accessible, they are shown as options. If the options are shown, select 'SOA' or 'Service Bus' to configure this endpoint to invoke SOA Composites or Service Bus projects respectively.
To configure this endpoint to invoke SOA Composites (if both SOA and Service Bus are available as options, select 'SOA'):
Select a partition to browse the composites in it.
Select a composite to view the services that it exposes.
To configure this endpoint to invoke Service Bus Projects (if both SOA and Service Bus are available as options, select 'Service Bus'):
Select a project to view the services that it exposes.

Configuring service details: Select a service from the desired SOA composite or Service Bus project to integrate with.
If the selected service is a SOAP web service, the operation, the request/response objects, and the message exchange pattern are displayed. SOAP services with synchronous request-response or one-way notifications are supported. Asynchronous requests are supported as one-way notifications only; callbacks are currently not supported.
If the selected service is a RESTful web service, proceed to the next page to complete further configuration of the resource, verb, request and response content types, query parameters, etc. REST services which have schemas defined (i.e., non-native REST services and non-end-to-end-JSON-based REST services) are supported. The following request and response content types are supported: application/xml and application/json.
Proceed to the next page to view the summary and complete the wizard. The newly created endpoint can now be seen in the orchestration flow. The request and response objects of this invoke are available for mapping in the orchestration.

Runtime invocation from OIC to SOA composites / Service Bus projects: Once the request and response objects are mapped, this flow can be activated like any other flow on OIC. The activated flow is ready to send requests to running SOA composites / Service Bus projects via SOAP or REST invocations. You can use the OIC Instance Tracking page to monitor the runtime invocation after the flow is activated and invoked.

What this adapter needs on the SOA Suite / Service Bus side
Supported SOA Suite versions:
Oracle SOA Suite 12.2.1.4 onwards
Oracle SOA Suite 12.2.1.3 - with these patches applied: SOA Suite: Patch 29952023; Service Bus: Patch 29963582
Supported OWSM policies:
For SOAP web services:
oracle/http_basic_auth_over_ssl_service_policy
oracle/wss_username_token_over_ssl_service_policy
oracle/wss_http_token_over_ssl_service_policy
oracle/wss_username_token_service_policy
oracle/wss_http_token_service_policy
no authentication policy configured
Services protected by multiple policies are not supported.
For RESTful web services:
oracle/http_basic_auth_over_ssl_service_policy
oracle/wss_http_token_service_policy
no authentication policy configured
Services protected by multiple policies are not supported.
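Before creating the connection, it can be useful to confirm that the chosen URL is actually reachable from wherever the runtime call will originate (the connectivity agent host, or OIC itself if the URL is public). The sketch below simply requests the WSDL of a deployed composite service; the host, port, partition, composite and service names are illustrative, and the standard soa-infra URL pattern is assumed here rather than taken from this post.

curl -k -u weblogic:'<password>' \
  'https://soa.example.com:7002/soa-infra/services/default/MyComposite/MyService?WSDL'

A successful response returning the WSDL is a reasonable indication that the host, port and credentials you plan to use in the connection are workable.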


Delisting of unsupported HCM SOAP APIs

Introduction: Oracle HCM Cloud supports a set of SOAP services. They are listed at link. The Oracle HCM Cloud Adapter, which is part of Oracle Integration Cloud, should list only the supported SOAP services and ignore any other HCM SOAP services. Support for displaying only the supported SOAP services has been added in OIC release 19.3.3.0.0. This support can be enabled via a feature flag (oic.cloudadapter.adapters.hcmsoapapis-ignore).
Behavior of new integration flows: Once the feature flag is turned on, end users will be able to access only the HCM services listed at the link mentioned in the introduction when they create new integration flows. The adapter wizard will list only the services documented as supported.
Behavior of old integration flows: End users will be able to edit, view and activate old integration flows as before. Old integration flows are not impacted by this change in the adapter. If a new adapter endpoint is added to an old integration flow, only the supported HCM services will be available for consumption.
Note: The end user experience will be uniform across all Fusion Application adapters, such as the Oracle Engagement Cloud Adapter and the Oracle ERP Cloud Adapter, along with the Oracle HCM Cloud Adapter. This support does not have any impact on REST resources accessible via the Fusion adapters.



Integration Patterns - Publish/Subscribe (Part1)

Broadcasting, Publish/Subscribe, Distributed Messaging, One-to-many: these are just some of the names for the same integration pattern, which is one of the most powerful available for connecting multiple systems. In a nutshell, this pattern is about:
A source system publishes a message.
Target systems subscribe to receive that message.
This enables the propagation of that message to all the target systems that subscribe to it, as illustrated in the picture below (source: https://docs.oracle.com/cd/E19509-01/820-5892/ref_jms/index.html).

This pattern is not new; in fact, it has been around for decades. It powered distributed systems with its inherent loose coupling and independence. Publishers and subscribers are loosely coupled, which allows the systems to run independently of each other. In the traditional client-server architecture, a client cannot send a message to a server that is offline. In the Pub/Sub model, message delivery is not dependent on server availability.

Topics vs. Queues
The difference between a topic and a queue is that all subscribers to a topic receive the same message when the message is published, whereas only one subscriber to a queue receives a message when the message is sent. This pattern is about topics.

The hard way
From a vendor-neutral point of view, if an organization needs a messaging infrastructure, it will typically need to set up hardware, install the OS and the messaging software, take care of configuration, and create and manage users, groups, roles, queues and topics - and this is only for the development environment. Then we have test and production, which may require an HA cluster. You can see the direction this is going: it adds complexity.

The easy way
Fortunately, OIC abstracts that complexity from the user. It is Oracle-managed: the topics are created and managed by Oracle. From an integration developer's point of view, the only requirement is to make use of the "ICS Messaging Service Adapter", as we will explain in a bit. This brings the benefits of messaging to those who did not require the full extent of capabilities that a messaging infrastructure provides and were typically put off by its complexity.

Use Cases
There are plenty of use cases that would benefit from this solution pattern:
A user changes address data in the HCM application
A new contact/account is created in the Sales or Marketing applications
ERP purchase orders need to be shared downstream
Oracle's OIC adapters support many of the SaaS business events. How to enable that has been described in another blog entry: https://blogs.oracle.com/imc/subscribe-to-business-events-in-fusion-based-saas-applications-from-oracle-integration-cloud-oic-part-2

Implement in 4 Steps
For this use case, we will just use a REST request as the publisher.

1. Create a REST trigger
Go to Connections and create a new one. Select the REST Adapter. Provide a name and a description. The role should be Trigger. Press Create. Then you can save and close.

2. Create an Integration
We will select the type Publish To OIC, which provides the required structure. Provide a name and a description, and optionally associate this integration with a package. Now we can drag the connection we created before from the palette on the right side into the Trigger area on the canvas (left side). The REST wizard pops up. We can add a name and a description. The endpoint URI is /message; that is the only parameter we need. We want to send a message, therefore the action is POST. Select the checkbox for "Configure a request payload for this request" and leave everything else as default. The payload format we want is JSON, and we can insert a sample inline, as seen in the picture. That is all for the REST adapter configuration! You should also add a tracking identifier; the only element available is the message element.

3. Activate the Integration
We are now ready to activate the integration. You can choose to enable tracing and payload for debugging. (Your activation window might look a bit different, as this instance has API Platform CS integrated for API publishing.)

4. Test the Integration
After activation, you see a green banner at the top of your screen with the endpoint metadata. Here you can find the endpoint URL to test the REST trigger we just created. Using Postman (or any other equivalent product) we can send a REST request containing the message we wish to broadcast. And when we check the Tracking instances under Monitoring - voilà, we see the instance of the integration we just created. And here we have the confirmation that the payload was sent to the topic!

In Part 2 of this blog series we cover the subscribers!
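For example, assuming the inline JSON sample defined a single message element, the test request from Postman or curl could look like the following. The host, credentials and integration identifier are illustrative placeholders; the exact URL should be taken from the endpoint metadata shown in the activation banner.

curl -u <oic-user>:<password> \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{ "message" : "Hello subscribers, this is a test broadcast" }' \
  'https://<oic-host>/ic/api/integration/v1/flows/rest/PUBLISH_TO_OIC_SAMPLE/1.0/message'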


A Simple Guide to Asynchronous calls using Netsuite Adapter

The Oracle Netsuite Adapter provides a no-code approach for building integrations with Netsuite. The current Netsuite adapter in Oracle Integration Cloud already allowed the user to make synchronous CRUD calls to Netsuite and also provided extensive search capabilities. With a new update, we are now adding support to perform all of the above operations as asynchronous calls against Netsuite. As we will see in this post, the user can configure a simple toggle during Netsuite invoke configuration and let the adapter internally handle all the intricacies of asynchronous processing, such as submitting the asynchronous job, checking its status and getting the results, without the user needing to configure separate invokes for each.

How to configure the Netsuite Adapter to make asynchronous calls?
When configuring an invoke activity using a Netsuite connection on the orchestration canvas, the user can now toggle between synchronous and asynchronous calls on the Netsuite Operation Selection page by selecting the appropriate Processing Mode, as shown in the image below. This selection is valid for Basic (CRUD) and Search operation types. Note that Miscellaneous operations do not support asynchronous invocations. The user can also click the Learn More About Processing Mode link next to the Processing Mode option in the configuration wizard to get inline help on the feature.

How to model an orchestration flow with a Netsuite invoke configured for asynchronous Basic (CRUD) operations?
As mentioned above, the user can configure a particular Netsuite invoke to use the asynchronous processing mode by selecting the appropriate radio button during endpoint configuration. Once configured, the Netsuite endpoint thus created will automatically either submit a new asynchronous job or check the job status and get the results, based on certain variables being mapped properly. Below is a typical flow modeled to utilize a Netsuite invoke configured to make an asynchronous basic operation call. Let's look at the high-level steps involved in properly modelling an orchestration flow as shown above:
1. The integration flow can be either an App Driven Orchestration or a Scheduled Orchestration flow.
2. At the beginning of the flow, before invoking the Netsuite asynchronous operation, the user must create and initialize the following two variables: jobId and jobStatus. Care should be taken while initializing the values of these variables to ensure the condition defined in the next step is satisfied the first time; e.g., use -1 as the initial value. (This step is represented by initializeVariables in the flow diagram above.)
3. Create a While loop activity and provide the condition: $jobStatus != "finished" and $jobStatus != "finishedWithErrors" and $jobStatus != "failed"
4. In the request Map activity for the Netsuite invoke configured to make asynchronous calls, apart from the mappings required for the business use case, the user must map the jobId variable created in step 2 to the jobId defined in the Netsuite request schema under the AsyncJobParameters element, as shown in the image below.
5. After the Netsuite invoke activity, the user should use an Assign activity to assign the jobId and jobStatus variables created in step 2 with values from the response of the Netsuite invoke activity. (This is represented by ReAssignVariables in the flow diagram shown at the beginning of this section.)
6. The user can now configure a Switch activity with either the following condition or a variation of it based on the business needs. If we follow the condition above, this Switch activity results in two routes being created in the flow:
6.a. jobStatus is either finished, finishedWithErrors or failed: the user can now get the results from the Netsuite invoke activity's response and, based on the business needs, process the results. For example, for an add customer asynchronous job, if the job finished successfully without errors, the user can get the internalIds of the created Customer records.
6.b. jobStatus is neither of the above values: this means that the asynchronous job is still running. Hence, before we can get the job results, one can either perform certain other operations or wait, and then loop back to the While loop created in step 3.
Thus, as can be seen from this example, the Netsuite adapter will automatically either submit a new asynchronous job or check the job status and get the results, based on the jobId being passed in the request.

How to model an orchestration flow with a Netsuite invoke configured for asynchronous Search operations?
This is quite similar to how we model asynchronous basic (CRUD) operations; the only differences arise from the fact that the result returned is paginated. Below is a typical flow modeled to utilize a Netsuite invoke configured to make an asynchronous search operation call. Let's look at the high-level steps involved in properly modelling an orchestration flow as shown above:
1. The integration flow can be either an App Driven Orchestration or a Scheduled Orchestration flow.
2. At the beginning of the flow, before invoking the Netsuite asynchronous operation, the user must create and initialize the following three variables: jobId, pageIndex and totalPages. Care should be taken while initializing the values of these variables to ensure the condition defined in the next step is satisfied the first time; e.g., use -1 as the initial value. (This step is represented by InitializeVariables in the flow diagram above.)
3. Create a While loop activity and provide the condition: integer( $pageIndex) <= integer( $totalPages)
4. In the request Map activity for the Netsuite invoke configured to make asynchronous calls, apart from the mappings required for the business use case, the user must map the jobId and pageIndex variables created in step 2 to the jobId and pageIndex defined in the Netsuite request schema under the AsyncJobParameters element, as shown in the image below.
5. After the Netsuite invoke activity, the user should use an Assign activity to assign the jobId variable created in step 2 with values from the response of the Netsuite invoke activity. (This is represented by ReAssignVariables in the flow diagram shown at the beginning of this section.)
6. The user should now configure a Switch activity with a condition to check whether the status of the submitted job is finished. This Switch activity results in two routes being created in the flow:
6.a. status is finished: in this route, the user should create an Assign activity (represented by IncrementPageIndex in the flow diagram shown at the beginning of this section) which increments the pageIndex variable and assigns the totalPages variable with the actual value from the results of the asynchronous job performed in the Netsuite invoke. The two images below show the two assignments needed in this Assign activity, for the pageIndex variable and for the totalPages variable.
The user can now get the results from the Netsuite invoke activity's response and, based on the business needs, process the results. For example, for a search customer asynchronous job, if the job finished successfully without errors, the user can get the Customer records that were searched for.
6.b. status is anything other than finished: this is route 2 of the Switch activity introduced in step 6 above. It means that the asynchronous job is either still running, finishedWithErrors or failed. The user should introduce another Switch activity in this route to deal with jobs which are finishedWithErrors or failed. The otherwise condition of the new Switch activity means that the job is still running, in which case control should loop back to the While loop created in step 3.
Thus, as can be seen from this example, the Netsuite adapter now allows the user to make use of its extensive search capabilities in asynchronous mode, with full support for retrieving the paginated result set.

How to request this feature?
This feature is currently in controlled availability (feature flag: oic.cloudadapter.adapter.netsuite.AsyncSupport) and is available on request. To learn more about features and "How to Request a Feature Flag", please refer to this blog post.
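To make the control flow of the asynchronous pattern easier to picture, here is a shell-style sketch of the same submit-and-poll logic for the basic (CRUD) case. It is purely an illustration of the orchestration steps above, not a way of calling the adapter: invoke_netsuite_async is a stand-in for the Netsuite invoke activity, and the response field names are assumptions.

# illustration only: the submit/poll loop the orchestration implements
jobId="-1"; jobStatus="-1"                        # initializeVariables (step 2)
while [ "$jobStatus" != "finished" ] && \
      [ "$jobStatus" != "finishedWithErrors" ] && \
      [ "$jobStatus" != "failed" ]; do            # While loop condition (step 3)
  # jobId=-1 on the first pass submits a new job; later passes check that job's status
  response=$(invoke_netsuite_async "$jobId")      # stand-in for the Netsuite invoke activity
  jobId=$(echo "$response" | jq -r '.jobId')      # ReAssignVariables (step 5); field names assumed
  jobStatus=$(echo "$response" | jq -r '.status')
done
# once the loop exits, process the results, e.g. collect internalIds of created records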


Introducing the Oracle OpenWorld Session "Compliance and Risk Management Realized with Analytics and Integration Services" - CAS2657

I am looking forward to seeing you all at Oracle OpenWorld. We are less than a week out, and with so many great sessions I wanted to highlight CAS2657 - Compliance and Risk Management Realized with Analytics and Integration Services. I am excited to be presenting with these two knowledgeable people: Conny Bjorling, Skanska Group, and Lonneke Dikmans, eProseed.

Please join me, Simone Geib, Director of Integration at Oracle, Conny and Lonneke as we describe Skanska's common integration platform and the role Oracle Integration (OIC) plays as the central component of the platform. Conny and Lonneke will walk you through Skanska's "Sanctions" project, which integrates Oracle Fusion, Microsoft Dynamics, Salesforce and bespoke systems with Oracle Analytics Cloud through Oracle Integration, to ensure that none of the customers and suppliers that Skanska works with are on a sanctions list. We will also discuss a future part of the project where Skanska will introduce Integration Insight, a capability of Oracle Integration, to provide real-time visibility into the business process through business milestones and metrics that are easily mapped to the implementation and aggregated and visualized in business dashboards.

Conny Bjorling is Head of Enterprise Architecture at Skanska Group. He has 20+ years of experience in senior IT and Finance roles in Retail (FMCG), Banking & Finance and Construction & Project Development, focusing on Cloud adoption and agile architecture in the Cloud. He is passionate about the business value of data.

Lonneke Dikmans is partner and CTO at eProseed. She has been working as a developer and architect with Oracle tools since 2001 and has hands-on experience with Oracle Fusion Middleware and Oracle PaaS products like Oracle Kubernetes Engine, Oracle Blockchain Cloud Service, Oracle Integration Cloud Service, Mobile Cloud Service and API Platform Cloud Service, and languages like Java, Python and JavaScript (JET, node.js etc). Lonneke is a Groundbreaker Ambassador and Oracle Ace Director in Oracle Fusion Middleware. She publishes frequently online and shares her knowledge at conferences and other community events.


Aggregator Pattern in Oracle Integration Cloud via Parking Lot

Background
The parking lot pattern is a mature design for storing data in an intermediate stage before processing it into the end system at the required rate. The detailed implementation of the parking lot pattern can be done with a variety of storage technologies, but a database table is strongly recommended for simplicity. In this blog, we will use the parking lot pattern in Oracle Integration Cloud (OIC) to explore a solution for the Aggregator pattern.

Problem Definition
In OIC, an aggregator collects and stores individual messages until a complete set of correlated messages has been received; the aggregator then acts as a filter and publishes a single message derived from the complete set of correlated messages. The parking lot pattern provides a solution for this scenario. Individual messages are parked in the intermediate store, and the aggregator in this OIC parking lot pattern serves as the special filter that receives a stream of messages and identifies whether the messages are correlated. Once a complete set of messages has been received and marked as complete, an aggregated message collected from each correlated message is published as a single message to the output channel for further processing.

Design Solution
Process the input data/messages in the order they come in.
Each message is parked in storage for x minutes (the parking time) so the system has a chance to aggregate correlated messages.
The maximum number of parallel processes can be configured to throttle the outgoing calls.
A set of integrations, connections and database scripts use the current out-of-the-box OIC features.
The solution is generic and can be used with various typed business integrations without modifying the provided integrations.
Error handling covers both system/network errors and bad requests.

Database Schema
The key piece of the parking lot pattern is the database schema in the design pattern. The Message table is explained here (a sample DDL sketch is included at the end of this post):
ID (NUMBER): the unique ID/key for the row in the table.
STATUS (VARCHAR): used for state management and logical delete with the database adapter. This column holds one of three values: 1. N - New (not processed); 2. P - Processing (in-flight interaction with the slower system); 3. C - Complete (the slower system responded to the interaction). The database adapter polls for 'N'ew rows and marks a row as 'P'rocessing when it hands it over to an OIC integration.
PAYLOAD (CLOB): the message that would normally be associated with a component is stored here as an XML clob.

Implementation Details
Integration / Details / Sample Integration in Par File / Connections:
Business Front-end Integrations: receive the typed business payload and call the producer with an opaque interface. Sample: EmployeeServiceFront(1.0), OrderServiceFront(1.0). Connections - Invoke: EmployeeService, OrderService.
Producer: receive a new record, create a new row in the group table if it does not exist, and mark the group status as 'N'. Sample: RSQProducer(1.0). Connections - Trigger and Invoke: RSQProducer; Invoke: RSQ DB.
Group Consumer: scheduled to run every x minutes; invokes a message consumer. Sample: RSQGroupConsumer(1.0). Connections - Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
Message Consumer: mark the message status as 'P', invoke the Dispatcher, and delete the message. Sample: RSQMessageConsumer(1.0). Connections - Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.

Screen Shots of the Actual Integration Steps
You can deploy this par file.
It has the following connections that need configuring and activating.
1. Import the par file into Packages in Oracle Integration.
2. Invoke and trigger connections: initial unconnected status.
3. Configure and activate the database connection, the provided sample database called RSQ DB. An Oracle Autonomous Transaction Processing (ATP) database is used in this scenario (information on deploying an ATP instance in Oracle Cloud can be found here).
4. Trigger and invoke the Message Consumer; in the sample it is called RSQMessageConsumer, and it causes load distribution of calls to the message consumer. It requires the connection URL and the corresponding admin authentication. It processes the active messages of a given group:
Receives the group id/type from the group consumer.
Loads the active messages of the group ordered by sequence-id. The messages have to be at least # (parking time) old.
Loops through the active messages: marks the message status as 'P', invokes the Dispatcher using its opaque interface, and deletes the message.
Calls a stored procedure to: mark the group status as 'C' if there are no active messages, or mark the group status as 'N' if there are new active messages.
5. Trigger the manager interface; in the sample, the connection RQSManager is used to invoke the parking lot pattern interface in Oracle Integration. It currently supports three operations: get configs, update configs, and recover group.
6. The Producer integration, connected to the database, is used to invoke the producer interface; in the sample it is called RSQProducer. It is the entry point of the parking lot pattern:
Receives the resequencing message.
Creates a new row in the group table if it is not already there.
Marks the status of the group as 'N'.
Creates a message in the message table.
7. The Dispatcher is a request/response integration which reconstructs the original payload and sends it to the real backend integration:
Receives the message.
Converts the opaque payload to the original typed business payload.
Uses the group id to find the business endpoint and invokes it synchronously.
8. Business integrations are the real integrations that customers use to process the business messages. They have their own typed interfaces. I used two test servers to demonstrate some posted data.
9. Error handling: recovering from a system/network error. If the problem is caused by a system error such as a networking issue, then after fixing the problem you can recover by resubmitting the failed message consumer instance.
10. Connections and integrations sample after triggers and invokes.

Reference:
http://www.ateam-oracle.com/the-parking-lot-pattern
https://blogs.oracle.com/soacommunity/throttling-in-soa-suite-via-parking-lot-pattern-by-greg-mally
https://blogs.oracle.com/integration/ordering-delivery-with-oracle-integration
https://www.enterpriseintegrationpatterns.com/patterns/messaging/Aggregator.html
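As a reference point for the Message table described above, a minimal DDL sketch is shown below, wrapped in a sqlplus call against an ATP connection. The column sizes, identity clause and any additional columns the sample may use (such as a group id or sequence id) are assumptions; the authoritative definitions are the database scripts delivered with the par file.

sqlplus admin/'<password>'@<atp_service_name> <<'SQL'
CREATE TABLE MESSAGE (
  ID      NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- unique key for the row
  STATUS  VARCHAR2(1) DEFAULT 'N' NOT NULL,                 -- N = new, P = processing, C = complete
  PAYLOAD CLOB                                              -- message stored as an XML clob
);
SQL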



Downstream Throttling in Oracle Integration Cloud via Parking Lot Pattern

Background
The parking lot pattern is a mature design for storing data in an intermediate stage before processing it into the end system at the required rate. The detailed implementation of the parking lot pattern can be done with a variety of storage technologies, but a database table is strongly recommended for simplicity. In this blog, we will use the parking lot pattern in Oracle Integration Cloud (OIC) to explore a solution for downstream throttling.

Problem Definition
In OIC, downstream throttling comes up when an influx of data can overwhelm slower downstream systems. Some of this can be addressed with the tuning knobs within OIC and WebLogic Server, but the built-in tuning cannot always provide enough capacity to stop flooding the slower system. The parking lot pattern provides a solution for this scenario.

Design Solution
Process the input data/messages in the order they come in.
Each message is parked in storage for x minutes (the parking time) so the system has a chance to throttle the number of messages processed concurrently.
The maximum number of parallel processes can be configured to throttle the outgoing calls.
A set of integrations, connections and database scripts use the current out-of-the-box OIC features.
The solution is generic and can be used with various typed business integrations without modifying the provided integrations.
Error handling covers both system/network errors and bad requests.

Database Schema
The key piece of the parking lot pattern is the database schema in the design pattern. The Message table is explained here (an illustrative sketch of these state transitions appears at the end of this post):
ID (NUMBER): the unique ID/key for the row in the table.
STATUS (VARCHAR): used for state management and logical delete with the database adapter. This column holds one of three values: 1. N - New (not processed); 2. P - Processing (in-flight interaction with the slower system); 3. C - Complete (the slower system responded to the interaction). The database adapter polls for 'N'ew rows and marks a row as 'P'rocessing when it hands it over to an OIC integration.
PAYLOAD (CLOB): the message that would normally be associated with a component is stored here as an XML clob.

Implementation Details
Integration / Details / Sample Integration in Par File / Connections:
Business Front-end Integrations: receive the typed business payload and call the producer with an opaque interface. Sample: EmployeeServiceFront(1.0), OrderServiceFront(1.0). Connections - Invoke: EmployeeService, OrderService.
Producer: receive a new record, create a new row in the group table if it does not exist, and mark the group status as 'N'. Sample: RSQProducer(1.0). Connections - Trigger and Invoke: RSQProducer; Invoke: RSQ DB.
Group Consumer: scheduled to run every x minutes; invokes a message consumer. Sample: RSQGroupConsumer(1.0). Connections - Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
Message Consumer: mark the message status as 'P', invoke the Dispatcher, and delete the message. Sample: RSQMessageConsumer(1.0). Connections - Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
Dispatcher: receive the message, convert the opaque payload to the original typed business payload, and find the business endpoint and invoke it. Sample: RSQDispatcher(1.0). Connections - Trigger and Invoke: RSQDispatcher, OrderService, TestService.

Screen Shots of the Actual Integration Steps
You can deploy this par file. It has the following connections that need configuring and activating.
1. Import the par file into Packages in Oracle Integration.
2. Invoke and trigger connections: initial unconnected status.
3. Configure and activate the database connection, the provided sample database called RSQ DB. An Oracle Autonomous Transaction Processing (ATP) database is used in this scenario (information on deploying an ATP instance in Oracle Cloud can be found here).
4. Trigger and invoke the Message Consumer; in the sample it is called RSQMessageConsumer, and it causes load distribution of calls to the message consumer. It requires the connection URL and the corresponding admin authentication. It processes the active messages of a given group:
Receives the group id/type from the group consumer.
Loads the active messages of the group ordered by sequence-id. The messages have to be at least # (parking time) old.
Loops through the active messages: marks the message status as 'P', invokes the Dispatcher using its opaque interface, and deletes the message.
Calls a stored procedure to: mark the group status as 'C' if there are no active messages, or mark the group status as 'N' if there are new active messages.
5. Trigger the manager interface; in the sample, the connection RQSManager is used to invoke the parking lot pattern interface in Oracle Integration. It currently supports three operations: get configs, update configs, and recover group.
6. The Producer integration, connected to the database, is used to invoke the producer interface; in the sample it is called RSQProducer. It is the entry point of the parking lot pattern:
Receives the resequencing message.
Creates a new row in the group table if it is not already there.
Marks the status of the group as 'N'.
Creates a message in the message table.
7. The Dispatcher is a request/response integration which reconstructs the original payload and sends it to the real backend integration:
Receives the message.
Converts the opaque payload to the original typed business payload.
Uses the group id to find the business endpoint and invokes it synchronously.
8. Business integrations are the real integrations that customers use to process the business messages. They have their own typed interfaces. I used two test servers to demonstrate some posted data.
9. Error handling: recovering from a system/network error. If the problem is caused by a system error such as a networking issue, then after fixing the problem you can recover by resubmitting the failed message consumer instance.
10. Connections and integrations sample after triggers and invokes.

The blog was inspired by the A-Team Chronicles writeup.
Reference:
http://www.ateam-oracle.com/the-parking-lot-pattern
https://blogs.oracle.com/soacommunity/throttling-in-soa-suite-via-parking-lot-pattern-by-greg-mally
https://blogs.oracle.com/integration/ordering-delivery-with-oracle-integration
https://www.enterpriseintegrationpatterns.com/patterns/messaging/Aggregator.html
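For illustration only, the state management described for the STATUS column boils down to transitions like the ones below. The actual SQL issued by the database adapter and the stored procedures in the sample differ (and the message consumer ultimately deletes dispatched rows), so treat this strictly as a picture of the N / P / C lifecycle; row id 101 is an example value.

sqlplus admin/'<password>'@<atp_service_name> <<'SQL'
-- new messages waiting to be picked up
SELECT ID, PAYLOAD FROM MESSAGE WHERE STATUS = 'N' ORDER BY ID;
-- logical delete: mark a picked-up row as in-flight
UPDATE MESSAGE SET STATUS = 'P' WHERE ID = 101;
-- once the slower downstream system has responded
UPDATE MESSAGE SET STATUS = 'C' WHERE ID = 101;
COMMIT;
SQL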


One week to the start of Oracle Open World 2019 – Are You Ready?

One week from today, we will kick off Oracle Open World 2019 in San Francisco. We hope the information below will be helpful while you prepare your schedule for the week.

The Application Integration Program Guide lists the Oracle Integration sessions during #OOW19. A small selection is also listed below:

PRO5871 - Oracle Integration Strategy & Roadmap - Launch Your Digital Transformation
Monday, September 16, 12:15 PM - 01:00 PM | Moscone South - Room 156C
Join this session to hear about exciting innovations coming to Oracle Integration. See a live demonstration of next-generation machine learning (ML) integration enhancements and robotic process automation, all seamlessly connected into a hybrid SaaS and on-premises integration. Learn how customer Vertiv successfully delivered its digital transformation with Oracle Integration Cloud by connecting Microsoft Dynamics CRM, Oracle E-Business Suite, Oracle PRM, Oracle ERP Cloud, and Oracle HCM Cloud for real-time collaboration and faster business agility. Jumpstart your future today.

PRO5873 - Oracle Integration Roadmap
Wednesday, September 18, 12:30 PM - 01:15 PM | Moscone South - Room 156B
In this session, explore the product roadmap for Oracle Integration, including new and exciting initiatives such as AI/ML-based capabilities for recommending best next actions for integration and process, intelligent recommendations on mappings, a new UI with recipes for integration and process, anomaly detection, and enhanced connectivity to Oracle and third-party apps. This session also covers new process automation capabilities such as a robotic process automation adapter (RPA-UiPath) and ML-driven case management process execution.

CAS2657 - Compliance and Risk Management Realized with Analytics and Integration Services
Monday, September 16, 10:00 AM - 10:45 AM | Moscone South - Room 155B
In modern global companies it is important to make sure that customers, suppliers, and (prospective) employees are not on a sanctions list. Of course, this can be checked manually at onboarding, but what if a relation is added to a list after onboarding, while you are in business with them? Skanska and eProseed built a solution that fetches data from source systems and matches it against data from the Dow Jones Watchlist. This is done daily, minimizing the risk and increasing efficiency through automation. In this session, see the solution and learn about the benefits of this cloud solution for Skanska, as well as the lessons learned.

PRO5805 - Oracle Integration: Monitoring and Business Analytics on Autopilot
Wednesday, September 18, 04:45 PM - 05:30 PM | Moscone South - Room 156B
Ever wondered how the crucial connections and processes in your environment are behaving? In this session, learn to monitor and track your integrations and processes in real time. It is not enough to know things are running smoothly; business analytics are driving the evolution and growth of every industry as never before. Today's competitive markets demand that stakeholders be able to understand, monitor, and react to changing market conditions in real time, and business owners require more control and expect more visibility over their processes. This session showcases how operational monitoring empowers IT while Integration Insight empowers executives with real-time insight into their business processes, allowing IT and executives to take immediate action.
Hands-on Labs
If you want to do more than listen to presentations and get your hands on Oracle Integration, sign up for one of our two hands-on labs:
HOL6041 and HOL6043 - Connect Applications, Automate Processes, Gain Insight with Oracle Integration
Monday, September 16, 08:30 AM - 09:30 AM | Moscone West - Room 3022A and Wednesday, September 18, 09:00 AM - 10:00 AM | Moscone West - Room 3022A
These two hands-on labs provide a unique opportunity to experience Oracle Integration. Learn how to enhance the process and connectivity capabilities of CX, ERP, and HCM applications with Oracle Integration Cloud. Create a forms-driven process to extend existing applications, discover how to synchronize data by integrating with other applications, and learn how to collect business metrics from processes and integrations and collate them into a dashboard that can be presented to business owners and executives. This session explores the power of having a single integrated solution for process, integration, and real-time dashboards delivered by Oracle Integration. Discover how the solution provides business insight by collecting and collating business metrics from your processes and integrations and presenting them to your business owners and executives.

Demos
In between sessions, consider a stroll to The Exchange in Moscone South at the Exhibition Level (Hall ABC) and stop by our demo pods to see real-world examples of how Oracle Integration future-proofs your digital journey with pre-built application adapters, simple and complex integration recipes, and process templates. Stop by to discuss how integration is more than connecting applications; it is also about extending applications in a minimally invasive fashion. Visit us to see how to gain visual business insight from your integration flows. Oracle Integration Product Management and Engineering will be there to answer your questions and brainstorm about your integration use cases.
We have two demo pods at The Exchange:
INT-008 - Connect Applications, Automate Processes and RPA Digital Workforce, Gain Insight
INT-002 - Leverage Oracle Integration to Bring Operations and Business Insight Together
And an additional demo pod at Moscone West - Level 2 (Applications):
ADM-003 - Connect Cloud and on-prem Applications with Adapters, B2B, MFT, SOA and hybrid solutions

Oracle Integration and Digital Assistant Customer Summit
Last, but not least, there is still time to register for the Oracle Integration and Digital Assistant Customer Summit on 19-September-19 at the W Hotel San Francisco, followed by our Customer Appreciation Dinner. For more information, visit You're Invited: Oracle Integration and Digital Assistant Customer Summit at Oracle OpenWorld 2019.

We are all looking forward to seeing you at Oracle Open World 2019 in San Francisco!



Bulk Recovery of Fault Instances

One of the most common requirements of enterprise integration is error management. It is critical for customers to manage recoverable errors in a seamless and automated fashion.

What are Recoverable Fault Errors?
All faulted instances in asynchronous flows in Oracle Integration Cloud Service are recoverable and can be resubmitted. Synchronous flows cannot be resubmitted. You can resubmit errors in the following ways:
Single failed message resubmissions
Bulk failed message resubmissions
Today, an operator can manually resubmit failed messages individually from the integration console monitoring dashboard. In this blog we are going to focus on how to create an integration flow that can be used to automatically resubmit faulted instances in bulk.

Here are the High Level Steps
Here are the steps to create an integration flow that implements the automated bulk recovery of errors. Note that we also provide a sample that is available for download.

STEP 1: Create a New Scheduled Orchestration Flow

STEP 2: Add Schedule Parameters
It is always good practice to parameterize variables so you can configure the flow based on business need by overriding them. Here are the schedule parameters configured in this bulk-resubmit fault instances integration sample:
strErrorQueryFilter: the fault query filter parameter. This defines which error instances are to be selected for recovery. Valid values: timewindow: 1h, 6h, 1d, 2d, 3d, RETENTIONPERIOD (default is 1h); code: integration code; version: integration version; id: error id (instance id); primaryValue: value of the primary tracking variable; secondaryValue: value of the secondary tracking variable. See the API documentation.
strMaxBatchSize: maximum number of error instances to resubmit per run (default 50). This limits the number of recovery requests to avoid overloading the system.
strMinBatchSize: minimum number of error instances to resubmit per run (default 2). This defers running the recovery until the given number of errors has accumulated.
strRetryCount: maximum number of retry attempts for an individual error instance (default 3). This prevents repeatedly resubmitting a failed instance.
strMaxThershold: threshold number of errors at which to abort recovery and notify the user (default 500). This allows resubmission to be skipped if an excessive number of errors has been detected, indicating that some sort of user intervention may be required.

STEP 3: Update the Query Filter to Include Only Recoverable Errors
concat(concat("{",$strErrorQueryFilter,",recoverable:'true'"),"}")

STEP 4: Query All Recoverable Error Instances in the System Matching the Query Filter
GET /ic/api/integration/v1/monitoring/errors?q=strErrorQueryFilter

STEP 5: Determine the Recovery Action
STEP 5a: If the total number of recoverable error instances found is more than the maximum threshold (totalResults > strMaxThershold), send a notification. In this case there may be too many errors, indicating a more serious problem; it is best practice to review manually and, once the issue is fixed, temporarily override the strMaxThershold value to allow recovery of the failed instances.
STEP 5b: Else, if no recoverable error instances are found (totalResults <= 0), end the flow.
STEP 5c: Else, continue to resubmit the found errors, up to strMaxBatchSize, in a single batch.
NOTE: We limit the number of errors resubmitted in a single batch to avoid overloading the system; we suggest a limit of 50 instances.
STEP 6: Query Recovery Errors (Limited to the Batch Size)
GET /ic/api/integration/v1/monitoring/errors?q=strErrorQueryFilter&limit=strMaxBatchSize&offset=0

STEP 7: Filter Results to Avoid Too Many Retries
STEP 7a: If the totalResults found < strMinBatchSize, skip the batch resubmit and stop the flow.
STEP 7b: Else, if totalResults > strMinBatchSize, invoke the REST API to submit the fault error IDs (the bulk resubmit error API). Here we can filter out the fault instances that have already been retried but failed again, as shown below:
- Drag and drop a for-each over items
- Add an if function from the Mapper on top of items
- Add a <= condition element
- Add the left operand = retryCount from the source
- Add the right operand = strMaxRetryAttempt from the variable
retryCount <= $strMaxRetryAttempt

STEP 8: Resubmit Errors
POST /ic/api/integration/v1/monitoring/errors/resubmit

STEP 9: Check resubmitRequested = true / false

STEP 10: Send an Email Notification with the Recovery Submit Status Details, as Shown Below (Optional)
You can model the integration to invoke a process (using the OIC process capability for human interaction and long-running tasks) or take any action based on the resubmit response via a child flow or another 3rd-party integration. This may be to post the resubmit information to some system for future analysis and review. One can utilize the local invoke feature to model the parent-to-child flow hand-off.

STEP 11: Activate the Integration

STEP 12: Schedule the Integration to Run Every X Period of Time
One can also run it on demand with the Submit Now option.

Email Notification
Here is the email notification one would receive:
Case 1: When the bulk resubmit succeeds, an email is sent as below (sample).
Case 2: When there are too many fault instances, an alert email is sent as below (sample).

OK, by now you have completed development of the integration and scheduled it to run on your Integration Cloud instance.

How to Customize your Integration to Run Recovery for a Specific Integration or Connection
Because different integration or error types may have different recovery requirements, you may want to have different query parameters and/or schedule intervals. For this you need to clone the above integration and override the schedule parameters to query only the specific fault instances for a given integration or connection type, based on the query filter, so you can keep a separate instance running for a specific business use case. Here is how you do it:
STEP 1: Clone the above integration.
STEP 2: Update the schedule parameter strErrorQueryFilter, for example:
timewindow : '3d', code : 'SC2RNSYNC', version : '01.00.0000'
code : 'SC2RNSYNC', version : '01.00.0000', connection :'EH_RN'
timewindow : '3d', primaryValue : 'TestValue'
You may also want to modify other parameters or even modify the integration to take alternative actions.
STEP 3: Schedule it to run.
This gives you the ability to configure the bulk resubmit for a given set of integrations at the integration level or the connection level.

Sample Integration (IAR) - Download Here

Summary
This blog explained how to automatically resubmit errored instances, allowing control of the rate of recovery and the type of errors to recover, and showed how to customize the recovery integration by cloning it and modifying its parameters. We hope that you find this a useful pattern for your integrations. Thank You!
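For reference, the monitoring endpoint used in the query steps above can also be exercised directly from the command line. The following is only an illustrative sketch: host and credentials are placeholders, the filter follows the query syntax shown earlier, and the -g flag stops curl from interpreting the braces. The request body expected by the resubmit endpoint is not shown here; check the Oracle Integration REST API documentation for its exact format.

# list recoverable errors from the last hour (filter syntax as used by strErrorQueryFilter)
curl -g -u <user>:<password> -H 'Content-Type:application/json' \
  "https://<oic-host>/ic/api/integration/v1/monitoring/errors?q={timewindow:'1h',recoverable:'true'}&limit=50&offset=0"

# matching error instances can then be resubmitted via:
#   POST https://<oic-host>/ic/api/integration/v1/monitoring/errors/resubmit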



CICD Implementation for OIC

This blog shares information on the CICD implementation for OIC and the instructions to set up and run the CICD scripts in a Linux machine environment. In this implementation, bash shell scripts are used to support backing up integration artifacts from the source environment (OIC POD) to the repository (Bitbucket). Shell scripts are also used to retrieve the saved integration artifacts from the repository and deploy the integrations to a target environment (another OIC POD).

The following features are currently supported in this implementation:
1) Export integrations and save the artifacts (IARs, connection json files) to the remote repository:
Allow the user to export either all integrations, or only one or more integration(s), from the source OIC environment.
Commit/push the exported integration artifacts to the repository.
Provide summary reports.
2) Pull integration artifacts from the repository and either import or deploy the integrations to the target OIC environment:
Allow the user to select one or more integrations to import only, or to deploy the integrations to a target environment. (To deploy an integration means to import the IAR, update the connections and activate the integration.)

Pre-requisites
The following are the required setups on your Linux machine:
1) JDK 1.8 Installation
Make sure to update your JDK to version 1.8 or higher.
2) Jenkins Installation
Ensure your Linux machine has access to a Jenkins instance. Install the following Jenkins plugins, which are required by the CICD scripts:
Parameterized Trigger plugin
Delivery Pipeline plugin (version 1.3.2)
HTML Publisher plugin
3) Git Client
Make sure to use Git client 2.7.4 or later on your Linux machine.
4) Bitbucket/Github (repository)
Do the following to have access to the remote repository:
Set up SSH access to the remote repositories (Bitbucket/Github). A Bitbucket server administrator can enable SSH access to the Git repositories in Bitbucket server for you. This allows your Bitbucket user to perform secure Git operations between your Linux machine and the Bitbucket server instance.
Note: a Bitbucket repository was used with this implementation.
Create the local repository by cloning the remote repository. To create the local repository, you can run the commands below from your <bitbucket_home>, where <bitbucket_home> is where you want your local repository to reside (i.e. /scratch/bitbucket):
cd <bitbucket_home>
git clone <bitbucket_ssh_based_url>
5) Jq
You can download jq (a JSON parser) from: https://stedolan.github.io/jq/download/
Once downloaded, run the following commands:
rename 'jq-linux64' to 'jq'
chmod +x jq
copy the 'jq' file to /usr/bin using sudo
Note: at a minimum, the Git client and the jq utility must be installed on the same server where you are running the scripts. Jenkins and the Bitbucket repository can be on remote servers.

Scripts setup
Perform the following setup on your Linux machine to run the bash shell scripts:
Create a new <cicd_home> directory on your local Linux machine (i.e. /scratch/cicd_home).
Note: <cicd_home> is where all the CICD related files will reside.
Download the oic_cicd_files.zip file to your <cicd_home> directory. Run unzip to extract the directories and files.
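Pulled together, the prerequisite and setup steps above can be run roughly as follows. The repository URL and paths are illustrative placeholders only; substitute your own <bitbucket_home>, <cicd_home> and clone URL.

# clone the remote repository into the local repository location
cd /scratch/bitbucket                                       # <bitbucket_home>, illustrative
git clone git@bitbucket.example.com:myteam/oic-cicd.git     # <bitbucket_ssh_based_url>, illustrative

# install jq (JSON parser) after downloading it
mv jq-linux64 jq
chmod +x jq
sudo cp jq /usr/bin

# set up the CICD scripts
mkdir -p /scratch/cicd_home && cd /scratch/cicd_home        # <cicd_home>, illustrative
# download oic_cicd_files.zip into this directory, then:
unzip oic_cicd_files.zip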
Once unzipped, you should see the file structure below under your <cicd_home> directory.
From <cicd_home>, run the command below to ensure that you are using git version 2.21.0 or later:
> git --version

For CI (Continuous Integration)
Two shell scripts are provided for the CI process:

export_integrations.sh
This script exports integrations (IARs along with the corresponding connection json files) from the source OIC environment. The script allows the user either to export ALL integrations, or to export one or more specified integrations. For exporting one or more integrations, under the <cicd_home>/01_export_integrations directory, edit the config.json file and update it to include the integration identifier (code) and the version number that you want to back up, one integration per line, in the JSON format below:
[
  { "code": "<integration1_Id>", "version": "<integration1_version>" }
  { "code": "<integration2_Id>", "version": "<integration2_version>" }
  ..
]
For example:
[ { "code": "SAMPL_INCID_DETAI_FROM_SERVI_CLO", "version": "01.00.0000" } ]
Note: the above steps are not required if you want to export ALL integrations; in that case the config.json file is created automatically by the script.

push_to_repository.sh
This script uses the Git utility to commit and push integration artifacts to the remote repository. This allows developers to save the current working integrations and to fall back to previous versions as needed.

For CD (Continuous Delivery)
Two shell scripts are provided for the CD process:

pull_from_repository.sh
This script pulls the integration artifacts from the remote repository and stores them in a local location.

deploy_integrations.sh
This script deploys integration(s) to the target OIC environment. The user has the option to only import the integrations, or to deploy the integrations (import the IARs, update the connections and activate the integrations).

Perform the following steps to either import or deploy integrations:
1) Under the <cicd_home>/04_Deploy_Integrations/config directory, edit the integrations.json file to include the following information for the integrations to be imported/deployed:
Integration identifier (code) and the integration version number
Connection identifier (code) of the related connections used by the integration
For example:
{ "integrations":
    [
      {
        "code":"SAMPL_INCID_DETAI_FROM_SERVI_CLO",
        "version":"01.00.0000",
        "connections": [
          { "code":"MY_REST_ENDPOINT_INTERFAC" },
          { "code":"SAMPLE_SERVICE_CLOUD" }
        ]
      }
    ]
}
2) Prior to deploying the integration, update the corresponding <connection_id>.json file to contain the expected values for the connection (i.e. WSDL URL, username, password etc).
For example, SAMPLE_SERVICE_CLOUD.json contains:

{
     "connectionProperties":[
          {
             "propertyGroup":"CONNECTION_PROPS",
             "propertyName":"targetWSDLURL",
             "propertyType":"WSDL_URL",
             "propertyValue":"<WSDL_URL_Value>"
           }
       ],
       "securityPolicy":"USERNAME_PASSWORD_TOKEN",
       "securityProperties":[
            {
                 "propertyGroup":"CREDENTIALS",
                 "propertyName":"username",
                 "propertyType":"STRING",
                 "propertyValue":"<user_name>"
            },
            {
                  "propertyGroup":"CREDENTIALS",
                  "propertyName":"password",
                  "propertyType":"PASSWORD",
                  "propertyValue":"<user_password>"
             }
          ]
     }

Import Jenkins Jobs

While you can create the Jenkins jobs manually, you also have the option to import them using the jenkins_jobs.zip file.

To import the Jenkins jobs:

1) Download and unzip the jenkins_jobs.zip file to your <jenkins_home>/.job directory, where <jenkins_home> is the location where your Jenkins instance is installed.

2) Restart the Jenkins server.

3) Once the Jenkins server is restarted, log in to Jenkins (UI) and:
- Update all parameters for the four jobs below as per your environment:
  01_Export_Integrations_and_Push
  02_Pull_Integrations_and_Deploy
  02a_Pull_Integrations_from_Repository
  02b_Deploy_Integrations_to_Target
  For example: GIT_INSTALL_PATH: /usr/local/git
- Update the 'Run_Location' parameter in all other child jobs as per your environment (where the script used by each child job is located). For example, in the configuration of the Export_Integrations job: RUN_LOCATION: <cicd_home>/01_export_integrations, where <cicd_home>/01_export_integrations is the full path to where the corresponding script (export_integrations.sh) resides. Note: make sure the path does not end with '/'.
- For the other repository-related child jobs (e.g. Pull_from_Repository, Push_to_Repository, etc.), also update the GIT_INSTALL_PATH parameter to where your Git is run from.

NOTE: If there is no need to update the connection information for the integrations, the job 02_Pull_Integrations_and_Deploy can be used to pull the integration artifacts from the repository and also deploy the integrations. If the connection information needs to be updated (e.g. user name, user password, WSDL URL, etc.), then:
- First run 02a_Pull_Integrations_from_Repository to pull the integration artifacts from the repository.
- Update the connection JSON files to contain the relevant information.
- Run 02b_Deploy_Integrations_to_Target to deploy the integrations.

Create Jenkins Pipeline Views

To create a pipeline view, ensure the Delivery Pipeline plugin is installed as mentioned earlier. Perform the following steps:

1) Log in to Jenkins.
2) From the Jenkins main screen, click '+' to add a new view. Enter the view name: 01_OIC_CD_Pipeline_View. Go under Pipelines and click Add to add components:
   Component Name: OIC_CI_Pipeline (or any relevant name for the view)
   Initial Job: 01_Export_Integrations_and_Push (this is the root job in the pipeline)
   Click Apply, then OK.
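Since jq is already a prerequisite of this implementation, one way to script the connection updates (rather than editing each <connection_id>.json by hand) is to patch the propertyValue fields with jq. This is only an illustrative sketch; the WSDL URL, user name and password shown are placeholders, and the CICD scripts themselves may handle this differently:

# Illustrative only: set the WSDL URL and credentials in a connection JSON file with jq.
jq '(.connectionProperties[] | select(.propertyName=="targetWSDLURL") | .propertyValue) = "https://myhost.example.com/service?wsdl"
  | (.securityProperties[]  | select(.propertyName=="username")      | .propertyValue) = "my_integration_user"
  | (.securityProperties[]  | select(.propertyName=="password")      | .propertyValue) = "my_integration_password"' \
  SAMPLE_SERVICE_CLOUD.json > SAMPLE_SERVICE_CLOUD.json.tmp \
  && mv SAMPLE_SERVICE_CLOUD.json.tmp SAMPLE_SERVICE_CLOUD.json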
Select the following options:
- Enable start of new pipeline build
- Enable rebuild
- Show total build time
- Theme (select 'Contrast')
(Keep the default values for all other options.)

The following view will be available for your pipeline job. (Create the CD view using the same steps above.)

Reports

Reports are available for the Export_Integrations, Push_to_Repository and Deploy_Integrations jobs. For the reports to display properly, we need to relax the Content Security Policy rule so that the style code in the HTML file can be executed.

Relax Content Security Policy rule

To relax this rule, do the following in Jenkins:
1. Go to Manage Jenkins / Manage Nodes.
2. Click settings (gear icon), then click Script Console on the left and type in the following command:
   System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "")
3. Click Run. If you see the 'Result:' output below the "Result" header, the protection has been disabled successfully; otherwise, click Run again.
4. Restart the Jenkins server.

To view the report for the Export_Integrations job, for example, click the OIC Export Integrations Report link after the job has run.

Steps to create a Report

1) From the selected job screen (e.g. Export_Integrations), click Configure.
2) Under Post-build Actions, add Publish HTML reports for the job.
3) Use the following parameters (as an example):
   HTML directory to archive: ./
   Index page[s]: <the created html file>
   Report title: <enter a proper title for the report>
4) Click Apply, then Save.

Execute Jenkins Jobs

To run the CICD scripts, execute the following Jenkins pipeline jobs:

For CI: 01_Export_Integrations_and_Push
Run this job to execute the CI scripts. Wait for the parent job and its downstream jobs to complete, then click the child jobs, Export_Integrations or Push_to_Repository, and the Report link to see the results.

For CD:
- If there is no need to update connections, run 02_Pull_Integrations_and_Deploy. The report is available under the Jenkins job Deploy_Integrations screen.
- If connection information needs to be updated prior to deploying the integrations, first pull the integration artifacts from the repository with 02a_Pull_Integrations_from_Repository, update the connection JSON file(s) as needed, then deploy the integrations to the target OIC environment by running 02b_Deploy_Integrations_to_Target. The report is available under the Jenkins job Deploy_Integrations_to_Target screen.
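For context on what the CI export step ultimately does, exporting an integration archive from OIC is a call against the Oracle Integration REST API. The sketch below is a hedged illustration only: the host, credentials and integration identifier are placeholders, and you should verify the exact resource path against the REST API documentation for your OIC version rather than treating this as the scripts' actual code.

# Illustrative sketch: export one integration (IAR) from the source OIC environment.
OIC_HOST=https://myoic.example.com
OIC_USER=cicd.user@example.com
OIC_PASSWORD='********'

# The integration id is of the form <IDENTIFIER>|<version>; the pipe is URL-encoded as %7C.
curl -s -u "$OIC_USER:$OIC_PASSWORD" \
  -o SAMPL_INCID_DETAI_FROM_SERVI_CLO_01.00.0000.iar \
  "$OIC_HOST/ic/api/integration/v1/integrations/SAMPL_INCID_DETAI_FROM_SERVI_CLO%7C01.00.0000/archive"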


Update library in continuous manner even after being consumed by integration

The update library feature provides the much awaited update functionality on registered libraries. As part of a library update, the user can add new functions, remove unused functions, or modify the logic of existing functions.

Feature Flag
This feature is available when the oic.ics.console.library.update feature flag is enabled.

Here is how library update works. Let's consider a simple Math library that defines a basic add function that takes two parameters and is used in an integration. Suppose you want to add other math functions like subtract, multiply and divide, and change the way the add operation is executed. You may update the registered library with a new JS file that has these new functions using the Update menu on the library list page. Upload the new JS file using the update library dialog.

When attempting to update the library with new code, the system validates the new library file and ensures it meets the following conditions:
- Function signatures in the new file being uploaded must match the signatures of functions in the existing library that are used in integrations.
- You may add new functions, but removing a function that is used in integrations results in rejection of the new file.

If the new library file adheres to these conditions, the library is updated and the library edit page is displayed for further configuration changes. Please note that if the returns parameter of a function used in an integration was changed in the updated library, the system does not flag an error, but it invalidates the downstream mapping in integrations, which must then be re-mapped. Deactivate and activate the integration for the changes to take effect in integrations that use the updated library.

The following is an example where the validation conditions are satisfied and the system accepts the uploaded library file. The add function in math.js is used in integrations, so the signature of this function in the updated library file is unchanged even though the add function definition has changed. Note that the containing library file name is also part of a function's signature, so the file name is unchanged in the updated library. The updated library may contain other function definitions.

The example below illustrates a case where validation fails and the uploaded library file is rejected: because the signatures of functions in the new library file do not match those of the library in the system, the new library file is rejected.
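As a concrete illustration of the passing case, an updated math.js might look like the sketch below. The function and parameter names are assumptions for illustration only; the key point is that add keeps its existing signature while new functions are added and its internal logic changes:

// math.js -- updated library file (illustrative sketch only)

// add() is already used in integrations, so its signature must not change.
// Its internal logic, however, may change freely.
function add(param1, param2) {
  return Number(param1) + Number(param2);
}

// New functions can be added; they are not yet referenced by any integration.
function subtract(param1, param2) {
  return Number(param1) - Number(param2);
}

function multiply(param1, param2) {
  return Number(param1) * Number(param2);
}

function divide(param1, param2) {
  return Number(param1) / Number(param2);
}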


Migrating from ICS4SaaS to OIC4SaaS

Introduction

Oracle Integration Cloud Service for Oracle SaaS (aka ICS4SaaS) is a version of Oracle's Integration Cloud Service (ICS) targeted for use with Oracle SaaS products. The ICS4SaaS service has been sold with Oracle SaaS products and has appeared on the SaaS price list. As this service is not available on Oracle's OCI infrastructure, Oracle provides a migration path for ICS4SaaS customers to migrate their workloads to OCI.

Oracle introduced a new offering called Oracle Integration for Oracle SaaS (aka OIC4SaaS). This offering is based on the Oracle Integration (OIC) service, which runs exclusively on the OCI infrastructure. The migration path is similar (but not identical) to the migration path for the corresponding tech (PaaS) SKUs, namely the migration of ICS to OIC.

SKUs for ICS for SaaS

The SKUs for ICS for SaaS, along with list prices, are given in the table below:

| | Monthly subscription price | Metric | Service includes per month | Part # |
| Oracle Integration Cloud Service for Oracle SaaS | 850 | Hosted Environment | 1 Hosted Env, 10GB in and out per day | B87181 |
| Additional non-Oracle SaaS connection | 1000 | Hosted Connection | 1 connection of choice | B87182 |
| Additional 10GB per day | 1000 | Each | 10GB in and out per day | B87183 |
| Oracle Integration Cloud Service for Oracle SaaS Midsize | 585 | Hosted Environment | - | B87609 |
| Additional non-Oracle SaaS Midsize connections | 650 | Hosted Connection | - | B87610 |
| Additional 10GB per day Midsize | 900 | Each | - | B87611 |

Note that for the purposes of migration to OIC for SaaS, the midsize SKUs above (last 3 rows) behave the same as their corresponding ICS4SaaS SKUs (first 3 rows).

SKUs for OIC for SaaS

The SKUs for OIC for SaaS, along with list prices, are given in the table below:

| | Monthly subscription price | Metric | Part # |
| Oracle Integration Cloud Service for Oracle SaaS – Standard | 600 | 1 Million messages / month | B91109 |
| Oracle Integration Cloud Service for Oracle SaaS – Enterprise | 1200 | 1 Million messages / month | B91110 |

Note that for both ICS for SaaS and OIC for SaaS, the same restriction applies: each integration must have an endpoint in an Oracle Cloud SaaS application. For further details on OIC for SaaS, refer to the Oracle Fusion Cloud Service Description document.

Migration paths

Oracle allows you to choose whether to migrate from all flavors of ICS to either the OIC subscription offering (OIC for SaaS) or to OIC under Universal Credits. In fact, all 4 paths below are supported:

| Source | Target | Comments |
| ICS | OIC | See the migration documentation here |
| ICS for SaaS | OIC | Migration procedures are the same as ICS -> OIC above |
| ICS for SaaS | OIC for SaaS | This migration path is the focus of this document |
| ICS | OIC for SaaS | Migration procedures are the same as ICS for SaaS -> OIC for SaaS |

Why are migration procedures different for OIC for SaaS?

When migrating ICS to OIC, you need to create and use OCI cloud storage. This storage is used to hold the exported metadata from ICS. It provides a secure mechanism to store the metadata of the entire ICS instance, which includes security credentials to your SaaS and other applications and systems. An OIC for SaaS account is dedicated to OIC: customers pay a subscription price for OIC, and other services which are part of Universal Credits (including object storage) cannot be provisioned. Therefore, the migration procedures are different. Oracle provides and has tested two options for migrating ICS for SaaS to OIC for SaaS: piece-meal migration and wholesale migration. The preferred option is wholesale migration.
Migration Option #1: Piece-Meal Migration

In this option, you migrate your integrations one by one via the export and import features of ICS/OIC. Oracle provides the ability to export and import individual integrations (and lookups); see exporting and importing components in the Oracle documentation. Using this capability, you can export each of your integrations from ICS4SaaS and then import them into OIC4SaaS. Note that the export does not include security information such as the login/password to your end applications, so after the import you must add the security information.

This option obviates the need for OCI cloud storage, as the export can be saved to a local file. However, you will be required to export/import each integration individually and to re-add security credentials. Consider this option only when you have a relatively small number (<10) of integrations to migrate and you do not want to obtain a Universal Credit account.

Migration Option #2: Wholesale Migration

In this option, you migrate all your integrations, along with all metadata and security information, in a single bulk operation. This option does depend on the availability of OCI Cloud Storage. Therefore, you will need to separately procure a Universal Credit account and make the cloud storage in this account available for the migration process. This Universal Credit account is in addition to your OIC4SaaS environment. The rest of the migration path is the same as the migration from ICS to OIC.

If you already have a Universal Credit account, you can use that. If not, you can obtain one; in fact, Oracle offers free 30-day trials which can be leveraged for this purpose. Even if you choose a paid account, cloud storage is relatively inexpensive and only required for the duration of the migration. After migration is complete, you can delete the cloud storage. If you use a 30-day trial for migration, you can even delete the account after migration, though we hope you will decide to use it and take advantage of the rich services and capabilities available there.

NOTE: Wholesale migration is the preferred option. Consider gaining access to a Universal Credit account (including a 30-day trial) to enable wholesale migration.

What if I have multiple ICS4SaaS instances?

Chances are that you have a Stage and a Production instance, and perhaps other instances. As with the ICS to OIC migration, Oracle recommends a 1-for-1 migration when you have multiple ICS4SaaS instances. That is, each ICS4SaaS instance gets migrated to its own OIC4SaaS instance.

What if I have multiple ICS4SaaS accounts?

You can request multiple OIC4SaaS accounts to match your ICS4SaaS environment. It is also possible to consolidate your OIC4SaaS instances into a single account. Note that each instance then typically shares the same user base, as they all share the same IDCS tenancy. However, Oracle is rolling out the ability for OIC (and OIC4SaaS) to leverage secondary IDCS stripes to allow each of your instances to have a unique set of OIC service administrators and instance administrators.

When migrating to OIC4SaaS, should I select Standard or Enterprise?

If your integrations are all cloud to cloud, the Standard version should suffice. If you require integration using one of the on-premises adapters, or if you want to take advantage of Process Automation (which is not available in ICS4SaaS), then you should choose the Enterprise version.
Both versions offer the same pricing model, based on the number of 1-million-messages-per-month units you require.

How does pricing compare?

ICS4SaaS and OIC4SaaS are offered in two completely different pricing models. ICS4SaaS was sold per connection, whereas OIC4SaaS is sold by message volume (one unit = 1M messages per month). OIC4SaaS generally has more favorable pricing than ICS4SaaS for most customers, though this depends on the specific customer's integration requirements and usage patterns. For migration scenarios, Oracle will work with customers to ensure they pay the same price (or less) for equivalent functionality in OIC4SaaS. Note that the ICS4SaaS additional non-Oracle SaaS connections SKU (B87182) does not have an equivalent SKU in OIC for SaaS; the base SKUs for OIC for SaaS (B91109, B91110) include non-Oracle SaaS connections, and no separate purchase is required.

I am ready to migrate. How do I get started?

Contact your Oracle sales representative to help guide you through the process. Migration to OCI includes a commercial migration component, so that you will start paying for OIC4SaaS while no longer paying for ICS4SaaS. Oracle provides a 4-month window for migrations, during which your ICS4SaaS instances remain available (at no charge) to give you ample time to perform the migration and associated testing. Oracle continues to invest in enhancements to the migration processes, so please be sure to ask your Oracle sales representative about the latest available tooling which can be applied to your specific environment.


How to use File Reference in Stage File

The Stage File action can read, write, zip, unzip and list files in a staged location known to Oracle Integration. A file reference can be used to utilize the local file processing capabilities provided by the Stage File action.

Advantages: any upstream operation that provides a file reference can be processed directly, which simplifies the orchestration. For example, the REST connection allows downloading an attachment into the OIC/.attachment folder; it provides a file reference but does not provide a file name or directory.

OIC/ICS operations that provide references:
- Attachment Reference (REST Adapter: attachments)
- Stream Reference (REST invoke response)
- MTOM (SOAP invoke response)
- FileReference (FTP)
- Base64FileReference (encodeString) function

This is how the Stage File action Configure Operation page looks for each operation:

Read Entire File operation: a 'Configure File Reference' option is available, defaulted to No. On selecting Yes, 'Specify the File Name' and 'Specify the Directory to read from' are replaced with a 'Specify the File Reference' field.

Read File in Segments operation: a 'Configure File Reference' option is available, defaulted to No. On selecting Yes, 'Specify the File Name' and 'Specify the Directory to read from' are replaced with a 'Specify the File Reference' field.

Unzip File operation: a 'Configure File Reference' option is available, defaulted to No. On selecting Yes, 'Specify the Zip File Name' and 'Specify the Zip File Directory' are replaced with a 'Specify the File Reference' field.

To specify the file reference, you can click the Expression Builder icon to build an expression.

This is how a file reference can be used to utilize the local file processing capabilities provided by the Stage File action.


Support for zip/json/xml based schema in Stage File action

The Stage File action can read, write, zip, unzip and list files in a staged location known to Oracle Integration. Previously, only a CSV file could be used to describe the structure of the file in the Stage File action. Now, an XML or JSON document can also be used to describe the structure of the file contents, and a zip file containing multiple schemas can also be uploaded.

The Schema Options page is displayed if you selected a read or write Stage File operation. Previously, the Schema Options page offered the following choices: create a new schema from a CSV file, or select an existing schema from the file system.

Now, the Stage File action Schema Options page offers the following choices:

Read Entire File and Write File operations:
- Sample delimited document (e.g. CSV)
- XML schema (XSD) document
- Sample XML document (Single or No NameSpace)
- Sample JSON document

Read File in Segments operation:
- Sample delimited document (e.g. CSV)
- XML schema (XSD) document
- Sample XML document (Single or No NameSpace)

Based on your selection on the Schema Options page, the Format Definition page enables you to select the file that describes the structure. This is how the Stage File action Format Definition page looks for each choice:

Sample delimited document (e.g. CSV) choice

XML schema (XSD) document choice
- In the case of the Read Entire File and Write File operations, you can upload a ZIP file that includes multiple schemas in which one schema has a reference to another schema file (for example, schema A imports schema B).
- In the case of the Read File in Segments operation, you cannot upload a ZIP file that includes multiple schemas in which one schema references another schema file (for example, schema A imports schema B). The Read File in Segments option does not work when it reads an element that belongs to schema B.
- A new field, 'Select Repeating Batch Element', is available for selecting the repeating element when Stage Read is configured with segmentation.

Sample XML document (Single or No NameSpace) choice

Sample JSON document choice (this choice is not available for the Read File in Segments operation)

This is how you can upload a schema in the Stage File action using not only a CSV file but also an XML, XSD or JSON file.
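To make the multi-schema ZIP scenario concrete, here is a minimal sketch of two schema files where schema A imports schema B; the namespaces and element names are invented purely for illustration. Both .xsd files would be zipped together and uploaded for a Read Entire File or Write File operation:

<!-- schemaB.xsd: the referenced schema (illustrative names only) -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/b">
  <xs:element name="LineItem" type="xs:string"/>
</xs:schema>

<!-- schemaA.xsd: imports schema B and reuses its element -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/a"
           xmlns:b="http://example.com/b">
  <xs:import namespace="http://example.com/b" schemaLocation="schemaB.xsd"/>
  <xs:element name="Order">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="b:LineItem" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>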


Automating Supplier Data Synchronization using Oracle Integration

Co-Author: Kanaka Vijay Kumar Korupolu, Principal Product Manager

Use Case

Oracle SaaS is a modular suite of applications that includes Financials, HCM, Procurement, Projects, SCM offerings, etc. The Procurement Cloud is designed to work as a complete procurement solution or as an extension to existing procurement applications (cloud/on-premises).

Organizations may not be in a position to adopt the entire suite and completely replace their existing application portfolio, but they can successfully pursue coexistence strategies by selectively adopting cloud offerings that serve as a logical extension to their existing solutions. Oracle Sourcing is the next-generation application for smarter negotiations with suppliers. Sourcing Cloud is an integral part of the Procurement Cloud and is generally implemented with other Procurement Cloud products like Purchasing and Supplier Portal. Sourcing Cloud can also be implemented as a standalone offering with other cloud/on-premises procurement applications, which makes an ideal coexistence scenario. In such coexistence scenarios, organizations across the world face typical challenges such as how to create and synchronize master data and transactional data across the different systems within the organization.

This use case covers importing supplier data from external 3rd-party enterprise applications, usually via an FTP server, into Oracle Procurement Cloud. Oracle Integration (OIC) is an excellent platform to build such integration use cases seamlessly.

Configure

There are certain prerequisites that need to be performed in Procurement Cloud; they are detailed below.
1. Add the roles required for the integration
2. Create a Procurement Agent

Let's look at them in detail.

1. Add the required roles for integration

Certain roles are required to access the Procurement application and perform setup and transaction activities. The Supplier Import process is run by a user with the roles below: Supplier Administrator (or Supplier Manager) and Integration Specialist.
- Log in as a super user who has access to the Security Console. Note: the "IT Security Manager" role is required for Security Console access.
- Navigator > Tools > Security Console
- Users > search for the username
- Click on the username
- Click Edit > click Add Role
- Add the Supplier Administrator (or Supplier Manager) role and the Integration Specialist role
- Click Save and Close

2. Create a Procurement Agent

A Procurement Agent is a mandatory setup that gives access to create suppliers and supplier-related entities, so we will do the agent configuration following the steps below.
- Log in as a Procurement super user: Calvin.roth / Password
- Go to Procurement > Purchase Orders
- Click on the Task Bar, go to Administration > Manage Procurement Agent
- Click on Manage Procurement Agents, then click Create
- Save and Close

Implement

Supplier data synchronization can be achieved through the file-based data import (FBDI) pattern, which can be further simplified using Oracle Integration. The generic architecture of the flow involves generating the FBDI file, uploading it to Procurement Cloud, receiving the callback, and then processing further steps. Typically the flow appears as below. (Image credits: Oradocs Solutions)

The process for implementing the above pattern includes the steps below. (Image credits: Oradocs Solutions)

Here we can completely automate the end-to-end use case, right from:
a. Preparing the FBDI file, including the manifest file
   i. Generate the supplier data file using the FBDI xlsm template
   ii.
Create the manifest file
b. Building the orchestration flow in Oracle Integration (OIC), which automates the 3 steps below:
   i. Uploading the supplier FBDI data file into UCM
   ii. Importing supplier data into the interface tables
   iii. Importing supplier data from the interface tables to the base tables
c. Verifying the supplier record created in Procurement Cloud
d. Generating a report and sending a callback using Oracle Integration ERP adapter capabilities

Let's look at the steps in detail below.

a. Preparing the FBDI file, including the manifest file

i. Generate the supplier data file using the FBDI xlsm template
- Go to the Oracle Help Center. Click Cloud > click Applications.
- Under Enterprise Resource Planning, click Procurement.
- Click Integrate > under Manage Integrations, click Get started with file-based data import.
- Click "+" to expand File-Based Data Imports.
- Click Suppliers.
- Make a note of the UCM account: prc/supplier/import
- Click the SupplierImportTemplate.xlsm template link in the File Links table to download it.

Preparing data using the XLSM template
- Each interface table is represented as a separate Excel sheet.
- The interface table tabs contain sample data that can be used as a guideline to understand how to populate the fields.
- The first row in each sheet contains column headers that represent the interface table columns. The columns are in the order that the control file expects them to be in the data file.
- DO NOT change the order of the columns in the Excel sheets; changing the order of the columns will cause the load process to fail.
- DO NOT delete columns that are not being used; deleting columns will cause the load process to fail. You can hide columns that you do not intend to use during the data creation process, but please reset them to unhidden before upload.

Note: please note the above sample data, specifically the Supplier Name; we will use it to verify the supplier records imported into Procurement Cloud.

- Open the XLSM template. The first worksheet in the file provides instructions for using the template.
- Enter data in the spreadsheet template. Follow the instructions on the Instructions and CSV Generation tab under the section titled Preparing the Table Data.
- Click the Generate CSV File button. A CSV file is generated and compressed into a ZIP file. Save the file.

ii. Create the manifest file
- Log in to Procurement Cloud and navigate as follows: Navigator > Setup and Maintenance: Task bar > Search Task: Manage Enterprise Scheduler Job Definitions and Job Sets for Financial, Supply Chain Management, and Related Applications.
- Search for and select the above task.
- Select Import Suppliers and click EDIT.
- Make a note of the Name and Path details. Name: ImportSuppliers; Path: /oracle/apps/ess/prc/poz/supplierImport/
- Check which parameters are mandatory/required and prepare the manifest file as shown below.
- Create a zip file containing the CSV file that we generated earlier and the manifest file. See below for the sample.

Now the supplier data file is ready to feed into the integration flow that we are going to orchestrate in the next steps. Let's look at the orchestration of the flow and the creation of connections to Procurement Cloud and the FTP server.

b. Building the orchestration flow in Oracle Integration (OIC)

We will create connections to Procurement Cloud and the FTP server and orchestrate an integration flow.

i. Create the Procurement Cloud connection
Log in to the OIC console. Click Integrations to launch the Oracle Integration canvas, click Connections, then click Create. Search for Oracle ERP Cloud.
Provide a connection name and click Create. Enter the Procurement Cloud connection details:
- Service Catalog WSDL: https://<HostAddress>/fscmService/ServiceCatalogService?wsdl
- User Name: calvin.roth
- Password: XXXX
Now test the connection by clicking Test; upon a successful test, save the connection.

ii. Create the FTP connection
Now let's create the FTP server connection. Click Create on the Connections page, provide a connection name, and click Create. Enter the FTP connection details under Configure Connectivity and Configure Security:
- FTP Server Host Address: XXXXXXX
- FTP Server Port: XX
- SFTP Connection: XX
- User Name: XXX
- Password: XXX
Now test the connection by clicking Test; upon a successful test, save the connection.

iii. Orchestrate the flow
- Click Integrations in the designer canvas. Click Create. Select "Scheduled Orchestration" as the integration style. Provide a name for the integration flow, leave the defaults as is, and click Create.
- Note: please make sure the supplier FBDI data file is placed in a specific FTP location, and specify the location and name of the file in the adapter configuration as mentioned below.
- In the next step, configure the FTP adapter to read the supplier FBDI data file that was created earlier; the configuration summary is shown below. Delete the additional mapping that was created between the scheduler and the FTP configuration.
- As the next activity in the flow, configure the Procurement Cloud endpoint (Oracle ERP Cloud Adapter).
- Now configure the data mapping between the FTP and Procurement Cloud activities in the flow by clicking the map icon and selecting the edit option. Drag and drop elements from source to target to map the FileReference, Properties -> filename and Properties -> directory elements from the FTP endpoint to the Procurement endpoint, as shown below in the mapper. Click Validate and Close.
- Now configure the business identifiers using the Tracking option. Configure schedule -> startTime as a tracking field.
- Save and close the flow, then activate it. Optionally, you may choose to enable trace logging for debugging purposes.
- Now that the flow is activated, click the hamburger icon and click Submit Now to submit the schedule manually. Ideally you would schedule the flow at the frequency expected by the business.
- Go to Monitoring --> Tracking to track the instance that was triggered. Once the instance is successful, we can verify the scheduled job and, eventually, the supplier data in Procurement Cloud. Let's see how to do this in the next section.

c. Verifying the supplier record created in Procurement Cloud

Log in to Procurement Cloud, then go to Navigator --> Tools --> Scheduled Processes and check for the scheduled job that was triggered for the supplier data sync. As the scheduled job is successful, let's check the supplier data imported into Procurement Cloud: go to Navigator --> Procurement --> Suppliers, and from the task bar choose Search --> Advanced and enter "OIC Supplier%".

d. Generating a report and sending a callback using Oracle Integration ERP adapter capabilities (stay tuned, we will discuss this in the next blog)

Conclusion

You should now have a fair understanding of how you can use Oracle Integration to automate supplier data synchronization from an external 3rd-party system to Oracle Procurement Cloud. We will publish additional blogs in this series that will help you understand how a child process can be orchestrated, such as creating Supplier Sites information upon successful supplier creation.


Integration

OIC Integration with Oracle ATP

The power of the ATP (Autonomous Transaction Processing) adapter in Oracle Integration

The Oracle ATP Adapter enables you to run stored procedures or SQL statements on Oracle ATP CS as part of an integration in Oracle Integration. Please note that you can also use the ATP adapter to connect to Autonomous Data Warehouse.

The Oracle ATP Adapter provides the following benefits:
- You can invoke a stored procedure.
- You can run SQL statements.
- You can perform the following operations on a table: Insert, Update, Insert or Update (Merge), Select.

The Oracle ATP Adapter is one of many predefined adapters included with Oracle Integration. You can configure the Oracle ATP Adapter as a connection in an integration in Oracle Integration. Connections define information about the instances of each configuration you are integrating. Oracle Integration includes a set of predefined adapters, which are the types of applications on which you can base your connections, such as Oracle Sales Cloud, Oracle Eloqua Cloud, Oracle RightNow Cloud, ATP CS, and so on. A connection is based on an adapter and includes the additional information required by the adapter to communicate with a specific instance of an application (this can be referred to as metadata or as connection details). For example, to create a connection to a specific RightNow Cloud application instance, you must select the Oracle RightNow adapter and then specify the WSDL URL, security policy, and security credentials to connect to it.

To create a connection to an Oracle ATP CS instance, you must select the Oracle ATP adapter and then specify the connection property given below:
1. Service Name: <<for example, myatp_tp. You can get the service name from the wallet, or your administrator can provide it.>>

And specify the security properties given below:
1. Wallet: <<download the wallet from ATP and upload it here>>. Please see the steps given below to download the wallet from ATP.
2. Wallet Password: <<provide the wallet password>>
3. Database Service Username: <<provide the database service username>>
4. Database Service Password: <<provide the database service password>>

Please find below a screenshot of the ATP connection.

Steps to download the wallet from ATP: Oracle client credentials (wallet files) can be downloaded from Autonomous Transaction Processing by a service administrator. If you are not an Autonomous Transaction Processing administrator, your administrator should provide you with the client credentials. If you have administrator access, you can log in to the service console, go to Administration, and click "Download Client Credentials" to download the wallet. Refer here for detailed instructions.

Note: wallet files, along with the database user ID and password, provide access to the data in your Autonomous Transaction Processing database. Store wallet files in a secure location and share them only with authorized users. If wallet files are transmitted in a way that might be accessed by unauthorized users (for example, over public email), transmit the wallet password separately and securely. Autonomous Transaction Processing uses strong password complexity rules for all users based on Oracle Cloud security standards.

OIC: Oracle Integration eliminates barriers between business applications through a combination of machine learning, embedded best-practice guidance, prebuilt integration, and process automation.
Oracle Integration is unique in the market in leveraging Oracle application expertise to build an extensive library of adapters to Oracle and 3rd-party SaaS and on-premises applications, enabling you to deliver new business services faster.

ATP: Oracle Autonomous Transaction Processing delivers a self-driving, self-securing, self-repairing database service that can instantly scale to meet the demands of mission-critical applications.

Please note that the ATP Adapter is available behind a feature flag; please refer to the blog on feature flags.

The screenshot below demonstrates the use case of retrieving data from ATP CS and writing it to an FTP location.
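As a concrete illustration of the "run SQL statements" capability used in that ATP-to-FTP use case, the statement below is the kind of query you might configure on the ATP Adapter invoke. The table and column names are assumptions for illustration only:

-- Illustrative only: table and column names are invented for this sketch.
-- A query like this, configured in the ATP Adapter invoke, returns the rows
-- that the integration then writes to a file on the FTP server.
SELECT order_id,
       customer_name,
       order_total,
       order_date
  FROM sample_orders
 WHERE order_date >= TRUNC(SYSDATE) - 7
 ORDER BY order_date;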


Integration

A Simple Guide to Return Custom HTTP Error Response from REST based OIC Flows

The REST Adapter in the trigger (inbound) direction exposes an HTTP endpoint that HTTP clients can call with an HTTP request, and it returns an HTTP response. If successful, the REST Adapter returns a success response; otherwise it returns an error response with an HTTP status belonging to the error family of codes, depending on the situation. The following table describes the possible causes and the REST Adapter response.

| Condition | HTTP Status | Details |
| Invalid client request | 4xx | There are several conditions that can cause client-side failures, including: an invalid resource URL, incorrect query parameters, an unsupported method type, an unsupported media type, bad data |
| Downstream processing errors | 5xx | All other errors that can occur within the integration, including: an invalid target, an HTTP error response, general processing errors |

In addition, there are several situations where an integration developer wants to return a custom HTTP error response based on business logic. Let's take one such example and illustrate how this can be done easily within the orchestration flow. The REST Adapter provides very basic type validation out of the box; any other validation, such as schema or semantic validation, is turned off because it has a significant performance overhead. This post demonstrates how integration developers can include validation logic and raise a fault with a custom fault code from within the orchestration flow. This fault is returned as an HTTP error response back to the client by the REST Adapter.

Overview: in our example, we have a REST-based trigger that takes a user input. The integration developer checks the user input and raises a fault, which is returned as an HTTP response with an error code back to the caller.

Step 1: Create a REST-based trigger that takes a JSON input and returns a JSON response.

Step 2: Include a switch statement to validate the input. This step can also be externalized in a separate validation service; that service could even be another integration flow.

Step 3: If the validation condition is not true, return an APIInvocationError. As illustrated below, the APIInvocationError is assigned specific values to reflect the validation failure. For example, APIInvocationError/errorCode is set to a value of 400. This ensures that the HTTP response is returned with an error code of 400 back to the caller; if this is not set, the fault is returned to the client as an HTTP error response with code 500 (Internal Server Error) by default. The error details section is reserved for the actual cause of the error; since the integration developer is throwing this error, they can assign any appropriate value.

Let's review the runtime behavior of our sample integration:
1. Positive scenario: the required field is present. The integration completes successfully and returns an HTTP 200 response back to the client.
2. RequiredID is present but has an empty value: the integration returns an HTTP error response with status code 400 back to the client.
3. RequiredID is not present: the integration returns an HTTP error response with status code 400 back to the client.

The flow trace shows that the switch condition for invalid input was executed, the APIInvocationError was raised from there, and this resulted in the HTTP error response back to the client.

In summary, using the steps mentioned above, an integration developer can easily send an HTTP error response from within the orchestration logic using a FaultReturn activity.
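If you want to exercise the three runtime scenarios above from the command line, requests along the lines of the sketch below will do it. The endpoint URL, integration identifier and request payload shape are assumptions for illustration; substitute your integration's actual trigger URL and request schema:

# Placeholders: OIC host, credentials, integration identifier/version and resource path.
OIC_USER=myuser@example.com
OIC_PASSWORD='********'
URL="https://myoic.example.com/ic/api/integration/v1/flows/rest/VALIDATE_INPUT_DEMO/1.0/validate"

# 1. Positive scenario - required field present: expect HTTP 200
curl -u "$OIC_USER:$OIC_PASSWORD" -H "Content-Type: application/json" \
     -d '{"RequiredID": "12345"}' -w '\nHTTP %{http_code}\n' "$URL"

# 2. RequiredID present but empty: expect HTTP 400
curl -u "$OIC_USER:$OIC_PASSWORD" -H "Content-Type: application/json" \
     -d '{"RequiredID": ""}' -w '\nHTTP %{http_code}\n' "$URL"

# 3. RequiredID not present: expect HTTP 400
curl -u "$OIC_USER:$OIC_PASSWORD" -H "Content-Type: application/json" \
     -d '{}' -w '\nHTTP %{http_code}\n' "$URL"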


Integration

Moving SOA to Oracle Cloud Infrastructure

Many customers are running their workloads on Oracle Cloud Infrastructure Classic (OCI-C), but the new Oracle Cloud Infrastructure (OCI) offers compelling benefits that make it worth moving those workloads to the "gen 2 cloud". Additionally, if you are not yet running SOA 12.2.1.3 or above, now is an ideal time to make the move.

A SOA implementation is typically large and serves mission-critical requirements. This means that a "side-by-side" migration is the best approach. At a high level the process is as follows:
- Discover/map the existing OCI-C deployment. Oracle provides a set of tools to help in migrating workloads to OCI; you can learn more about this at Upgrade to Oracle Cloud Infrastructure.
- Branch your SOA projects: SOA projects can be deployed into a new environment and will be upgraded on deployment. However, a better approach is to branch your version control and upgrade the projects in JDeveloper; you can then validate each project to catch any potential issues.
- Prepare OCI to run PaaS: there are some prerequisites that need to be completed, which I talk about in Getting ready to Run SOA on Oracle Cloud Infrastructure with Terraform.
- Provision the new SOACS environment on OCI.
- Migrate your WebLogic configurations: a great option for this, which I'll talk about in a bit more detail below, is Myst, by Rubicon Red.
- Deploy your composites into the new environment.
- Test your new deployments.
- Switch your routing over to the new services.
- Decommission your old environment on OCI-C.

WebLogic configurations can be quite extensive in SOA installations, from JCA adapters to JMS queues and so on, and reconfiguring a new WebLogic instance can be quite involved. To make this process easier, Rubicon Red, an Oracle Platinum Partner, provides Myst, a cloud platform dedicated to managing SOA configurations throughout the lifecycle. Oracle customers can get a free trial at https://www.myst.cloud/

Once you sign up for the platform, you can quickly use it to connect to your existing SOA Suite Cloud Service instance and discover all of your configurations. Using Myst, you can then connect to your new environment and reconfigure it to match your existing environment. Of course, if you need to change some of the details due to deltas in your new environment, you can do that as well. Myst will also help you maintain your configurations and catch any configuration drift. There may be ways to automate this yourself with WLST and other mechanisms, but Myst provides a great platform to manage both SOACS and SOA Suite on-premises instances.

If you are running SOA Cloud Service on OCI-C, now is the time to move to OCI. If you are not yet running SOA Cloud Service 12.2.1.3 or above, then now is the time to upgrade, and if that means moving to OCI, you can solve both within one project; Myst will help make this a smoother transition.


Oracle Integration App Dev Summit 2019 – Taking it to the Next Level

The strong relationship between Oracle Integration and third-party and Oracle applications is critical for providing our customers with a seamless integration experience. This week, Oracle development leaders and integration experts from approximately two dozen different Oracle Apps teams met at HQ in Redwood Shores, California to discuss the ongoing evolution, innovation, and solidification of this relationship. The primary goal was to leverage their individual expertise to take Oracle Integration to a deeper level than any other integration platform on the market today, all while focusing on the user experience and ease of use.

Attendees dove deep into a variety of topics, including:
- Connectivity – we are constantly expanding our extensive library of application connectors. For example, we now support RPA connectors for UiPath and Automation Anywhere and are excited to announce more connectivity partnerships soon.
- Hybrid integration – we are here to make your integrations between on-premises applications and those in the cloud seamless.
- Process Automation – process automation capabilities are already built into Oracle Integration for your convenience.
- Integration patterns including batch, events-based, and SOAP-based – experience smooth integrations regardless of the pattern you need.
- Apps integration best practices – our experts are thought leaders when it comes to best practices both in integration and in their individual applications. This event provided the perfect opportunity to share best practices and learn across different product areas.
- Prebuilt recipes – we have a diverse array of prebuilt integration recipes for Oracle Integration users to choose from. These recipes speed up and simplify integration development.
- The next-generation integration experience and future-facing functionalities – keep an eye out for future announcements. We are working on some very cool things!

App experts spanning a broad range of teams filled the room to maximum capacity, including developers from: Oracle ERP Cloud, Oracle Loyalty Cloud, Oracle Utilities Cloud, Oracle Hospitality, Oracle Field Service Cloud, Oracle Procurement Cloud, Oracle NetSuite, Oracle Product Lifecycle Management Cloud, Oracle Product Hub Cloud, Oracle PeopleSoft, Oracle Service Cloud, Oracle E-Business Suite, Oracle Engagement Cloud, Oracle Warehouse Management Cloud, Oracle Supply Chain Management Cloud, Oracle Logistics Cloud, Oracle Enterprise Performance Management Cloud, Oracle Taleo Cloud Service, Oracle Siebel CRM, and more!

With Oracle Integration, it is easy to connect applications and SaaS both within Oracle and from third parties. We offer connectivity with a wide range of applications for HCM connectivity, ERP connectivity, CX connectivity, social and productivity apps connectivity, technology connectivity, as well as connectivity for other types of apps and SaaS. You can see the full library here. Oracle Integration also provides an extensive and ever-expanding library of prebuilt recipes to choose from; see all the options here.

We wrapped up the summit reflecting on lessons learned and the best next actions for supercharging Oracle Integration in the near future. We look forward to sharing these updates as they become available. Keep up with us on Twitter for all the latest announcements related to Oracle Integration.


Integration: Heart of the Digital Economy – New Podcasts Now Available

Authored by Steve Quan, Principal Product Marketing Director, Oracle

Mobile devices and AI technologies are rapidly changing the way customers interact with businesses. Some organizations are quickly assembling ad-hoc solutions to meet these challenges by writing custom code to hard-wire systems together. Without modern application integration, organizations end up building systems that look like a tangled mess of spaghetti on a plate, creating solutions that are costly to maintain and update. Two new podcasts in our six-part series, Integration: Heart of the Digital Economy, are now available in the Oracle Cloud Café.

Integration: Fuel for AI-Enabled Digital Assistants – Learn how National Pharmacies used modern cloud application integration tools to build AI-enabled digital assistants. These chatbots gave shoppers a seamless experience when shopping in the cloud and in the store, enabling the company to grow sales in a matter of weeks.

Creating Modern Applications with API-First Integration – Success in today's economy requires companies to adopt cloud applications or become extinct like some brick-and-mortar businesses - remember the Blockbuster video rental stores? These companies need a new class of applications to remain synchronized to a digital heartbeat. It requires combining internal and external software and services that are glued together through APIs. Listen to the third podcast in this series to learn how API Management helped the company create new solutions quickly to compete successfully in the digital age.

Learn more about Oracle's Application Integration Solution here. Learn more about Oracle's Data Integration Solution here. Dive into Oracle Cloud with a free trial available here.


My Monitor is Wider Than it is Tall - New Layouts in Integration Cloud

New Layouts in Integration Cloud

I am sure most of you have noticed that your monitor is wider than it is tall. To take advantage of this, we have new formats available in Integration Cloud to change the way we view orchestrations.

Vertical & Horizontal Layout

The first new feature is the ability to switch between vertical and horizontal layouts. If we have an orchestration, we can change it between horizontal and vertical layout using the Layout button at the top of the canvas, as shown below. Choosing Horizontal switches a vertical, down-the-screen layout to a horizontal, across-the-screen layout, as shown below.

New Views in Integration Cloud

In addition to the ability to switch between horizontal and vertical layouts, we now also support additional views of an orchestration. The view above is the traditional canvas view of the orchestration, but by selecting the different icons on the left at the top of the canvas we can switch to other views. The second icon from the left is the "Pseudo View", which adds pseudo code to the canvas to help identify what each step in the orchestration is doing. Note that the invoke tells us the connection and connection type being used. The third icon from the left provides us with a "Code" view. This is not editable but allows us to see the actual orchestration code, which can be helpful at times for understanding unexpected behaviors.

Summary

Integration Cloud now makes it easy to switch between horizontal and vertical layouts of an orchestration on the canvas. It also allows us to see annotations on the canvas using the "Pseudo View", which helps us understand what individual activities are doing. Both the "Canvas View" and the "Pseudo View" are editable; the "Code View" is not editable.


What is the Value of Robotic Process Automation in the Process Automation Space?

This blog originally appeared on LinkedIn; written by Eduardo Chiocconi.

During the last two decades, much of the Process Automation effort concentrated on using Business Process Management Systems (BPMS) as a means to document and digitize business processes. This technology wave helped the Process Automation space make a significant step forward. BPMS tools armed with integration capabilities allowed organizations (and their business and IT stakeholders) to visualize the processes they wanted to automate. From this initial business process documentation phase, it was possible to create a manageable digital asset to help "orchestrate" all business process steps regardless of their nature (people and systems).

Without risk of exaggeration, most Process Automation (or Business Process Management) practitioners would agree that one of the hardest implementation areas is integrating with the systems of information that the business process needs to transact with. BPMS vendors offered a wide array of application integration capabilities, usually in the form of application adapters, to integrate with these enterprise and productivity applications. The more systems needed to be integrated from the business process, the harder the implementation phase became. As much as we would like applications to enable all transactions via publicly available APIs, this is not the case, and that limits what integration service capabilities can do to integrate in an automated and headless manner.

Simplification in the integration space helps! Newer enterprise and productivity applications have started to invest early in Application Programming Interfaces (APIs). REST-based web services as an implementation mechanism, and an API-first approach to exposing application functionality, certainly offer simpler consumption of application functionality and, by extension, simplify the "hardest" last mile of Process Automation implementation projects: integration. Integration vendors can leverage these APIs and offer a direct and easy way to transact against these applications.

But is this not good enough? Well... if your business processes build logic around new SaaS applications, you may be lucky. But for many organizations (and especially those that have gone down the path of mergers and acquisitions) it is not. Whether we like it or not, there are still many systems that are very hard to transact or interact with. This category of applications includes mainframe systems and homegrown to enterprise applications, but also any kind of application that has undergone some customization where that functionality is only available through the application user interface (UI).

Robotic Process Automation (RPA): the new kid on the block! What exactly is Robotic Process Automation? This question may have many different answers. But to me, RPA offers a new mechanism to integrate and transact against applications using the same UI that their users use. Via this non-intrusive approach, it is possible to interact with the application as if it were being driven by a person, but rather than a person doing the clicks and entering the data, it is an automated application that we call a robot. Period!

Why do we talk about RPA in the context of Process Automation? My first observation is that these two technologies are not the same. Secondly, if you combine them to work together, it is possible to take Process Automation to the next level, as RPA offers new ways to integrate with systems of record that could not be integrated before.
The simplicity of the way in which it transacts with applications also offers a first step of automation while a more robust and throughput-optimal adapter or API approach is put in place. But let's drill down one level and review two important use cases. From a Process Automation, top-down point of view, we can sum them up as these:

Use Case #1: Use robots to replace repetitive, non-value-added human interactions. This use case aims to reduce unnecessary human touch points. In this scenario, it is possible to streamline the business process, since robots can execute these tasks without errors as they follow the same procedure over and over again. Moreover, robots will use the input data and avoid any "fat finger" issue that comes from humans accidentally mistyping the input data. It is worth applying some caution with this use case, as robots cannot replace the necessary human decision intelligence and know-how. In that scenario, we are better off using human discretion and criteria, as it makes the process better. In the end, not all process steps can be fully automated without human touch points!

Use Case #2: Use robots to prototype integration, as well as to integrate with applications when there is no other headless integration approach available (for example, an API or adapter). Leveraging RPA as "another" integration mechanism offers new ways to transact against applications besides the ones known to the market to date.

How do we bring more value by combining orchestration with Robotic Process Automation? As described throughout this blog, RPA offers "another" way to integrate with systems of record, complementing the existing adapter and API mechanisms offered by integration platform capabilities. If we agree that integration is one of the hardest Process Automation implementation tasks to nail, then having another tool in our toolset definitely helps! While RPA may not be a silver bullet, it does make Process Automation better and offers a way to better digitize and automate your business processes. If you are using RPA in the context of Process Automation efforts, I would like to hear your thoughts.


Make Orchestration Better with RPA

This article originally appeared on LinkedIn; written by Eduardo Chiocconi.

Nobody can deny that, when used correctly, RPA has the potential to provide a great ROI, especially in situations where we are trying to automate manual, non-value-added tasks, or where it is used as a mechanism to integrate with systems of information that offer no headless way to interact with them (for example, no APIs or adapters, if you are using an integration broker tool).

I would like to start this article with a simple example. Imagine, for a second, an approval business process where a Statement of Work (SOW) needs to be approved by several individuals within an organization (a consulting manager to properly staff the project, a finance manager to make sure the project is viable). Once the approvals are done, the SOW should be uploaded and associated to an opportunity in this company's CRM application (where all customer information is centrally located). At the core of this business process there is an orchestration that coordinates the people approvals and should also integrate with the CRM application to upload the SOW to the customer opportunity. The diagram below illustrates the happy path of this orchestration using BPMN as the modeling notation to map this business process (screenshot from Oracle Integration Cloud - Process).

Process Automation tools can easily manage the human side of these orchestrations. Different tools manage integration to applications differently, and depending on the system being integrated, the task of transacting against it may be simple, complex, or at times not possible at all. If we take a closer look at the step in which we need to upload the SOW document to the opportunity, we have the following options:

Option a) If the CRM application has an API that allows uploading documents and linking them directly to an opportunity, then this transaction can be invoked from the orchestrating business process and automated in a headless manner. When available, this is the preferred way, as it is more scalable and does not come with the overhead of transacting via the application user interface. (A sketch of what such a call might look like appears below.)

Option b) If the CRM application does not have an API (or any headless way to transact with it), the chances of automation are at risk. The immediate option is to ask a human (such as an Admin) to do the work. The orchestrating business process can route a task to the Admin, and this person can get the SOW file, connect to the CRM application via its user interface, find the opportunity, and then upload the SOW document to it. Not only is this highly manual, repetitive, and frankly of no value to the organization, it is also at the mercy of the Admin having the bandwidth to perform the task (and hopefully not attaching the SOW to the wrong opportunity).

But are these the only two options? Is there a middle ground between option a) and option b)? As a matter of fact, YES! The answer is Robotic Process Automation. The work that the Admin performs can be captured within an RPA process and, via the RPA vendor's APIs, be invoked when the flow reaches that step in the process (Upload SOW to Opportunity). Now a robot will perform the Admin's work (which was not really needed in the first place, as it was only requested due to the lack of integration alternatives). More importantly, it will be done the same way over and over again, and at any time (even after working hours).
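To make option a) more concrete, here is a minimal sketch of what a headless SOW upload might look like, assuming a hypothetical CRM REST endpoint. The host, path, payload fields, and credentials are illustrative assumptions only; a real CRM defines its own attachment API and authentication scheme.

import base64
import requests

def upload_sow(opportunity_id: str, sow_path: str) -> None:
    # Read the approved SOW and send it as a base64-encoded attachment.
    with open(sow_path, "rb") as f:
        payload = {
            "FileName": "statement_of_work.pdf",
            "FileContents": base64.b64encode(f.read()).decode("ascii"),
            "Title": "Approved SOW",
        }
    # Hypothetical endpoint: an attachments child resource of an opportunity.
    url = f"https://crm.example.com/api/opportunities/{opportunity_id}/attachments"
    resp = requests.post(url, json=payload, auth=("integration_user", "secret"))
    resp.raise_for_status()  # surface failures so the orchestration layer can retry

upload_sow("OPTY-10042", "approved_sow.pdf")

Because the call is headless, the orchestrating process can invoke it directly as an integration step, with no user interface in the loop.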
Because the robot is configured and scripted to do this work, it is not necessary to train people on how to perform the transaction against the CRM application. This automation via RPA allows the consulting company to close and share the SOW faster with its customers. While the RPA process may need to interact with the application via its user interface, and the RPA script may be sensitive to UI changes, it is definitely a better option than waiting for a person to do the work manually! The screenshot below outlines who now performs the different steps of the process.

Great! As we combine people, robots, and services, we are creating a digital workforce that performs business processes optimally. But wait! Can RPA automate this process end to end? In reality, it CANNOT! And this takes me to the second part of this write-up. I would like to make the case for always having an orchestration coordinate the work of people, robots, and service calls to systems.

Important human decisions cannot be automated: While in this example it is possible to look for conditions where the consulting and finance managers may not need to approve the SOW, there will always be cases in which a person's decision and discretion are needed. This reason alone makes the case for an orchestration tool with human task interactions to be in the loop, as RPA solutions do not manage the workflow and human element.

Orchestration helps recover from discrete action failures: One of the main functions of an orchestrator is to decide when to move on to the next step in the orchestrated flow. This happens ONLY once a step has been successfully completed; if it fails, the flow stays there until the step can be performed without problems. Orchestration tools are built from the ground up with these capabilities, dealing with exceptions, failures, and retry logic so that the orchestration developer does not need to handle these details when the flow gets off the beaten happy path. RPA scripting cannot be considered an orchestration technology: for robots to be resilient, all the exception handling logic would need to be coded within the RPA process script itself, likely leading to spaghetti code that is hard to maintain and understand.

Bottom line (and the case I am making): you will be better off coordinating small, discrete RPA process executions through an orchestration technology. RPA processes should be simple and discrete in what they do. If they fail, let the problem and error management logic be handled by the orchestration layer. The RPA process can always be retried, and if it keeps failing, it can be delegated to a person who will be alerted via a central monitoring location along with the rest of the integration and orchestration services. A sketch of invoking such a discrete robot job from the orchestration layer follows below.

Orchestration with RPA makes orchestration better. RPA with orchestration makes RPA better. One leading orchestration tool is Oracle Integration Cloud. If you are looking to scale your orchestration or RPA efforts, I hope you find this example and the lessons learned useful.
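As a companion to the points above, here is a minimal sketch of how an orchestration step might start one small, discrete robot job through a hypothetical RPA vendor REST API and wait for its terminal state. The base URL, resource names, and payload fields are assumptions; every RPA vendor exposes its own job or queue API, and in practice the retry and escalation logic stays in the orchestration layer rather than in the robot script.

import time
import requests

RPA_BASE = "https://rpa.example.com/api"  # hypothetical RPA vendor endpoint

def run_robot_job(process_name: str, arguments: dict, timeout_s: int = 300) -> dict:
    # Start one discrete robot execution.
    start = requests.post(f"{RPA_BASE}/jobs", json={"process": process_name, "input": arguments})
    start.raise_for_status()
    job_id = start.json()["jobId"]

    # Poll until the job reaches a terminal state or the step times out.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{RPA_BASE}/jobs/{job_id}").json()
        if status.get("state") in ("Succeeded", "Failed"):
            return status
        time.sleep(5)
    return {"state": "TimedOut", "jobId": job_id}

# The calling orchestration decides what to do with a non-successful state:
# retry the small, discrete job, or route a human task to an Admin.
result = run_robot_job("UploadSOWToOpportunity",
                       {"opportunityId": "OPTY-10042", "file": "approved_sow.pdf"})
print(result["state"])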


The Power of High Availability Connectivity Agent

High Availability with Oracle Integration Connectivity Agent

You want your systems to be resilient to failure, and within Integration Cloud, Oracle takes care to ensure that there is always redundancy in the cloud-based components so that your integrations continue to run despite potential failures of hardware or software. However, the connectivity agent was a singleton until recently. That is no longer the case: you can now run more than one agent in an agent group.

Of Connections, Agent Groups & Agents

An agent is a software component installed on your local system that "phones home" to Integration Cloud to allow message transfer between cloud and local systems without opening any firewalls. Agents are assigned to agent groups, which are logical groupings of agents. A connection may make use of an agent group to gain access to local resources. As of March 2019, this feature is available on all OIC instances. It provides an HA solution for the agent: if one agent fails, the other continues to process messages.

Agent Networking

Agents require access to Integration Cloud over HTTPS; note that an agent may need to use a proxy to reach Integration Cloud. This access allows them to check for messages to be delivered from the cloud to local systems, or vice versa. When using multiple agents in an agent group, it is important that all agents in the group can access the same resources across the network. Failure to do so can cause unexpected message failures (a simple preflight check sketch appears at the end of this post).

High Availability

When running two agents in a group, they process messages in an active-active model. All agents in the group process messages, but any given message is processed by only a single agent. This provides both high availability and potentially improved throughput.

Conclusion

If resiliency is important, an HA agent group provides a reliable on-premises connectivity solution.
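As a small illustration of the networking requirement above, here is a preflight sketch you could run on each host that will join the same agent group, checking that it can reach Oracle Integration over HTTPS (optionally through a proxy) and the same local endpoint as its peers. The URLs and proxy address are placeholders for your own environment, not values from the product.

import requests

OIC_URL = "https://myinstance.integration.ocp.oraclecloud.com"   # placeholder: your OIC instance URL
LOCAL_RESOURCE = "http://erp.internal.example.com:8000/health"   # placeholder: a local system your integrations use
PROXIES = {"https": "http://proxy.internal.example.com:80"}      # omit if no proxy is required

def check(name: str, url: str, proxies=None) -> None:
    # Report whether this host can reach the given endpoint.
    try:
        resp = requests.get(url, proxies=proxies, timeout=10)
        print(f"{name}: reachable (HTTP {resp.status_code})")
    except requests.RequestException as exc:
        print(f"{name}: NOT reachable -> {exc}")

# Every agent in the group must see the same resources; otherwise messages can
# fail unpredictably depending on which agent picks them up.
check("Oracle Integration", OIC_URL, proxies=PROXIES)
check("Local system", LOCAL_RESOURCE)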


Sending OIC notifications from an email address of your choice

Most avid OIC users are aware that OIC notifications, whether system status reports or integration notifications, are sent out from an Oracle address, i.e. no-reply@oracle.com. With the latest enhancements, OIC gives users the flexibility to choose the sender for these notifications. OIC achieves this with a simple and intuitive UI, where a user can easily add a list of email addresses that can later be approved to qualify as Approved Senders in OIC. Let's see how we can do this in a few simple steps:

Navigate to the Notifications page. Here, you will see a table where you can add the list of email addresses you want to register as Approved Senders with OIC. When you click the add button (plus sign) in the bottom right corner of the page, a new row is added to the table where you can enter an email address. You can also choose one of the email addresses for sending System Notifications, such as status reports for successful message rate, service failure alerts, etc. You can do this by checking the box corresponding to the email address of your choice. Please note that you can only choose one email address for sending System Notifications.

When you are done entering the list of email addresses, click Save. Upon saving, a confirmation email is sent to each of the email addresses in the list, and the Approval Status changes to reflect this. The recipient of the email is then required to confirm the email address by clicking the confirmation link in the mail. A sample snippet of the confirmation email is pasted below.

Upon confirmation, the Approval Status changes to Approved. (To refresh the approval status, use the refresh button in the top left corner of the section.)

Congratulations! You have an Approved Sender registered in OIC. You can now use this Approved Sender in the From Address section of the Notification action in the integration orchestration canvas, as depicted below. In addition, you can also choose this Approved Sender for sending System Notifications.

Please note: if a registered email address is still "Waiting for User Confirmation" and the user uses it in the Notification action or chooses it to send system notifications, the sender defaults to no-reply@oracle.com.

Hope this blog was able to shed some light on how OIC is helping users manage their notifications better, whether by providing the ability to register any number of email addresses, delete a previously approved email address from the list of Approved Senders, or change the primary sender of System Notifications any number of times. Hope you have fun incorporating this feature into your use cases!


See how easily you can switch your integration views

In OIC, we spend most of our time building integrations. Previously, when you viewed or edited an integration in the editor, it was shown in a vertical layout. Now, you can view and edit the integration in several ways:

Canvas view
Vertical: Displays the integration vertically.
Horizontal: Displays the integration horizontally.
Pseudo view: Displays the integration vertically with child nodes indented. Details about each node in the integration are displayed to the right.

In addition to the above, you can also view the integration in outline style. You will need to enable the "oic.ics.console.integration.layout" feature flag to enjoy this feature. Note: as of the October 2019 release of OIC, this feature is publicly available and you no longer need to enable the feature flag.

The diagram above shows how to select the different views and how the integration looks in the vertical layout.

Canvas view: Canvas view allows you to select the layout. There are two options:
Vertical: This is the default view mode of the integration. In this mode, the integration is shown vertically.
Horizontal: While in Canvas view, you can switch the layout to Horizontal and the integration will be shown horizontally.

Pseudo view: In this view, the integration is shown vertically with indented child nodes, and the details of each node are displayed alongside it. This helps you easily understand the integration without needing to drill down into each node to see the details! You can use the inline menu to add new nodes and actions. In this view mode, you cannot change the orientation of the nodes, but you can reposition them, e.g. move an Assign inside a Switch node.

Hope you find this feature helpful. Enjoy the different integration views!


Oracle OpenWorld 2018 Highlights

With another Oracle OpenWorld in the books, we want to take a moment to reflect on some of this year's highlights. First, let us start by thanking those who make OOW the success that it is: our incredible customers and partners. Your stories inspire us every day, and we are so glad to have been able to share them with thousands of attendees at OpenWorld. Thank you to our customer and partner speakers and panelists: Erik Dvergsnes (Aker BP), Michael Morales (Quality Metrics Partners), Lonneke Dikmans (eProseed Europe), Patrick McMahon (Regional Transportation District), Steven Tremblay (Graco Inc), Wade Quale (Graco Inc.), Suresh Sharma (Cognizant Technology Solutions Corporation), Sandeep Singh (GE), David VanWiggeren (Drop Tank), Deepak Kakar (Western Digital), Timothy Lomax (Mitsubishi Electric Automation), Candace McAvaney (Minnesota Power), Mark Harrison (Eaton Corp), Awais Bajwa (GE), Nishi Deokule (GetResource Inc), Murali Palanisamy (DXC Technology), Bhavnesh Patel (UHG Optum Services Inc.), Biswajit Dhar (Unitedhealth Group Incorporated), Karl Jonsson (Reinhart), Lakshmi Pavuluri (The Wonderful Company), Eric Doty (Greenworks Tools), Susan Gorecki (American Red Cross), Timothy Dickson (Laureate Education), Marc Murphy (Atlatl Software), Chad Ulland (Minnkota Power Cooperative), Amit Patanjali (ICU Medical), Rajendra Bhide (GE), Jonathan Hult (Mythics), Wilson Farrar (UiPath), Xander van Rooijen (Rabobank), Kevin King (AVIO Consulting), Milind Joshi (WorkSpan), Palash Kundu (Achaogen), Simon Haslam (eProseed UK), Matthew Gilbride (Skanska), Chris Maggiulli (Latham Pool Products), Duane Debique (Sinclair Broadcast Group), Ravi Gade (Calix, Inc), and Jagadish Manchikanti (Tupperware).

We would also like to congratulate our 2018 Oracle Cloud Platform Innovation Award winners: Drop Tank, Ministry of Interior Turkey, The Co-operative Group, and Meliá Hotels International. Their innovation journeys were truly inspiring!

More than anything, #OOW18 was about innovation, sharing our customers' successes, and Oracle's strategy and vision! This year's OOW was abuzz with 60,000 customers and partners from 175 countries and 19 million virtual attendees. We had 50+ sessions on integration, process, and APIs, with these topics taking center stage even in SaaS sessions for ERP, HCM, and CX cloud. This year was all about using integration to mobilize digital transformation, looking at areas like API-led integration and innovation with Robotic Process Automation, IoT, AI, blockchain, and machine learning.

Connecting with customers and partners is always a top highlight of OpenWorld. This year, Oracle VP of Product Management, Vikas Anand, had a chance to connect with UiPath's Brent Haley. Take a look to hear a bit about how we are bringing AI and RPA into Oracle's Integration Platform, and more.

Before OpenWorld, we shared a few of our most buzzed-about sessions with you. No matter which sessions you were able to attend, we hope you found them informative and left OOW with fresh knowledge and inspiration.

And as always, executive keynotes were a major highlight of OOW. In case you missed any, you can catch them here.
Cloud Generation 2: Larry Ellison Keynote at Oracle OpenWorld 2018
Accelerating Growth in the Cloud: Mark Hurd Keynote at Oracle OpenWorld 2018

With the help of our customers and partners, #OOW18 was a smash hit! We cannot wait to see what the next year will bring.
