The Integration blog covers the latest in product updates, best practices, customer stories, and more.

Recent Posts

Integration

Integration Patterns - Publish/Subscribe (Part2)

In my previous blog post, I explained the publish/subscribe integration pattern and how easy it is to use its power with Oracle Integration Cloud – all without the need to set up a messaging infrastructure. The previous post covered the steps required to create the connection, the integration and the trigger (using Postman) to publish a message. This second part explains, step by step, how to consume those messages.

1. Create an Integration. Select the type Subscribe To OIC. Provide a name, a description and, optionally, associate this integration with a package. Then select a Publisher; Publishers become available as soon as you configure a "Publish to OIC" integration, and in my instance I have two active Publishers. Now we need to decide which system consumes the message. For this exercise I will simply write the message to a local file on my machine, which requires a File Adapter and a Connectivity Agent*. Setting up the File Adapter and the required Connectivity Agent is out of scope for this article, but you can find the required information for the File Adapter here and how to set up the agent here. In a real scenario we would use an application or technology adapter to propagate the message to the end application/system.

*The Oracle On-Premises Agent, i.e. the Connectivity Agent, is required for Oracle Integration Cloud to communicate with on-premises applications.

From the palette on the right side, drag the File Adapter connection onto the canvas. Once you do that, you get the adapter wizard:
- "What do you want to call your endpoint?": give a name for the file operation.
- "Do you want to specify the structure for the contents of the file?" – Yes.
- "Which one of the following choices would be used to describe the structure of the file content?" – Sample JSON document.

Next, specify where to write the file and the pattern for its name:
- Output Directory: in this example I use a directory on my local machine.
- File Name Pattern: the file name should be concatenated with %SEQ%, an incremental variable used to avoid files having the same name. Hovering the mouse over the question mark provides more information.

The last step is the definition of the file structure. Since we selected a JSON format, I uploaded a very simple sample. This is the end-to-end flow with both source and target endpoints configured; we just need to do some mapping now. Clicking the Create Map icon opens the mapper, where we can simply drag the message attribute from source onto target.

2. Activate the Integration. We are now ready to activate the integration. You can choose to enable tracing and payload logging for debugging. You can create more subscribers – I cloned the existing one and named it SubscribeMessage_App2, so that we have two consumers for the published message.

3. Run the Integration. Now we can use Postman to trigger the publish message – exactly as in step 4 of the previous post.

4. Monitor the Results. When we check the Tracking option under Monitoring, we can see the PublishMessage instance and two SubscribeMessage instances – as expected. The final step is to verify that two files were created on my local machine.

Simple, yet powerful. For more information on Oracle Integration Cloud please check https://www.oracle.com/middleware/application-integration/products.html
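If you prefer the command line to Postman, the publish trigger from Part 1 can be invoked with curl as sketched below. The URL pattern and integration name are assumptions; copy the real endpoint URL from the activation banner of your "Publish to OIC" integration.

curl -u "<user>:<password>" \
  -X POST "https://<oic-host>/ic/api/integration/v1/flows/rest/PUBLISHMESSAGE/1.0/message" \
  -H "Content-Type: application/json" \
  -d '{ "message": "Hello subscribers" }'

With the two subscribers above activated, one PublishMessage and two SubscribeMessage instances should appear under Monitoring > Tracking, and two files should be written to the configured output directory.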

Integration

How to send email with attachments in OIC?

Have you ever encountered a scenario where the requirement was to send attachments along with an email notification in OIC and you could not? Well, now it is possible. The new feature makes it really easy to configure the notification activity to add attachments to the email.

Prerequisite

Enable the feature flag: oic.ics.console.notification-attachment. Click here to learn how to enable a feature flag. The minimum Oracle Integration version required for the feature is 191020.0200.32001.

Step-by-Step Guide to Send a Notification with Attachments

There are multiple ways in OIC to work with files. Some of the options are: i) configure the REST adapter to accept attachments as part of the request, ii) use the Stage File activity to create a new file, or iii) use the FTP adapter to download a file to OIC from a remote location for further processing. Any file reference(s) created by upstream operations can be used to configure attachments in the notification activity. Let us learn how to configure the notification action to send email with attachments in simple steps. For this blog, we will clone the sample integration 'Hello World' that ships with OIC.

1. Navigate to the integration landing page, clone the 'Hello World' integration and name it 'Hello World with Attachment'.
2. Navigate to the canvas page for the newly created 'Hello World with Attachment' integration.
3. Edit the configuration of the REST trigger: change the method type to POST, the media type to multipart/form-data, and configure the request payload parameters to accept attachments.
4. Add an FTP connection (DownloadToOIC in the image below) and select the 'download' operation to download the file to OIC. Here, I have configured the FTP connection to download a 'Sample-FTP.txt' file which is already present in the remote location.
5. Add a Stage File action to create a new file. Here, I have configured the Stage File action to write the request parameters – name, email and flow id – separated by commas, and named the file 'StageWriteSample.txt'. Refer to the blog to learn more about how to use the Stage activity. This gives us multiple files to configure as attachments in the notification activity later. The updated flow now looks as shown below.
6. Edit the notification activity (named sendEmail in the sample integration); you should see a new "Attachments" section next to the Body section. Clicking the add button (plus sign) in the Attachments section takes you to a new page to choose the attachment. We have three file references (highlighted in yellow) available to choose from: the attachment from the REST connection, the file reference from the Stage File write operation, and the file reference from the FTP download operation. Select the file references one at a time to send the files as attachments. You can edit or delete an attachment once it is added. After configuration, the notification activity should have three attachments.
7. Save and activate the integration; it is now ready to send emails with attachments. A sample email is shown below when the above flow is triggered.

The 'Hello World with Attachment' integration created here is attached for reference and can be used after configuring the FTP connection. Hope you enjoyed reading the blog and that this new feature helps solve your business use cases!
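For reference, a multipart trigger call to such a flow might look like the sketch below. The endpoint URL, resource path and form part names all depend on how you configured the REST trigger's multipart request, so treat every name here as an assumption.

curl -u "<user>:<password>" \
  -X POST "https://<oic-host>/ic/api/integration/v1/flows/rest/HELLO_WORLD_WITH_ATTACHMENT/1.0/<resource>" \
  -F 'request={"name":"Jane","email":"jane@example.com","flowid":"1"};type=application/json' \
  -F 'attachment=@./Sample.pdf;type=application/pdf'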

Integration

Using the next generation Activity Stream

For debugging an instance or checking its payload, users previously had to use the Audit Trail and Tracing on the Tracking Details screen. Since the information was scattered across two places, users had to keep switching between them to get the complete picture of the instance. With the new Activity Stream, we combine the Audit Trail with the Tracing information and show a more compact, easily readable Activity Stream.

Prerequisite for Activity Stream

Enable the feature flag: oic.ics.console.monitoring.tracking.improved-activity-stream. The minimum Oracle Integration version required for the feature is 191030.0200.32180.

Step-by-Step Guide to View the Activity Stream

1. Enable Tracing (with payload if required) for the integration. This lets you view detailed payload information during the development cycle. For production, it is recommended to keep Tracing turned off.
2. Run the integration to create an instance.
3. Navigate to the Monitoring → Tracking page and locate the instance whose Activity Stream you want to view.
4. Click the primary identifier link of the chosen instance to navigate to the Tracking Details page.
5. Click View Activity Stream in the hamburger menu to display the new Activity Stream panel.

NOTE: To view the payload, enable Tracing with payload. Follow "How to enable and use tracing in less than 5 min" to enable tracing.

Features in the Activity Stream:
- Click Message/Payload to view (lazy load) the payload for the action.
- Expandable loop sections show the flow execution inside For-Each/While loops (available only if tracing with payload is enabled).
- A red node indicates an error. Errored instances are displayed in descending execution sequence so the error appears at the very top.
- Payloads can be expanded to full screen.
- Date and time are shown according to the user preferences.
- Each Message/Payload section has a Copy to Clipboard option that copies the payload to the clipboard.
- Since payload information is derived from log files (which can rotate as newer data gets written), older instances might no longer display payload information in the Activity Stream.
- There are two levels of download: a Download button at the top to download the complete Activity Stream, and a Download button inside each Message/Payload section to download that specific message/payload.

REST API: to view the Activity Stream for a given instance ID:

curl -u <user-name>:<password> -k -v -X GET -H 'Content-Type:application/json' https://<host-name>/ic/api/integration/v1/monitoring/instances/<instance-id>/activityStream
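For readability, the same call can be piped through jq to pretty-print the JSON response (host, credentials and instance id are placeholders):

curl -s -k -u "<user-name>:<password>" \
  "https://<host-name>/ic/api/integration/v1/monitoring/instances/<instance-id>/activityStream" | jq .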

How SOA Suite Adapter Can Help Leverage your On-premises Investments

The SOA Suite Adapter on Oracle Integration (OIC) enables you to take advantage of the latest feature-rich adapters on OIC while leveraging your existing investments in SOA Suite and Service Bus. It provides a rich design-time experience to create a single connection to SOA Suite / Service Bus, browse the services running on them, and create integrations. At runtime it relies on the standard SOAP and REST adapters, with or without the Connectivity Agent, depending on how the SOA Suite / Service Bus instance is accessible over the network. The current SOAP and REST adapters on OIC already provide integration with these services, but with this new adapter you can do away with the hassle of multiple connections or fetching service metadata manually.

The SOA Suite adapter supports connectivity to:
- Oracle SOA Suite and/or Oracle Service Bus hosted on-premises
- Oracle SOA Suite and/or Oracle Service Bus hosted on SOA Cloud Services

Configuring the SOA Suite Adapter to connect to a SOA Suite / Service Bus instance

In the connection palette, select the SOA Suite Adapter. Provide a meaningful name for this connection and click 'Create'. This opens the page where the connection details can be configured.

Configure connectivity: To determine which URL to provide here, examine the topology of the SOA Suite / Service Bus instance, i.e. whether the instance is accessible through:
- the Load Balancer URL,
- the OTD or Cluster Frontend URL, or
- just the Managed Server URL where the SOA Suite / Service Bus instance is running.

Configure security: Provide the SOA Suite / Service Bus user credentials here. If you are integrating with SOA Suite, make sure this user is part of the 'Operators' group and has the 'SOAOperator' role on that server. Likewise, if you are integrating with Service Bus, make sure this user is part of the 'Deployers' group on that server.

Configure Connectivity Agents: If the SOA Suite / Service Bus instance is not directly accessible from Oracle Integration, for example if it is deployed on-premises or behind a firewall, a Connectivity Agent needs to be configured for this connection. This can be done in the 'Configure Agents' section. The Connectivity Agent may not be required when the SOA Suite / Service Bus URL is publicly accessible, for example if deployed on SOA Cloud Service. To learn more about the Connectivity Agent, check out:
- New Agent Simplifies Cloud to On-premises Integration
- The Power of High Availability Connectivity Agent

Test and save the connection: A simple 'Test' of the connection on this page verifies that the SOA Suite / Service Bus is accessible through the connection details provided, that the version of the instance is supported by the adapter, and that the user is authenticated and authorized to access the instance.

How to configure a SOA Suite invoke endpoint in an orchestration flow

(This adapter can be configured only as an invoke activity for the services exposed by SOA Suite / Service Bus.)

Drag and drop a SOA Suite adapter connection into an orchestration flow. Name the endpoint and proceed to configure the invoke operation. If only a SOA Suite or a Service Bus instance is accessible through the URL provided on the connections page, it is shown as a read-only label. If both are accessible, they are shown as options: select 'SOA' or 'Service Bus' to configure this endpoint to invoke SOA composites or Service Bus projects respectively.
To configure this endpoint to invoke SOA composites (if both SOA and Service Bus are available as options, select 'SOA'):
- Select a partition to browse the composites in it.
- Select a composite to view the services it exposes.

To configure this endpoint to invoke Service Bus projects (if both SOA and Service Bus are available as options, select 'Service Bus'):
- Select a project to view the services it exposes.

Configuring service details: Select a service from the desired SOA composite or Service Bus project to integrate with.
- If the selected service is a SOAP web service, the operation, request/response objects, and the message exchange pattern are displayed. SOAP services with synchronous request-response or one-way notifications are supported. Asynchronous requests are supported as one-way notifications only; callbacks are currently not supported.
- If the selected service is a RESTful web service, proceed to the next page to complete further configuration for the resource, verb, request and response content types, query parameters, etc. REST services which have schemas defined (i.e. non-native REST services and non-end-to-end-JSON based REST services) are supported. The supported request and response content types are application/xml and application/json.

Proceed to the next page to view the summary and complete the wizard. The newly created endpoint can now be seen in the orchestration flow, and the request and response objects of this invoke are available for mapping in the orchestration.

Runtime invocation from OIC to SOA composites / Service Bus projects: Once the request and response objects are mapped, this flow can be activated like any other flow on OIC. The activated flow is ready to send requests to running SOA composites / Service Bus projects via SOAP or REST invocations. You can use the OIC instance Tracking page to monitor the runtime invocations after the flow is activated and invoked.

What this adapter needs on the SOA Suite / Service Bus side

Supported SOA Suite versions:
- Oracle SOA Suite 12.2.1.4 onwards
- Oracle SOA Suite 12.2.1.3, with these patches applied:
  SOA Suite: http://aru.us.oracle.com:8080/ARU/ViewPatchRequest/process_form?aru=22986369
  Service Bus: http://aru.us.oracle.com:8080/ARU/ViewPatchRequest/process_form?aru=23008146

Supported OWSM policies:
- For SOAP web services:
  oracle/http_basic_auth_over_ssl_service_policy
  oracle/wss_username_token_over_ssl_service_policy
  oracle/wss_http_token_over_ssl_service_policy
  oracle/wss_username_token_service_policy
  oracle/wss_http_token_service_policy
  no authentication policy configured
  Services protected by multiple policies are not supported.
- For RESTful web services:
  oracle/http_basic_auth_over_ssl_service_policy
  oracle/wss_http_token_service_policy
  no authentication policy configured
  Services protected by multiple policies are not supported.

How to request this adapter

This adapter/feature is currently in controlled availability (feature flag: oic.cloudadapter.adapters.soaadapter) and available on request. To learn more about features and "How to Request a Feature Flag", please refer to this blog post.

Delisting of unsupported HCM SOAP APIs

Introduction: Oracle HCM Cloud supports a set of SOAP services; they are listed at this link. The Oracle HCM Cloud Adapter, which is part of Oracle Integration Cloud, should list only the supported SOAP services and ignore any other HCM SOAP services. Support for displaying only the supported SOAP services was added in OIC release 19.3.3.0.0 and can be enabled via the feature flag oic.cloudadapter.adapters.hcmsoapapis-ignore.

Behavior of new integration flows: Once the feature flag is turned on, end users creating new integration flows will be able to access only the HCM services listed at the link mentioned in the introduction. The adapter wizard will list only the services documented as supported.

Behavior of old integration flows: End users will be able to edit, view and activate old integration flows as before; old integration flows are not impacted by this change in the adapter. If a new adapter endpoint is added to an old integration flow, only the supported HCM services will be available for consumption.

Note: The end user experience is uniform across all Fusion Application Adapters, i.e. the Oracle Engagement Cloud Adapter and the Oracle ERP Cloud Adapter, along with the Oracle HCM Cloud Adapter. This support does not have any impact on REST resources accessible via the Fusion Adapters.

Integration

Integration Patterns - Publish/Subscribe (Part1)

Broadcasting, Publish/Subscribe, Distributed Messaging, One-to-many: these are just some of the names referring to the same integration pattern, one of the most powerful available for connecting multiple systems. In a nutshell, this pattern is about:
- a source system publishing a message, and
- target systems subscribing to receive that message.
This enables the propagation of the message to all the target systems that subscribe to it, as illustrated in the picture below. https://docs.oracle.com/cd/E19509-01/820-5892/ref_jms/index.html

This pattern is not new; in fact, it has been around for decades. It powered distributed systems with its inherent loose coupling and independence. Publishers and subscribers are loosely coupled, which allows the systems to run independently of each other. In a traditional client-server architecture, a client cannot send a message to a server that is offline; in the Pub/Sub model, message delivery is not conditioned on the server's availability.

Topics vs Queues

The difference between a Topic and a Queue is that all subscribers to a Topic receive the same message when the message is published, whereas only one subscriber to a Queue receives a message when it is sent. This pattern is about Topics.

The Hard Way

From a vendor-neutral point of view, if an organization needs a messaging infrastructure, it will typically need to set up hardware, install the OS and the messaging software, and take care of configuration, creating and managing users, groups, roles, queues and topics... and this is only for the development environment. Then we have test and production, which may require an HA cluster... you can see the direction this is going: it adds complexity.

The Easy Way

Fortunately, OIC abstracts that complexity from the user. It is Oracle managed: the Topics are created and managed by Oracle. From an integration developer's point of view, the only requirement is to use the "ICS Messaging Service Adapter", as we will explain in a bit. This brings the benefits of messaging to those that did not require the full extent of capabilities that a messaging infrastructure provides and were typically put off by its complexity.

Use Cases

There are plenty of use cases that would benefit from this solution pattern:
- A user changes address data in the HCM application
- A new contact/account is created in the Sales or Marketing applications
- ERP purchase orders need to be shared downstream
Oracle's OIC adapters support many of the SaaS business events. How to enable that has been described in another blog entry: https://blogs.oracle.com/imc/subscribe-to-business-events-in-fusion-based-saas-applications-from-oracle-integration-cloud-oic-part-2

Implement in 4 Steps

For this use case, we will just use a REST request as the publisher.

1. Create a REST trigger. Go to Connections and create a new one. Select the REST Adapter. Provide a name and a description; the role should be Trigger. Press Create, then save and close.

2. Create an Integration. Select the type Publish To OIC, which provides the required structure. Provide a name, a description and, optionally, associate this integration with a package. Now drag the connection we created before from the palette on the right side into the Trigger area on the canvas (left side). The REST wizard pops up. Add a name and a description. The endpoint URI is /message – that is the only parameter we need. We want to send a message, therefore the action is POST.
Select the checkbox for "Configure a request payload for this request" and leave everything else as default. The payload format we want is JSON, and we can insert a sample inline – as seen in the picture. That is all for the REST adapter configuration. You should also add a tracking identifier; the only element available is the message element.

3. Activate the Integration. We are now ready to activate the integration. You can choose to enable tracing and payload logging for debugging. (Your activation window might look a bit different, as this instance has API Platform CS integrated for API publishing.)

4. Test the Integration. After activation, you see a green banner at the top of your screen with the endpoint metadata. Here you can find the endpoint URL to test the REST trigger we just created. Using Postman (or any equivalent product) we can send a REST request containing the message we wish to broadcast. And when we check the Tracking instances under Monitoring... voilà, we see the instance of the integration we just created. And here we have the confirmation that the payload was sent to the Topic!

In Part 2 of this blog series we will cover the Subscribers!
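As a sketch of step 4, the Postman request can also be sent with curl; the body mirrors the single-element inline JSON sample described above, and the endpoint URL must be taken from the green activation banner (the value shown here is only a placeholder).

curl -u "<user>:<password>" \
  -X POST "<endpoint-URL-from-activation-banner>/message" \
  -H "Content-Type: application/json" \
  -d '{ "message": "Hello from the publisher" }'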

A Simple Guide to Asynchronous calls using Netsuite Adapter

The Oracle Netsuite Adapter provides a no-code approach for building integrations with Netsuite. The current Netsuite adapter in Oracle Integration Cloud already allowed the user to make synchronous CRUD calls to Netsuite and also provided extensive search capabilities. With a new update, we are now adding support to perform all of the above operations as asynchronous calls against Netsuite. As we will see in this post, the user can configure a simple toggle during Netsuite invoke configuration and let the adapter internally handle all the intricacies of asynchronous processing – submitting the asynchronous job, checking its status and getting the results – without the user needing to configure separate invokes for each.

How to configure the Netsuite Adapter to make asynchronous calls

When configuring an invoke activity using a Netsuite connection on the orchestration canvas, the user can now toggle between synchronous and asynchronous calls on the Netsuite Operation Selection page by selecting the appropriate Processing Mode, as shown in the image below. This selection is valid for Basic (CRUD) and Search operation types; note that Miscellaneous operations do not support asynchronous invocation. The user can also click the "Learn More About Processing Mode" link next to the Processing Mode option in the configuration wizard to get inline help on the feature.

How to model an orchestration flow with a Netsuite invoke configured for asynchronous Basic (CRUD) operations

As mentioned above, the user can configure a particular Netsuite invoke to use the asynchronous processing mode by selecting the appropriate radio button during the endpoint configuration. Once configured, the Netsuite endpoint thus created will automatically either submit a new asynchronous job, or check the job status and get the results, based on certain variables being mapped properly. Below is a typical flow modelled to utilize a Netsuite invoke configured to make an asynchronous basic operation call. Let us look at the high-level steps involved in properly modelling such an orchestration flow:

1. The integration flow can be either an App Driven Orchestration or a Scheduled Orchestration flow.
2. At the beginning of the flow, before invoking the Netsuite asynchronous operation, the user must create and initialize two variables: jobId and jobStatus. Care should be taken while initializing the values of these variables to ensure the condition defined in the next step is satisfied the first time; for example, use -1 as the initial value. (This step is represented by initializeVariables in the flow diagram above.)
3. Create a While loop activity and provide the condition: $jobStatus != "finished" and $jobStatus != "finishedWithErrors" and $jobStatus != "failed"
4. In the request Map activity for the Netsuite invoke configured to make asynchronous calls, apart from the mappings required for the business use case, the user must map the jobId variable created in step 2 to the jobId defined in the Netsuite request schema under the AsyncJobParameters element, as shown in the image below.
5. After the Netsuite invoke activity, the user should use an Assign activity to assign the jobId and jobStatus variables created in step 2 with values from the response of the Netsuite invoke activity. (This is represented by ReAssignVariables in the flow diagram shown at the beginning of this section.)
6. The user can now configure a Switch activity with either the following condition or a variation of it based on the business needs. Following the condition above, this Switch activity results in two routes being created in the flow:
6.a. jobStatus is finished, finishedWithErrors or failed: the user can now get the results from the Netsuite invoke activity's response and, based on the business needs, process them. For example, for an "add customer" asynchronous job that finished successfully without errors, the user can get the internalIds of the created Customer records.
6.b. jobStatus is none of the above values: this means the asynchronous job is still running. Hence, before we can get the job results, one can either perform other operations or wait, and then loop back to the While loop created in step 3.

Thus, as can be seen from this example, the Netsuite adapter will automatically either submit a new asynchronous job, or check the job status and get the results, based on the jobId being passed in the request.

How to model an orchestration flow with a Netsuite invoke configured for asynchronous Search operations

This is pretty similar to how we model asynchronous basic (CRUD) operations, the only differences arising from the fact that the result returned is paginated. Below is a typical flow modelled to utilize a Netsuite invoke configured to make an asynchronous search operation call. Let us look at the high-level steps involved in properly modelling such an orchestration flow:

1. The integration flow can be either an App Driven Orchestration or a Scheduled Orchestration flow.
2. At the beginning of the flow, before invoking the Netsuite asynchronous operation, the user must create and initialize three variables: jobId, pageIndex and totalPages. Care should be taken while initializing the values of these variables to ensure the condition defined in the next step is satisfied the first time; for example, use -1 as the initial value. (This step is represented by InitializeVariables in the flow diagram above.)
3. Create a While loop activity and provide the condition: integer($pageIndex) <= integer($totalPages)
4. In the request Map activity for the Netsuite invoke configured to make asynchronous calls, apart from the mappings required for the business use case, the user must map the jobId and pageIndex variables created in step 2 to the jobId and pageIndex defined in the Netsuite request schema under the AsyncJobParameters element, as shown in the image below.
5. After the Netsuite invoke activity, the user should use an Assign activity to assign the jobId variable created in step 2 with the value from the response of the Netsuite invoke activity. (This is represented by ReAssignVariables in the flow diagram shown at the beginning of this section.)
6. The user should now configure a Switch activity with a condition to check whether the status of the submitted job is finished. This Switch activity results in two routes being created in the flow:
6.a. status is finished: in this route, the user should create an Assign activity (represented by IncrementPageIndex in the flow diagram shown at the beginning of this section) which increments the pageIndex variable and assigns the totalPages variable with the actual value from the results of the asynchronous job performed in the Netsuite invoke. The two images below show the two assignments needed in this Assign activity: one for the pageIndex variable and one for the totalPages variable.
The user can now get the results from the Netsuite invoke activity's response and, based on the business needs, process them. For example, for a "search customer" asynchronous job that finished successfully without errors, the user can get the Customer records that were searched for.
6.b. status is anything other than finished: this is the second route of the Switch activity introduced in step 6 above. It means the asynchronous job is either still running, finishedWithErrors or failed. The user should introduce another Switch activity in this route to deal with jobs that are finishedWithErrors or failed; the otherwise condition of this new Switch activity means the job is still running, in which case control should loop back to the While loop created in step 3.

Thus, as can be seen from this example, the Netsuite adapter now allows the user to make use of its extensive search capabilities in an asynchronous mode, with full support for retrieving the paginated result set.

How to request this feature

This feature is currently in controlled availability (feature flag: oic.cloudadapter.adapter.netsuite.AsyncSupport) and available on request. To learn more about features and "How to Request a Feature Flag", please refer to this blog post.

Introducing the Oracle OpenWorld Session "Compliance and Risk Management Realized with Analytics and Integration Services" - CAS2657

I am looking forward to seeing you all at Oracle OpenWorld – we are less than a week out, and with so many great sessions I wanted to highlight CAS2657 - Compliance and Risk Management Realized with Analytics and Integration Services. I am excited to be presenting with two knowledgeable people: Conny Bjorling, Skanska Group, and Lonneke Dikmans, eProseed.

Please join me, Simone Geib, Director of Integration at Oracle, Conny and Lonneke as we describe Skanska's common integration platform and the role Oracle Integration (OIC) plays as its central component. Conny and Lonneke will walk you through Skanska's "Sanctions" project, which integrates Oracle Fusion, Microsoft Dynamics, Salesforce and bespoke systems with Oracle Analytics Cloud through Oracle Integration, to ensure that none of the customers and suppliers that Skanska works with are on a sanctions list. We will also discuss a future part of the project in which Skanska will introduce Integration Insight, a capability of Oracle Integration, to provide real-time visibility into the business process through business milestones and metrics that are easily mapped to the implementation and then aggregated and visualized in business dashboards.

Conny Bjorling is Head of Enterprise Architecture at Skanska Group. He has more than 20 years of experience in senior IT and finance roles in retail (FMCG), banking & finance, and construction & project development, focuses on cloud adoption and agile architecture in the cloud, and is passionate about the business value of data.

Lonneke Dikmans is partner and CTO at eProseed. She has been working as a developer and architect with Oracle tools since 2001 and has hands-on experience with Oracle Fusion Middleware and Oracle PaaS products like Oracle Kubernetes Engine, Oracle Blockchain Cloud Service, Oracle Integration Cloud Service, Mobile Cloud Service and API Platform Cloud Service, and with languages like Java, Python and JavaScript (JET, node.js, etc.). Lonneke is a Groundbreaker Ambassador and an Oracle ACE Director in Oracle Fusion Middleware. She publishes frequently online and shares her knowledge at conferences and other community events.

Integration

Downstream Throttling in Oracle Integration Cloud via Parking Lot Pattern

Background

The parking lot pattern is a mature design for storing data in an intermediate stage before processing it from that intermediate stage into the end system at the required rate. The pattern can be implemented with a variety of storage technologies, but a database table is strongly recommended for simplicity. In this blog, we will use the parking lot pattern in Oracle Integration Cloud (OIC) to explore a solution for downstream throttling.

Problem Definition

In OIC, downstream throttling is often mentioned because an influx of data might overwhelm slower downstream systems. Some of this can be handled with the tuning knobs within OIC and WebLogic Server, but sometimes the built-in tuning cannot provide enough capacity to stop flooding the slower system. The parking lot pattern provides a solution for this scenario.

Design Solution

- Input data/messages are processed in the order they come in.
- Each message is parked in the storage for x minutes (the parking time), so the system has a chance to throttle the number of messages processed concurrently.
- The maximal number of parallel processes can be configured to throttle the outgoing calls.
- A set of integrations, connections and database scripts uses current out-of-the-box OIC features.
- The solution is generic and can be used by various typed business integrations without modification to the provided integrations.
- Error handling covers both system/network errors and bad requests.

Database Schema

The key piece of the parking lot pattern is the database schema. The Message table is explained here (column / description):
- ID (NUMBER): the unique ID/key for the row in the table.
- STATUS (VARCHAR): used for state management and logical delete with the database adapter. This column holds three values: N - New (not processed), P - Processing (in-flight interaction with the slower system), C - Complete (the slower system responded to the interaction). The database adapter polls for 'N'ew rows and marks a row as 'P'rocessing when it hands it over to an OIC integration.
- PAYLOAD (CLOB): the message that would normally be associated with a component is stored here as an XML CLOB.

Implementation Details

Integration details (sample integration in the par file, and its connections):
- Business Front-end Integrations: receive the typed business payload and call the producer over the opaque interface. Sample integrations: EmployeeServiceFront(1.0), OrderServiceFront(1.0). Connections – Invoke: EmployeeService, OrderService.
- Producer: receives the new record, creates a new row in the group table if it does not exist, and marks the group status as 'N'. Sample integration: RSQProducer(1.0). Connections – Trigger and Invoke: RSQProducer; Invoke: RSQ DB.
- Group Consumer: scheduled to run every x minutes and invokes a message consumer. Sample integration: RSQGroupConsumer(1.0). Connections – Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
- Message Consumer: marks the message status as 'P', invokes the Dispatcher and deletes the message. Sample integration: RSQMessageConsumer(1.0). Connections – Trigger and Invoke: RQSMessageConsumer; Invoke: RSQ DB.
- Dispatcher: receives the message, converts the opaque payload to the original typed business payload, finds the business endpoint and invokes it. Sample integration: RSQDispatcher(1.0). Connections – Trigger and Invoke: RSQDispatcher, OrderService, TestService.

Screen Shot of Actual Integration Steps

You can deploy this par file. It has the following connections that need configuring and activating.

1. Import the par file into Packages in Oracle Integration.
2. Invoke and Trigger connections: initial, unconnected status.

3. Configure and activate the database connection; the provided sample database connection is called RSQ DB. An Oracle Autonomous Transaction Processing (ATP) database is used in this scenario (information about deploying an ATP instance in Oracle Cloud can be found here).

4. Trigger and Invoke the Message Consumer; in the sample it is called RSQMessageConsumer and distributes the load of calls to the message consumer. It requires the connection URL and the corresponding admin authentication. It processes the active messages of a given group:
- Receives the group id/type from the group consumer.
- Loads the active messages of the group ordered by sequence id; the messages have to be at least # minutes (the parking time) old.
- Loops through the active messages: marks each message status as 'P', invokes the Dispatcher using its opaque interface, and deletes the message.
- Calls a stored procedure to mark the group status 'C' if there are no active messages, or 'N' if there are new active messages.

5. Trigger the manager interface; in the sample the RQSManager connection is used to invoke the parking lot pattern interface in Oracle Integration. It currently supports three operations: get configs, update configs, and recover group.

6. The Producer integration, connected with the database, is used to invoke the producer interface; in the sample it is called RSQProducer. This is the entry point of the parking lot pattern. It receives the resequencing message, creates a new row in the group table if it is not already there, marks the status of the group as 'N', and creates a message in the message table.

7. The Dispatcher is a request/response integration which reconstructs the original payload and sends it to the real backend integration. It receives the message, converts the opaque payload to the original typed business payload, and uses the group id to find the business endpoint and invoke it synchronously.

8. Business Integrations are the real integrations that customers use to process the business messages. They have their own typed interfaces. I used two test servers to demonstrate some posted data.

9. Error handling: recovering from system/network errors. If the problem is caused by a system error such as a networking issue, then after fixing the problem you can recover by resubmitting the failed message consumer instance.

10. Connections and Integrations sample after Triggers and Invokes.

One week to the start of Oracle Open World 2019 – Are You Ready?

One week from today, we will kick off Oracle OpenWorld 2019 in San Francisco. We hope the information below will be helpful while you prepare your schedule for the week.

The Application Integration Program Guide lists the Oracle Integration sessions during #OOW19. A small selection is also listed below:

PRO5871 - Oracle Integration Strategy & Roadmap - Launch Your Digital Transformation
Monday, September 16, 12:15 PM - 01:00 PM | Moscone South - Room 156C
Join this session to hear about exciting innovations coming to Oracle Integration. See a live demonstration of next-generation machine learning (ML) integration enhancements and robotic process automation, all seamlessly connected into a hybrid SaaS and on-premises integration. Learn how customer Vertiv successfully delivered its digital transformation with Oracle Integration Cloud by connecting Microsoft Dynamics CRM, Oracle E-Business Suite, Oracle PRM, Oracle ERP Cloud, and Oracle HCM Cloud for real-time collaboration and faster business agility. Jumpstart your future today.

PRO5873 - Oracle Integration Roadmap
Wednesday, September 18, 12:30 PM - 01:15 PM | Moscone South - Room 156B
In this session explore the product roadmap for Oracle Integration, including new and exciting initiatives such as AI/ML-based capabilities for recommending best next actions for integration and process, intelligent recommendations on mappings, a new UI with recipes for integration and process, anomaly detection, and enhanced connectivity to Oracle and third-party apps. This session also covers new process automation capabilities such as the robotic process automation adapter (RPA - UiPath) and ML-driven case management process execution.

CAS2657 - Compliance and Risk Management Realized with Analytics and Integration Services
Monday, September 16, 10:00 AM - 10:45 AM | Moscone South - Room 155B
In modern global companies it is important to make sure that customers, suppliers, and (prospective) employees are not on a sanctions list. Of course, this can be checked manually at onboarding, but what if a relation is added to a list after onboarding, while you are in business with them? Skanska and eProseed built a solution that fetches data from source systems and matches it with data from the Dow Jones Watchlist. This is done daily, minimizing the risk and increasing efficiency through automation. In this session see the solution and learn about the benefits of this cloud solution for Skanska, as well as the lessons learned.

PRO5805 - Oracle Integration: Monitoring and Business Analytics on Autopilot
Wednesday, September 18, 04:45 PM - 05:30 PM | Moscone South - Room 156B
Ever wondered how the crucial connections and processes in your environment are behaving? In this session learn to monitor and track your integrations and processes in real time. It is not enough to know things are running smoothly; business analytics are driving the evolution and growth of every industry as never before. Today's competitive markets demand that stakeholders be able to understand, monitor, and react to changing market conditions in real time, and business owners require more control and expect more visibility over their processes. This session showcases how operational monitoring empowers IT while Integration Insight empowers executives with real-time insight into their business processes, allowing IT and executives to take immediate action.
Hands-on Labs

If you want to do more than listen to presentations and want to get your hands on Oracle Integration, sign up for one of our two hands-on labs:

HOL6041 and HOL6043 - Connect Applications, Automate Processes, Gain Insight with Oracle Integration
Monday, September 16, 08:30 AM - 09:30 AM | Moscone West - Room 3022A and Wednesday, September 18, 09:00 AM - 10:00 AM | Moscone West - Room 3022A
These two hands-on labs provide a unique opportunity to experience Oracle Integration. Learn how to enhance the process and connectivity capabilities of CX, ERP, and HCM applications with Oracle Integration Cloud. Create a forms-driven process to extend existing applications, discover how to synchronize data by integrating with other applications, and learn how to collect business metrics from processes and integrations and collate them into a dashboard that can be presented to business owners and executives. These sessions explore the power of having a single integrated solution for process, integration, and real-time dashboards delivered by Oracle Integration, and show how the solution provides business insight by collecting and collating business metrics from your processes and integrations and presenting them to your business owners and executives.

Demos

In between sessions, consider a stroll to The Exchange in Moscone South at the Exhibition Level (Hall ABC) and stop by our demo pods to see real-world examples of how Oracle Integration future-proofs your digital journey with pre-built application adapters, simple and complex integration recipes, and process templates. Stop by to discuss how integration is more than connecting applications; it is also about extending applications in a minimally invasive fashion. Visit us to see how to gain visual business insight from your integration flows. Oracle Integration Product Management and Engineering will be there to answer your questions and brainstorm about your integration use cases.

We have two demo pods at The Exchange:
- INT-008 - Connect Applications, Automate Processes and RPA Digital Workforce, Gain Insight
- INT-002 - Leverage Oracle Integration to Bring Operations and Business Insight Together
And an additional demo pod at Moscone West – Level 2 (Applications):
- ADM-003 - Connect Cloud and on-prem Applications with Adapters, B2B, MFT, SOA and hybrid solutions

Oracle Integration and Digital Assistant Customer Summit

Last, but not least, there is still time to register for the Oracle Integration and Digital Assistant Customer Summit on 19-September-19 at the W Hotel San Francisco, followed by our Customer Appreciation Dinner. For more information, visit "You're Invited: Oracle Integration and Digital Assistant Customer Summit at Oracle OpenWorld 2019".

We are all looking forward to seeing you at Oracle OpenWorld 2019 in San Francisco!

Integration

Bulk Recovery of Fault Instances

One of the most common requirements of enterprise integration is error management. It is critical for customers to manage recoverable errors in a seamless and automated fashion.

What are Recoverable Fault Errors?

All faulted instances in asynchronous flows in Oracle Integration Cloud are recoverable and can be resubmitted; synchronous flows cannot be resubmitted. You can resubmit errors in the following ways:
- single failed message resubmissions
- bulk failed message resubmissions
Today an operator can manually resubmit failed messages individually from the integration console Monitoring dashboard. In this blog we are going to focus on how to create an integration flow that can be used to automatically resubmit faulted instances in bulk.

Here are the High Level Steps

Here are the steps to create an integration flow that implements the automated bulk recovery of errors. Note that we also provide a sample that is available for download.

STEP 1: Create a new Scheduled Orchestration flow.

STEP 2: Add schedule parameters. It is always good practice to parametrize variables so you can configure the flow based on business needs by overriding them. Here are the schedule parameters configured in this bulk-resubmit sample integration:
- strErrorQueryFilter: the fault query filter parameter. This defines which error instances are selected for recovery. Valid values: timewindow (1h, 6h, 1d, 2d, 3d, RETENTIONPERIOD; default is 1h), code (integration code), version (integration version), id (error id/instance id), primaryValue (value of the primary tracking variable), secondaryValue (value of the secondary tracking variable). See the API documentation.
- strMaxBatchSize: maximum number of error instances to resubmit per run (default 50). This limits the number of recovery requests to avoid overloading the system.
- strMinBatchSize: minimum number of error instances to resubmit per run (default 2). This defers running the recovery until the given number of errors has accumulated.
- strRetryCount: maximum number of retry attempts for an individual error instance (default 3). This prevents repeatedly resubmitting a failing instance.
- strMaxThershold: threshold number of errors at which to abort recovery and notify the user (default 500). This allows resubmission to be skipped if an excessive number of errors has been detected, indicating that some sort of user intervention may be required.

STEP 3: Update the query filter to include only recoverable errors:
concat(concat("{",$strErrorQueryFilter,",recoverable:'true'"),"}")

STEP 4: Query all recoverable error instances in the system matching the query filter:
GET /ic/api/integration/v1/monitoring/errors?q=strErrorQueryFilter
Then determine the recovery action:
STEP 4a: if the total number of recoverable error instances found is more than the maximum threshold (totalResults > strMaxThershold), send a notification. In this case there may be too many errors, indicating a more serious problem; it is best practice to review manually and, once the issue is fixed, to temporarily override the strMaxThershold value to allow recovery of the failed instances.
STEP 4b: else, if no recoverable error instances are found (totalResults <= 0), end the flow.
STEP 4c: else, continue to resubmit the found errors in a single batch of up to strMaxBatchSize. NOTE: We limit the number of errors resubmitted in a single batch to avoid overloading the system; we suggest a limit of 50 instances.
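As a quick sanity check, the error query from step 4 can also be tried directly with curl. This is a sketch: host, credentials and the filter value are placeholders, and the filter shape simply mirrors the concat expression in step 3.

curl -u "<user>:<password>" -G \
  "https://<oic-host>/ic/api/integration/v1/monitoring/errors" \
  --data-urlencode "q={timewindow:'1h',recoverable:'true'}"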
STEP 5: Query the recovery errors (limited to the batch size):
GET /ic/api/integration/v1/monitoring/errors?q=strErrorQueryFilter&limit=strMaxBatchSize&offset=0

STEP 6: Filter the results to avoid too many retries.
STEP 6a: if the totalResults found < strMinBatchSize, skip the batch resubmit and stop the flow.
STEP 6b: else, if totalResults > strMinBatchSize, invoke the REST API to submit the fault error IDs to the Bulk Re-submit Error API. Here we can filter out fault instances that have already been retried and failed again, as shown below:
- drag and drop the for-each over items
- add the if function from the Mapper on top of items
- add the <= condition element
- add the left operand: retryCount from the source
- add the right operand: strMaxRetryAttempt from the variables
so that the condition is retryCount <= $strMaxRetryAttempt.

STEP 7: Resubmit the errors:
POST /ic/api/integration/v1/monitoring/errors/resubmit

STEP 8: Check whether resubmitRequested is true or false.

STEP 9: Send an email notification with the recovery submit status details, as shown below. (Optional) You can model the integration to invoke a process (using the OIC process capability for human interaction and long-running tasks) or take any action based on the resubmit response via a child flow or another third-party integration, for example to post the resubmit information to some system for future analysis/review. You can use the local invoke feature to model the parent-to-child flow hand-off.

STEP 10: Activate the integration.

STEP 11: Schedule the integration to run every X period of time. You can also run it on demand with the Submit Now option.

Email Notification

Here are the email notifications you would receive:
Case 1: when the bulk resubmit succeeds, an email is sent as shown below (sample).
Case 2: when there are too many fault instances, an alert email is sent as shown below (sample).

OK, by now you have completed development of the integration and scheduled it to run on your Integration Cloud instance.

How to Customize your Integration to Run Recovery for a Specific Integration or Connection

Because different integrations or error types may have different recovery requirements, you may want different query parameters and/or scheduled intervals. For this, clone the above integration and override the schedule parameters to query only the specific fault instances for a given integration or connection type, so you can keep a separate instance running for each specific business use case. Here is how you do it:

STEP 1: Clone the above integration.
STEP 2: Update the schedule parameter strErrorQueryFilter, for example:
timewindow : '3d', code : 'SC2RNSYNC', version : '01.00.0000'
code : 'SC2RNSYNC', version : '01.00.0000', connection : 'EH_RN'
timewindow : '3d', primaryValue : 'TestValue'
You may also want to modify other parameters, or even modify the integration to take alternative actions.
STEP 3: Schedule it to run.
This gives you the ability to configure the bulk resubmit for a given set of integrations or connections.

Sample Integration (IAR) - Download Here

Summary

This blog explained how to automatically resubmit errored instances, allowing control of the rate of recovery and the type of errors to recover, and showed how to customize the recovery integration through cloning and modifying parameters. We hope that you find this a useful pattern for your integrations. Thank you!
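For reference, the resubmit call in step 7 could be exercised from the command line roughly as below. This is a hypothetical sketch: the payload shape (an "ids" array of error instance ids) is an assumption, so verify it against the Bulk Re-submit Error API documentation referenced above before using it.

curl -u "<user>:<password>" \
  -X POST "https://<oic-host>/ic/api/integration/v1/monitoring/errors/resubmit" \
  -H "Content-Type: application/json" \
  -d '{ "ids": [ "<error-instance-id-1>", "<error-instance-id-2>" ] }'

The response indicates whether the resubmission was accepted, which corresponds to the resubmitRequested check in step 8.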

Integration

CICD Implementation for OIC

This blog shares information on the CICD implementation for OIC and the instructions to set up and run the CICD scripts in a Linux environment. In this implementation, bash shell scripts are used to back up integration artifacts from the source environment (an OIC POD) to the repository (Bitbucket). Shell scripts are also used to retrieve the saved integration artifacts from the repository and deploy the integrations to a target environment (another OIC POD).

The following features are currently supported in this implementation:

1) Export integrations and save the artifacts (IARs, connection JSON files) to the remote repository:
- Allow the user to export either all integrations or only one or more integrations from the source OIC environment.
- Commit/push the exported integration artifacts to the repository.
- Provide summary reports.

2) Pull integration artifacts from the repository and either import or deploy the integrations to the target OIC environment:
- Allow the user to select one or more integrations to either import only, or deploy to a target environment. (To deploy an integration means to import the IAR, update the connections and activate the integration.)

Pre-requisites

The following setups are required on your Linux machine:

1) JDK 1.8 installation. Make sure to update your JDK to version 1.8 or higher.

2) Jenkins installation. Ensure your Linux machine has access to a Jenkins instance, and install the following Jenkins plugins, which are required by the CICD scripts:
- Parameterized Trigger plugin
- Delivery Pipeline plugin (version 1.3.2)
- HTML Publisher plugin

3) Git client. Make sure to use Git client 2.7.4 or later on your Linux machine.

4) Bitbucket/Github (repository). Do the following to have access to the remote repository:
- Set up SSH access to the remote repositories (Bitbucket/Github). A Bitbucket server administrator can enable SSH access to the Git repositories in Bitbucket Server for you. This allows your Bitbucket user to perform secure Git operations between your Linux machine and the Bitbucket server instance. Note: a Bitbucket repository was used with this implementation.
- Create the local repository by cloning the remote repository. To do so, run the commands below from your <bitbucket_home>, where <bitbucket_home> is where you want your local repository to reside (e.g. /scratch/bitbucket):
cd <bitbucket_home>
git clone <bitbucket_ssh_based_url>

5) Jq. You can download jq (a JSON parser) from https://stedolan.github.io/jq/download/. Once downloaded: rename 'jq-linux64' to 'jq', run chmod +x jq, and copy the 'jq' file to /usr/bin using sudo.

Note: At a minimum, the Git client and the jq utility must be installed on the same server where you run the scripts. Jenkins and the Bitbucket repository can be on remote servers.

Scripts setup

Perform the following to set up your Linux machine to run the bash shell scripts:
- Create a new <cicd_home> directory on your local Linux machine (e.g. /scratch/cicd_home). Note: <cicd_home> is where all the CICD-related files will reside.
- Download the oic_cicd_files.zip file to your <cicd_home> directory.
- Run unzip to extract the directories and files.
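Collected as one shell sketch (the paths are the example values used above; adjust them to your environment):

cd /scratch/bitbucket                       # <bitbucket_home>
git clone <bitbucket_ssh_based_url>         # create the local repository

mv jq-linux64 jq                            # install jq so the scripts can parse JSON
chmod +x jq
sudo cp jq /usr/bin

mkdir -p /scratch/cicd_home                 # <cicd_home>
cd /scratch/cicd_home
unzip oic_cicd_files.zip                    # unpack the CICD scripts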
Once unzipped, you should see the below file structure under your <cicd_home> directory. From <cicd_home>, run the below command to ensure that you are using Git version 2.21.0 or later:
> git --version

For CI (Continuous Integration)
Two shell scripts are provided for the CI process:
export_integrations.sh
This script exports integrations (IARs along with the corresponding connection JSON files) from the source OIC environment. The script allows the user either to export ALL integrations, or to export one or more specified integrations. For exporting one or more integrations, edit the config.json file under the <cicd_home>/01_export_integrations directory and update it to include the integration identifier (code) and version number that you want to back up, one integration per line, in the below JSON format:
[
  { "code": "<integration1_Id>", "version": "<integration1_version>" },
  { "code": "<integration2_Id>", "version": "<integration2_version>" },
  ...
]
For example:
[ { "code": "SAMPL_INCID_DETAI_FROM_SERVI_CLO", "version": "01.00.0000" } ]
Note: The above steps are not required if you want to export ALL integrations; in that case the config.json file is created automatically by the script.
push_to_repository.sh
This script uses Git to commit and push integration artifacts to the remote repository. This allows developers to save the current working integrations and to fall back to previous versions as needed.

For CD (Continuous Delivery)
Two shell scripts are provided for the CD process:
pull_from_repository.sh
This script pulls the integration artifacts from the remote repository and stores them in a local location.
deploy_integrations.sh
This script deploys integration(s) to the target OIC environment. The user has the option to only import the integrations, or to deploy them (import the IARs, update the connections and activate the integrations). Perform the following steps to either import or deploy integrations:
1) Under the <cicd_home>/04_Deploy_Integrations/config directory, edit the integrations.json file to include the below information for the integrations to be imported/deployed:
- Integration identifier (code) and the integration version number
- Connection identifier (code) of the related connections used by the integration
For example:
{ "integrations":
  [
    {
      "code": "SAMPL_INCID_DETAI_FROM_SERVI_CLO",
      "version": "01.00.0000",
      "connections": [
        { "code": "MY_REST_ENDPOINT_INTERFAC" },
        { "code": "SAMPLE_SERVICE_CLOUD" }
      ]
    }
  ]
}
2) Prior to deploying the integration, update the corresponding <connection_id>.json file to contain the expected values for the connection (i.e. WSDL URL, username, password, etc.).
For example, SAMPLE_SERVICE_CLOUD.json contains:
{
  "connectionProperties": [
    {
      "propertyGroup": "CONNECTION_PROPS",
      "propertyName": "targetWSDLURL",
      "propertyType": "WSDL_URL",
      "propertyValue": "<WSDL_URL_Value>"
    }
  ],
  "securityPolicy": "USERNAME_PASSWORD_TOKEN",
  "securityProperties": [
    {
      "propertyGroup": "CREDENTIALS",
      "propertyName": "username",
      "propertyType": "STRING",
      "propertyValue": "<user_name>"
    },
    {
      "propertyGroup": "CREDENTIALS",
      "propertyName": "password",
      "propertyType": "PASSWORD",
      "propertyValue": "<user_password>"
    }
  ]
}

Import Jenkins Jobs
While you can create the Jenkins jobs manually, you also have the option to import them using the jenkins_jobs.zip file.
To import the Jenkins jobs:
1) Download and unzip the jenkins_jobs.zip file to your <jenkins_home>/.job directory, where <jenkins_home> is the location where your Jenkins instance is installed.
2) Restart the Jenkins server.
3) Once the Jenkins server is restarted, log in to Jenkins (UI) and:
- Update all parameters for the below four jobs as per your environment:
01_Export_Integrations_and_Push
02_Pull_Integrations_and_Deploy
02a_Pull_Integrations_from_Repository
02b_Deploy_Integrations_to_Target
For example: GIT_INSTALL_PATH: /usr/local/git
- Update the 'RUN_LOCATION' parameter in all other child jobs as per your environment (where the script used by each child job is located). For example, in the configuration of the Export_Integrations job:
RUN_LOCATION: <cicd_home>/01_export_integrations
where <cicd_home>/01_export_integrations is the full path to where the corresponding script (export_integrations.sh) resides. Note: make sure the path does not contain a trailing '/'.
- For the other repository-related child jobs (i.e. Pull_from_Repository, Push_to_Repository, etc.), also update the GIT_INSTALL_PATH parameter to point to where your Git is run from.
NOTE: If there is no need to update the connection information for the integrations, then the job 02_Pull_Integrations_and_Deploy can be used to pull integration artifacts from the repository and also deploy the integrations. If the connection information needs to be updated (i.e. user name, user password, WSDL URL, etc.), then:
- First run 02a_Pull_Integrations_from_Repository to pull the integration artifacts from the repository
- Update the connection JSON files to contain the relevant information
- Call 02b_Deploy_Integrations_to_Target to deploy the integrations

Create Jenkins Pipeline Views
To create a Pipeline View, ensure the Delivery Pipeline plugin is installed as mentioned earlier. Perform the following steps:
1) Log in to Jenkins.
2) From the Jenkins main screen, click on '+' to add a new View:
- Enter the view name: 01_OIC_CD_Pipeline_View
- Go under Pipelines and click on Add to add Components:
Component Name: OIC_CI_Pipeline (or any relevant name for the view)
Initial Job: 01_Export_Integrations_and_Push (this is the root job in the pipeline)
- Click Apply, then OK.
Select the following options:
- Enable start of new pipeline build
- Enable rebuild
- Show total build time
- Theme (select 'Contrast')
(Keep the default values for all other options.)
The following view will be available for your pipeline job. (Create the CD view using the same steps above.)

Reports
Reports are available for the Export_Integrations, Push_to_Repository and Deploy_Integrations jobs. For the reports to be displayed properly, we need to relax the Content Security Policy rule so that the style code in the HTML file can be executed.
Relax the Content Security Policy rule
To relax this rule, from Jenkins do the following:
1. Manage Jenkins / Manage Nodes
2. Click settings (gear icon)
3. Click Script Console on the left and type in the following command:
System.setProperty("hudson.model.DirectoryBrowserSupport.CSP", "")
Click Run. If you see the 'Result:' output below the "Result" header, then the protection was disabled successfully. Otherwise, click 'Run' again.
4. Restart the Jenkins server.

To view the report for the Export_Integrations job, for example, click on the OIC Export Integrations Report link after the job is run.

Steps to create a report:
1) From the selected job screen (i.e. Export_Integrations), click on Configure.
2) Under Post-build Actions, add Publish HTML reports for the job.
3) Use the following parameters (as an example):
HTML directory to archive: ./
Index page[s]: <the created html file>
Report title: <enter a proper title for the report>
4) Click Apply, then Save.

Execute Jenkins Jobs
To run the CICD scripts, execute the below Jenkins pipeline jobs:
For CI: 01_Export_Integrations_and_Push
Run this job to execute the CI scripts. Wait for the parent job and its downstream jobs to complete, then click on the child jobs, Export_Integrations or Push_to_Repository, and the Report link to see the results.
For CD:
- If there is no need to update connections, then run 02_Pull_Integrations_and_Deploy. The report is available under the Jenkins job Deploy_Integrations screen.
- If connection information needs to be updated prior to deploying the integrations, first pull the integration artifacts from the repository with 02a_Pull_Integrations_from_Repository, update the connection JSON file(s) as needed, then deploy the integrations to the target OIC environment by running 02b_Deploy_Integrations_to_Target. The report is available under the Jenkins job Deploy_Integrations_to_Target screen.
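Before running 02b_Deploy_Integrations_to_Target, you can sanity-check the configuration files with the jq utility installed earlier. A small sketch follows; it assumes the connection JSON files sit alongside integrations.json under the config directory and uses the property names from the example above, so adjust to your own layout and connections.

cd <cicd_home>/04_Deploy_Integrations/config
# List the connection and security properties that will be applied on deploy
jq '.connectionProperties[] | {propertyName, propertyValue}' SAMPLE_SERVICE_CLOUD.json
jq '.securityProperties[] | {propertyName, propertyValue}' SAMPLE_SERVICE_CLOUD.json
# Verify every integration entry lists its code, version and connections
jq '.integrations[] | {code, version, connections: [.connections[].code]}' integrations.json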


Update library in continuous manner even after being consumed by integration

The update library feature provides the much-awaited ability to update registered libraries. As part of a library update, the user can add new functions, remove unused functions, or modify the logic of existing functions.
Feature Flag
This feature is available when the oic.ics.console.library.update feature flag is enabled.
Here is how a library update works
Let's consider a simple Math library that defines a basic add function that takes two parameters and is used in an integration. Suppose you want to add other math functions like subtract, multiply and divide, and change the way the add operation is executed. You may update the registered library with a new JS file that has these new functions using the Update menu on the library list page. Upload the new JS file using the update library dialog. When attempting to update the library with new code, the system validates the new library file and ensures it meets the following conditions:
- Function signatures in the new file being uploaded must match the signatures of functions in the existing library that are used in integrations.
- You may add new functions, but removing a function used in integrations results in rejection of the new file.
If the new library file adheres to these conditions, the library is updated and the library edit page is displayed for further configuration changes. Please note that if the returns parameter of a function used in an integration was changed in the updated library, the system does not flag an error, but it invalidates the downstream mapping in integrations, which must be re-mapped. Deactivate and activate the integration for the changes to take effect in integrations that use the updated library.
Following is an example where the validation conditions are satisfied and the system accepts the uploaded library file. The add function in math.js is used in integrations, so the signature of this function in the updated library file is unchanged even though the add function definition has changed. Note that the containing library file name is also part of a function signature, so the file name is unchanged in the updated library. The updated library may contain other function definitions.
The example below illustrates a case where validation fails and the uploaded library file is rejected. Because the signatures of functions in the new library file do not match the library in the system, the new library file is rejected.
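To make the validation rules concrete, here is a hypothetical sketch (written as a shell heredoc for convenience) of what an updated math.js might contain. The file name and the add signature are kept unchanged so the update would be accepted; the other functions are new additions. The function names and logic are illustrative, not taken from the original post.

# Hypothetical updated math.js - same file name, same add(param1, param2) signature
cat > math.js <<'EOF'
function add(param1, param2) {
  // the internal logic may change freely as long as the signature is unchanged
  return Number(param1) + Number(param2);
}
function subtract(param1, param2) { return Number(param1) - Number(param2); }
function multiply(param1, param2) { return Number(param1) * Number(param2); }
function divide(param1, param2)   { return Number(param1) / Number(param2); }
EOF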


Migrating from ICS4SaaS to OIC4SaaS

Introduction
Oracle Integration Cloud Service for Oracle SaaS (aka ICS4SaaS) is a version of Oracle's Integration Cloud Service (ICS) targeted for use with Oracle SaaS products. The ICS4SaaS service has been sold with Oracle SaaS products and has appeared on the SaaS price list. As this service is not available on Oracle's OCI infrastructure, Oracle provides a migration path for ICS4SaaS customers to migrate their workloads to OCI. Oracle introduced a new offering called Oracle Integration for Oracle SaaS (aka OIC4SaaS). This offering is based on the Oracle Integration (OIC) service, which runs exclusively on the OCI infrastructure. The migration path is similar to (but not identical to) the migration path for the corresponding tech (PaaS) SKUs, namely the migration of ICS to OIC.

SKUs for ICS for SaaS
The SKUs for ICS for SaaS, along with list prices, are given below (monthly subscription price / metric / service included per month / part #):
- Oracle Integration Cloud Service for Oracle SaaS: 850 / Hosted Environment / 1 Hosted Env, 10GB in and out per day / B87181
- Additional non-Oracle SaaS connection: 1000 / Hosted Connection / 1 connection of choice / B87182
- Additional 10GB per day: 1000 / Each / 10GB in and out per day / B87183
- Oracle Integration Cloud Service for Oracle SaaS Midsize: 585 / Hosted Environment / - / B87609
- Additional non-Oracle SaaS Midsize connections: 650 / Hosted Connection / - / B87610
- Additional 10GB per day Midsize: 900 / Each / - / B87611
Note that for the purposes of migration to OIC for SaaS, the Midsize SKUs above (last 3 rows) behave the same as their corresponding ICS4SaaS SKUs (first 3 rows).

SKUs for OIC for SaaS
The SKUs for OIC for SaaS, along with list prices, are given below (monthly subscription price / metric / part #):
- Oracle Integration Cloud Service for Oracle SaaS – Standard: 600 / 1 million messages per month / B91109
- Oracle Integration Cloud Service for Oracle SaaS – Enterprise: 1200 / 1 million messages per month / B91110
Note that for both ICS for SaaS and OIC for SaaS, the same restriction applies: each integration must have an endpoint in an Oracle Cloud SaaS application. For further details on OIC for SaaS, refer to the Oracle Fusion Cloud Service Description document.

Migration paths
Oracle allows you to choose whether to migrate from all flavors of ICS to either the OIC subscription offering (OIC for SaaS) or to OIC under Universal Credits. In fact, all 4 paths below are supported (source -> target):
- ICS -> OIC: see the migration documentation here
- ICS for SaaS -> OIC: migration procedures are the same as ICS -> OIC above
- ICS for SaaS -> OIC for SaaS: this migration path is the focus of this document
- ICS -> OIC for SaaS: migration procedures are the same as ICS for SaaS -> OIC for SaaS

Why are migration procedures different for OIC for SaaS?
When migrating ICS to OIC, you need to create and use OCI cloud storage. This storage is used to store the exported metadata from ICS. It provides a secure mechanism to store the metadata of the entire ICS instance, which includes security credentials to your SaaS and other applications and systems. An OIC for SaaS account is dedicated to OIC: customers pay a subscription price for OIC, and other services that are part of Universal Credits (including object storage) cannot be provisioned. Therefore, the migration procedures are different. Oracle provides and has tested two options for migrating ICS to OIC for SaaS: piece-meal migration and wholesale migration. The preferred option is wholesale migration.
Migration Option #1: Piece-Meal Migration
In this option, you migrate your integrations one by one via the export and import features of ICS/OIC. Oracle provides the ability to export and import individual integrations (and lookups); see exporting and importing components in the Oracle documentation. Using this capability, you can export each of your integrations from ICS4SaaS and then import them into OIC4SaaS. Note that the export does not include security information such as the login/password to your end applications, so after the import you must re-add the security information. This option obviates the need for OCI cloud storage, as the export can be saved to a local file. However, you will be required to export/import each integration individually and to re-add security credentials. Consider this option only when you have a relatively small (<10) number of integrations to migrate and you do not want to obtain a Universal Credit account.

Migration Option #2: Wholesale Migration
In this option, you migrate all your integrations along with all metadata and security information in a single bulk operation. This option does depend on the availability of OCI Cloud Storage. Therefore, you will need to separately procure a Universal Credit account and make the cloud storage in this account available for the migration process. This Universal Credit account is in addition to your OIC4SaaS environment. The rest of the migration path is the same as the migration from ICS to OIC. If you already have a Universal Credit account, you can use that. If not, you can obtain one; in fact, Oracle offers free 30-day trials which can be leveraged for this purpose. Even if you choose a paid account, cloud storage is relatively inexpensive and only required for the duration of the migration. After migration is complete, you can delete the cloud storage. If you use a 30-day trial for migration, you can even delete the account after migration, though we hope you will decide to use it and take advantage of the rich services and capabilities available there.
NOTE: Wholesale migration is the preferred option. Consider gaining access to a Universal Credit account (including a 30-day trial) to enable wholesale migration.

What if I have multiple ICS4SaaS instances?
Chances are that you have a Stage and a Production instance, and perhaps other instances. Like the ICS to OIC migration, Oracle recommends a 1-for-1 migration when you have multiple ICS4SaaS instances. That is, each ICS4SaaS instance gets migrated to its own OIC4SaaS instance.

What if I have multiple ICS4SaaS accounts?
You can request multiple OIC4SaaS accounts to match your ICS4SaaS environment. It is also possible to consolidate your OIC4SaaS instances into a single account. Note that each instance then typically shares the same user base, as they all share the same IDCS tenancy. However, Oracle is rolling out the ability for OIC (and OIC4SaaS) to leverage secondary IDCS stripes to allow each of your instances to have a unique set of OIC service administrators and instance administrators.

When migrating to OIC4SaaS, should I select Standard or Enterprise?
If your integrations are all cloud to cloud, the Standard version should suffice. If you require integration using one of the on-premises adapters, or if you want to take advantage of Process Automation (which is not available in ICS4SaaS), then you should choose the Enterprise version.
Both versions offer the same pricing model, based on the number of 1-million-messages-per-month units you require.
How does pricing compare?
ICS4SaaS and OIC4SaaS are offered in two completely different pricing models. ICS4SaaS was sold per connection, whereas OIC4SaaS is sold by message volume (one unit = 1M messages per month). OIC4SaaS generally has more favorable pricing than ICS4SaaS for most customers, though this depends on the specific customer's integration requirements and usage patterns. For migration scenarios, Oracle will work with customers to ensure they pay the same price (or less) for equivalent functionality in OIC4SaaS. Note that the ICS4SaaS additional non-Oracle SaaS connection (B87182) does not have an equivalent SKU in OIC for SaaS; the base SKUs for OIC for SaaS (B91109, B91110) include non-Oracle SaaS connections, and no separate purchase is required.
I am ready to migrate. How do I get started?
Contact your Oracle sales representative to help guide you through the process. Migration to OCI includes a commercial migration component, so that you will start paying for OIC4SaaS while no longer paying for ICS4SaaS. Oracle provides a 4-month window for migrations, during which your ICS4SaaS instances remain available (at no charge) to give you ample time to perform the migration and the associated testing. Oracle continues to invest in enhancements to the migration processes, so please be sure to ask your Oracle sales representative about the latest available tooling which can be applied to your specific environment.


How to use File Reference in Stage File

The Stage File action can read, write, zip, unzip and list files in a staged location known to Oracle Integration. A file reference can be used to utilize the local file processing capabilities provided by the Stage File action.
Advantages:
- Any upstream operation which provides a file reference can be processed directly, which simplifies the orchestration. For example, a REST connection allows downloading an attachment into the OIC/.attachment folder; it provides a file reference but does not provide a file name or directory.
OIC/ICS operations that provide references:
- Attachment Reference (REST Adapter: attachments)
- Stream Reference (REST invoke response)
- MTOM (SOAP invoke response)
- FileReference (FTP)
- Base64FileReference (encodeString) function
This is how the Stage File action Configure Operation page looks for:
Read Entire File operation
A 'Configure File Reference' option is available, defaulted to No. On selecting Yes, 'Specify the File Name' and 'Specify the Directory to read from' are replaced with the 'Specify the File Reference' field.
Read File in Segments operation
A 'Configure File Reference' option is available, defaulted to No. On selecting Yes, 'Specify the File Name' and 'Specify the Directory to read from' are replaced with the 'Specify the File Reference' field.
Unzip File operation
A 'Configure File Reference' option is available, defaulted to No. On selecting Yes, 'Specify the Zip File Name' and 'Specify the Zip File Directory' are replaced with the 'Specify the File Reference' field.
To specify the file reference, you can click the Expression Builder icon to build an expression.
This is how a file reference can be used to utilize the local file processing capabilities provided by the Stage File action.


Support for zip/json/xml based schema in Stage File action

The Stage File action can read, write, zip, unzip and list files in a staged location known to Oracle Integration. Previously, only a CSV file could be used to describe the structure of the file in the Stage File action. Now, an XML or JSON document can also be used to describe the structure of the file contents, and a ZIP file containing multiple schemas can also be uploaded.
The Schema Options page is displayed if you selected a read or write Stage File operation. Previously, the Schema Options page offered the following choices: create a new schema from a CSV file, or select an existing schema from the file system.
Now, the Schema Options page offers the following choices:
Read Entire File and Write File operations:
- Sample delimited document (e.g. CSV)
- XML schema (XSD) document
- Sample XML document (single or no namespace)
- Sample JSON document
Read File in Segments operation:
- Sample delimited document (e.g. CSV)
- XML schema (XSD) document
- Sample XML document (single or no namespace)
Based on your selection on the Schema Options page, the Format Definition page enables you to select the file describing the structure. This is how the Stage File action Format Definition page looks for each choice:
Sample delimited document (e.g. CSV) choice
XML schema (XSD) document choice
- For the Read Entire File and Write File operations, you can upload a ZIP file that includes multiple schemas in which one schema has a reference to the other schema file (for example, schema A imports schema B).
- For the Read File in Segments operation, you cannot upload such a ZIP file. The Read File in Segments option does not work when it reads an element that belongs to schema B.
- A new field, 'Select Repeating Batch Element', is available to select the repeating element when Stage Read is configured with segmentation.
Sample XML document (single or no namespace) choice
Sample JSON document choice (this choice is not available for the Read File in Segments operation)
This is how you can upload a schema in the Stage File action using not only a CSV file but also an XML, XSD or JSON file.
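As an illustration, a sample JSON document like the hypothetical one below could be uploaded on the Format Definition page to describe the structure of the staged file; the field names are made up for this example and are not taken from the original post.

{ "orders": [ { "orderId": "1001", "customer": "ACME", "amount": 250.00 } ] }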


Automating Supplier Data Synchronization using Oracle Integration

Co-Author: Kanaka Vijay Kumar Korupolu, Principal Product Manager
Use Case
Oracle SaaS is a modular suite of applications that includes Financials, HCM, Procurement, Projects, SCM offerings, etc. The Procurement Cloud is designed to work as a complete procurement solution or as an extension to existing procurement applications (cloud or on-premises). Organizations may not be in a position to adopt the entire suite and completely replace their existing application portfolio, but they can successfully pursue coexistence strategies by selectively adopting cloud offerings that serve as a logical extension to their existing solutions. Oracle Sourcing is the next-generation application for smarter negotiations with suppliers. Sourcing Cloud is an integral part of the Procurement Cloud and is generally implemented with other Procurement Cloud products like Purchasing and Supplier Portal. Sourcing Cloud can also be implemented as a standalone offering with other cloud/on-premises procurement applications, which makes it an ideal coexistence scenario. In such coexistence scenarios, organizations across the world face typical challenges such as how to create and synchronize master data and transactional data across the different systems within the organization. This use case covers importing supplier data from an external third-party enterprise application, usually from an FTP server, into Oracle Procurement Cloud. Oracle Integration (OIC) is an excellent platform to build such integration use cases seamlessly.
Configure
There are certain pre-requisites that need to be performed in Procurement Cloud; they are detailed below.
1. Add the roles required for the integration
2. Create a Procurement Agent
Let's look at them in detail...
1. Add the required roles for the integration
Certain roles are required to access the Procurement application and perform setup and transaction activities. The Supplier Import process is run by a user with the below roles: the Supplier Administrator or the Supplier Manager, plus the Integration Specialist role.
- Log in as a super user who has access to the Security Console. Note: the "IT Security Manager" role is required for Security Console access.
- Navigator > Tools > Security Console
- Users > search for the username
- Click on the username
- Click Edit > click on Add Role
- Add the Supplier Administrator (or Supplier Manager) role and the Integration Specialist role
- Click Save and Close
2. Create a Procurement Agent
A Procurement Agent is a mandatory setup that gives access to create suppliers and supplier-related entities, hence we will do the agent configuration following the steps below.
- Log in as a Procurement super user: Calvin.roth / Password
- Go to Procurement > Purchase Orders
- Click on the task bar, go to Administration > Manage Procurement Agent
- Click on Manage Procurement Agents, then click on Create.
- Save and Close.
Implement
Supplier data synchronization can be achieved through the file-based data import (FBDI) pattern, which can be further simplified using Oracle Integration. The generic architecture of the flow involves generating the FBDI file, uploading it to Procurement Cloud, receiving the callback and then processing further steps. Typically the flow appears as below... (Image credits - Oradocs Solutions)
The process for implementing the above pattern includes the steps below. (Image credits - Oradocs Solutions)
Here we can completely automate the end-to-end use case, right from:
a. Preparing the FBDI file, including the manifest file
   i. Generate the supplier data file using the FBDI xlsm template
   ii. Create the manifest file
b. Building the orchestration flow in Oracle Integration (OIC), which automates the three steps below:
   i. Upload the supplier FBDI data file into UCM
   ii. Import the supplier data into the interface tables
   iii. Import the supplier data from the interface tables to the base tables
c. Verifying the supplier record created in Procurement Cloud
d. Generating a report and sending a callback using Oracle Integration ERP adapter capabilities
Let's look at the steps in detail below...
a. Preparing the FBDI file, including the manifest file
i. Generate the supplier data file using the FBDI xlsm template
- Go to the Oracle Help Center.
- Click Cloud > click Applications.
- Under Enterprise Resource Planning, click Procurement.
- Click Integrate > under Manage Integrations, click Get started with file-based data import.
- Click on "+" to expand File-Based Data Imports.
- Click on Suppliers.
- Make a note of the UCM account - prc/supplier/import.
- Click the SupplierImportTemplate.xlsm template link in the File Links table to download it.
Preparing data using the XLSM template
Each interface table is represented as a separate Excel sheet. The interface table tabs contain sample data that can be used as a guideline to understand how to populate the fields. The first row in each sheet contains column headers that represent the interface table columns. The columns are in the order that the control file expects them to be in the data file.
DO NOT:
- Change the order of the columns in the Excel sheets. Changing the order of the columns will cause the load process to fail.
- Delete columns that are not being used. Deleting columns will cause the load process to fail. You can hide columns that you do not intend to use during the data creation process, but please reset them to unhidden before upload.
Note: Please note the above sample data, specifically the Supplier Name; we will use this to verify the supplier records imported into Procurement Cloud.
- Open the XLSM template. The first worksheet in the file provides instructions for using the template.
- Enter data in the spreadsheet template. Follow the instructions on the Instructions and CSV Generation tab under the section titled Preparing the Table Data.
- Click the Generate CSV File button. A CSV file is generated and compressed into a ZIP file. Save the file.
ii. Create the manifest file
- Log in to Procurement Cloud and navigate as follows: Navigator > Setup and Maintenance; Task bar > Search Task: Manage Enterprise Scheduler Job Definitions and Job Sets for Financial, Supply Chain Management, and Related Applications.
- Search for and select the above task.
- Select Import Suppliers and click on Edit.
- Make a note of the Name and Path details. Name: ImportSuppliers; Path: /oracle/apps/ess/prc/poz/supplierImport/
- Check which parameters are mandatory/required and prepare the manifest file as shown below.
- Create a ZIP file with the CSV file generated earlier and the manifest file. See below for a sample.
Now the supplier data file is ready to feed into the integration flow that we are going to orchestrate in the next steps. Let's look at orchestrating the flow and creating connections to Procurement Cloud and the FTP server.
b. Building the orchestration flow in Oracle Integration (OIC)
We will create connections to Procurement Cloud and the FTP server and orchestrate an integration flow.
i. Create the Procurement Cloud connection
- Log in to the OIC console. Click on Integrations to launch the Oracle Integration canvas, click on Connections, then click on the Create option. Search for Oracle ERP Cloud.
- Provide a connection name and click on Create. Enter the Procurement Cloud connection details:
Service Catalog WSDL: https://<HostAddress>/fscmService/ServiceCatalogService?wsdl
User Name: calvin.roth
Password: XXXX
- Now test the connection by clicking on the Test option; upon a successful test, save the connection.
ii. Create the FTP connection
Now let's create the FTP server connection.
- Click on the Create option on the Connections page. Provide a connection name and click on Create. Enter the FTP server connection details in Configure Connectivity and Configure Security:
FTP Server Host Address: XXXXXXX
FTP Server Port: XX
SFTP Connection: XX
User Name: XXX
Password: XXX
- Now test the connection by clicking on the Test option; upon a successful test, save the connection.
iii. Orchestrate the flow
- Click on Integrations in the designer canvas. Click Create. Select Create Integration - Style as "Scheduled Orchestration". Provide a name for the integration flow and leave the defaults as is. Click on Create.
Note: Please make sure the supplier FBDI data file is placed in a specific FTP location, and specify the location and name of the file in the adapter configuration as described below.
- In the next step, configure the FTP adapter to read the supplier FBDI data file that was created earlier; the configuration summary is shown below.
- Delete the additional mapping that got created between the scheduler and the FTP configuration.
- As the next activity in the flow, configure the Procurement Cloud invoke (Oracle ERP Cloud Adapter).
- Now configure the data mapping between the FTP and Procurement Cloud activities in the flow by clicking on the map icon and selecting the Edit option. In the mapper, drag and drop elements from source to target to map the FileReference, Properties -> filename and Properties -> directory elements from the FTP endpoint to the Procurement endpoint, as shown below. Click on Validate and Close.
- Now configure the business identifiers using the Tracking option. Configure schedule -> startTime as a tracking field.
- Save and close the flow, then activate it. Optionally, you may choose to enable trace logging for debugging purposes.
- Now that the flow is activated, click on the hamburger icon and click on Submit Now to manually submit the scheduler. Ideally you would schedule the flow at the frequency expected by the business.
- Go to Monitoring --> Tracking to track the instance that is triggered. Once the instance is successful, we can verify the scheduled job and the supplier data in Procurement Cloud. Let's see how to do this in the next section.
c. Verifying the supplier record created in Procurement Cloud
Log in to Procurement Cloud, then go to Navigator --> Tools --> Scheduled Processes and check for the scheduled job that was triggered for the supplier data sync. Once the scheduled job is successful, let's check the supplier data imported into Procurement Cloud: go to Navigator --> Procurement --> Suppliers, and from the task bar choose Search --> Advanced and enter "OIC Supplier%".
d. Generating a report and sending a callback using Oracle Integration ERP adapter capabilities (stay tuned, we will discuss this in the next blog)
Conclusion
You should now have a fair understanding of how to use Oracle Integration effectively to automate supplier data synchronization from an external third-party system to Oracle Procurement Cloud. We will be publishing additional blogs in this series to help you understand how a child process can be orchestrated, such as adding Supplier Sites information upon successful supplier creation.


Integration

OIC Integration with Oracle ATP

The power of the ATP (Autonomous Transaction Processing) Adapter in Oracle Integration (OIC).
The ATP Adapter is available behind a feature flag; please refer to the blog on feature flags. The Oracle ATP Adapter enables you to run stored procedures or SQL statements on Oracle ATP CS as part of an integration in Oracle Integration. Please note that you can also use the ATP Adapter to connect to Autonomous Data Warehouse.
The Oracle ATP Adapter provides the following benefits:
- You can invoke a stored procedure.
- You can run SQL statements.
- You can perform the following operations on a table: Insert, Update, Insert or Update (Merge), Select.
The Oracle ATP Adapter is one of many predefined adapters included with Oracle Integration. You can configure the Oracle ATP Adapter as a connection in an integration in Oracle Integration.
Connections define information about the instances of each configuration you are integrating. Oracle Integration includes a set of predefined adapters, which are the types of applications on which you can base your connections, such as Oracle Sales Cloud, Oracle Eloqua Cloud, Oracle RightNow Cloud, ATP CS, and so on.
A connection is based on an adapter. A connection includes the additional information required by the adapter to communicate with a specific instance of an application (this can be referred to as metadata or as connection details). For example, to create a connection to a specific RightNow Cloud application instance, you must select the Oracle RightNow adapter and then specify the WSDL URL, security policy, and security credentials to connect to it.
To create a connection to an Oracle ATP CS instance, you must select the Oracle ATP adapter and then specify the connection property given below:
1. Service Name: <<for example, myatp_tp; you can get the service name from the wallet, or your administrator can provide it>>
And specify the security properties given below:
1. Wallet: <<download the wallet from ATP and upload it here>> See the steps below to download the wallet from ATP.
2. Wallet Password: <<provide the wallet password>>
3. Database Service Username: <<provide the database service username>>
4. Database Service Password: <<provide the database service password>>
Please find a screenshot of the ATP connection below.
Steps to download the wallet from ATP: Oracle client credentials (wallet files) can be downloaded from Autonomous Transaction Processing by a service administrator. If you are not an Autonomous Transaction Processing administrator, your administrator should provide you with the client credentials. If you have administrator access, you can log in to the service console, go to Administration and click on "Download Client Credentials" to download the wallet. Refer here for detailed instructions.
Note: Wallet files, along with the database user ID and password, provide access to the data in your Autonomous Transaction Processing database. Store wallet files in a secure location. Share wallet files only with authorized users. If wallet files are transmitted in a way that might be accessed by unauthorized users (for example, over public email), transmit the wallet password separately and securely. Autonomous Transaction Processing uses strong password complexity rules for all users based on Oracle Cloud security standards.
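Since the service name comes from the wallet, one quick way to find it after downloading the client credentials is to look inside the wallet ZIP, which contains a tnsnames.ora file. A small sketch follows; the wallet file name is illustrative and the grep pattern simply lists the predefined service levels.

# The wallet ZIP downloaded from the ATP service console contains tnsnames.ora
unzip Wallet_myatp.zip -d wallet
# List the service names (e.g. myatp_tp, myatp_high, myatp_medium, myatp_low)
grep -oiE '^[a-z0-9_]+_(tp|high|medium|low)' wallet/tnsnames.ora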
OIC: Oracle Integration eliminates barriers between business applications through a combination of machine learning, embedded best-practice guidance, prebuilt integration, and process automation. Oracle Integration is unique in the market in leveraging Oracle application expertise to build an extensive library of adapters to Oracle and 3rd-party SaaS and on-premises applications, enabling you to deliver new business services faster.
ATP: Oracle Autonomous Transaction Processing delivers a self-driving, self-securing, self-repairing database service that can instantly scale to meet the demands of mission-critical applications.
The below screenshot demonstrates the use case of retrieving data from ATP CS and writing it to an FTP location.


Integration

A Simple Guide to Return Custom HTTP Error Response from REST based OIC Flows

The REST Adapter in the trigger (inbound) direction exposes an HTTP endpoint that HTTP clients can call with an HTTP request, and it returns an HTTP response. If successful, the REST Adapter returns a success response. Depending on the situation, the REST Adapter returns an error response with an HTTP status from the error family of codes. The possible causes and the REST Adapter responses are:
- Invalid client request (HTTP status 4xx): several conditions can cause client-side failures, including an invalid resource URL, incorrect query parameters, an unsupported method type, an unsupported media type, and bad data.
- Downstream processing errors (HTTP status 5xx): all other errors that can occur within the integration, including an invalid target, an HTTP error response, and general processing errors.
In addition, there are several situations where an integration developer wants to return a custom HTTP error response based on business logic. Let's take one such example and illustrate how this can be done easily within the orchestration flow. The REST Adapter provides very basic type validation out of the box. Any other validation, such as schema or semantic validation, is turned off because it has a significant performance overhead. This post demonstrates how integration developers can include validation logic and raise a fault with a custom fault code from within the orchestration flow. This fault is returned as an HTTP error response back to the client by the REST Adapter.
Overview: In our example, we have a REST-based trigger that takes a user input. The integration developer checks the user input and raises a fault, which is returned as an HTTP response with an error code back to the caller.
Step 1: Create a REST-based trigger that takes a JSON input and returns a JSON response.
Step 2: Include a switch statement to validate the input. This step can also be externalized in a separate validation service; this service could also be another integration flow.
Step 3: If the validation condition is not true, return an APIInvocationError. As illustrated below, the APIInvocationError is assigned specific values to reflect the validation failure. For example, APIInvocationError/errorCode is set to a value of 400. This ensures that the HTTP response is returned with error code 400 back to the caller. If this is not set, the fault is returned to the client as an HTTP error response with code 500 (Internal Server Error) by default. The error details section is reserved for the actual cause of the error. Since the integration developer is throwing this error, they can assign any appropriate value.
Let's review the runtime behavior of our sample integration:
1. Positive scenario: the required field is present. The integration completes successfully and returns an HTTP 200 response back to the client.
2. RequiredID is present but has an empty value: the integration returns an HTTP error response with status code 400 back to the client.
3. RequiredID is not present: the integration returns an HTTP error response with status code 400 back to the client.
The flow trace shows that the switch condition for invalid input was executed, the APIInvocationError was raised from there, and the HTTP error response was returned to the client.
In summary, using the steps mentioned above, an integration developer can easily send an HTTP error response from within the orchestration logic using a FaultReturn activity.
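To see the behavior from the client side, a quick way to exercise the scenarios above is with curl against the activated endpoint. Everything below is a hypothetical sketch: the endpoint URL pattern, the integration name, the field name requiredId and the error payload all depend on how you modeled and activated the integration.

# Positive case - required field present, expect HTTP 200
curl -s -o /dev/null -w "%{http_code}\n" -u "$OIC_USER:$OIC_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{ "requiredId": "12345" }' \
  "https://<oic-host>/ic/api/integration/v1/flows/rest/VALIDATE_INPUT/1.0/orders"

# Negative case - required field empty, expect the HTTP 400 raised via APIInvocationError
curl -s -w "\n%{http_code}\n" -u "$OIC_USER:$OIC_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{ "requiredId": "" }' \
  "https://<oic-host>/ic/api/integration/v1/flows/rest/VALIDATE_INPUT/1.0/orders"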


Integration

Moving SOA to Oracle Cloud Infrastructure

Many customers are running their workloads on Oracle Cloud Infrastructure Classic (OCI-C), but the new Oracle Cloud Infrastructure (OCI) offers compelling benefits, so customers should consider moving their workloads to the "Gen 2" cloud. Additionally, if the customer is not yet running SOA 12.2.1.3 or above, now is an ideal time to make the move. A SOA implementation is typically large and serves mission-critical requirements. This means that a "side-by-side" migration is the best approach. At a high level the process is as follows:
- Discover/map the existing OCI-C deployment. Oracle provides a set of tools to help in migrating workloads to OCI. You can learn more about this at Upgrade to Oracle Cloud Infrastructure.
- Branch your SOA projects: SOA projects can be deployed into a new environment and they will be upgraded on deployment. However, a better approach is to branch your version control and upgrade the projects in JDeveloper. You can then validate the projects to catch any potential issues.
- Prepare OCI to run PaaS: there are some prerequisites that need to be completed, which I talk about in Getting Ready to Run SOA on Oracle Cloud Infrastructure with Terraform.
- Provision the new SOACS environment on OCI.
- Migrate your WebLogic configurations: a great option for this, which I'll talk about in a bit more detail below, is Myst, by Rubicon Red.
- Deploy your composites into the new environment.
- Test your new deployments.
- Switch over your routing to the new services.
- Decommission your old environment on OCI-C.
WebLogic configurations can be quite extensive with SOA installations, from JCA adapters to JMS queues and more, and reconfiguring a new WebLogic instance can be quite involved. To make this process easier, Rubicon Red, an Oracle Platinum Partner, provides Myst, a cloud platform dedicated to managing SOA configurations throughout the lifecycle. Oracle customers can get a free trial at https://www.myst.cloud/
Once you sign up for the platform, you can quickly use it to connect to your existing SOA Suite Cloud Service instance and discover all of your configurations. Using Myst, you can then connect to your new environment and re-configure it to match your existing environment. Of course, if you need to change some of the details due to deltas in your new environment, you can do that as well. Myst will also help you maintain your configurations and catch any sort of configuration drift. There may be ways to automate this yourself with WLST and other mechanisms, but Myst provides a great platform to manage both SOACS and SOA Suite on-premises instances.
If you are running SOA Cloud Service on OCI-C, now is the time to move to OCI. If you are not yet running SOA Cloud Service 12.2.1.3 or above, then now is the time to upgrade, and if that means moving to OCI, you can solve both within one project, and Myst will help make this a smoother transition.


Oracle Integration App Dev Summit 2019 – Taking it to the Next Level

The strong relationship between Oracle Integration and third-party and Oracle applications is critical for providing our customers with a seamless integration experience. This week, Oracle development leaders and integration experts from approximately two dozen different Oracle Apps teams met at HQ in Redwood Shores, California to discuss the ongoing evolution, innovation, and solidification of this relationship. The primary goal was to leverage their individual expertise to take Oracle Integration to a deeper level than any other integration platform on the market today, all while focusing on the user experience and ease of use. Attendees dove deep into a variety of topics, including:
- Connectivity – we are constantly expanding our extensive library of application connectors. For example, we now support RPA connectors for UiPath and Automation Anywhere and are excited to announce more connectivity partnerships soon.
- Hybrid integration – we are here to make your integrations between on-premises applications and those in the cloud seamless.
- Process Automation – process automation capabilities are already built into Oracle Integration for your convenience.
- Integration patterns including batch, events-based, and SOAP-based – experience smooth integrations regardless of the pattern you need.
- Apps integration best practices – our experts are thought leaders when it comes to best practices both in integration and in their individual applications. This event provided the perfect opportunity to share best practices and learn across different product areas.
- Prebuilt recipes – we have a diverse array of pre-built integration recipes for Oracle Integration users to choose from. These recipes speed up and simplify integration development.
- The next-generation integration experience and future-facing functionalities – keep an eye out for future announcements. We are working on some very cool things!
App experts spanning a broad range of teams filled the room to maximum capacity, including developers from: Oracle ERP Cloud, Oracle Loyalty Cloud, Oracle Utilities Cloud, Oracle Hospitality, Oracle Field Service Cloud, Oracle Procurement Cloud, Oracle NetSuite, Oracle Product Lifecycle Management Cloud, Oracle Product Hub Cloud, Oracle PeopleSoft, Oracle Service Cloud, Oracle E-Business Suite, Oracle Engagement Cloud, Oracle Warehouse Management Cloud, Oracle Supply Chain Management Cloud, Oracle Logistics Cloud, Oracle Enterprise Performance Management Cloud, Oracle Taleo Cloud Service, Oracle Siebel CRM, and more!
With Oracle Integration, it is easy to connect applications and SaaS both within Oracle and from third parties. We offer connectivity with a wide range of applications for HCM connectivity, ERP connectivity, CX connectivity, social and productivity apps connectivity, technology connectivity, as well as connectivity for other types of apps and SaaS. You can see the full library here. Oracle Integration also provides an extensive and ever-expanding library of pre-built recipes to choose from. See all the options here.
We wrapped up the summit reflecting on lessons learned and the best next actions for supercharging Oracle Integration in the near future. We look forward to sharing these updates as they become available. Keep up with us on Twitter for all the latest announcements related to Oracle Integration.


Integration: Heart of the Digital Economy – New Podcasts Now Available

Authored by Steve Quan, Principal Product Marketing Director, Oracle
Mobile devices and AI technologies are rapidly changing the way customers interact with businesses. Some organizations are quickly assembling ad-hoc solutions to meet these challenges by writing custom code to hard-wire systems together. Without modern application integration, organizations end up building systems that look like a tangled mess of spaghetti on a plate, creating solutions that are costly to maintain and update. Two new podcasts in our six-part series, Integration: Heart of the Digital Economy, are now available in the Oracle Cloud Café.
Integration: Fuel for AI-Enabled Digital Assistants – Learn how National Pharmacies used modern cloud application integration tools to build AI-enabled digital assistants. These chatbots gave shoppers a seamless experience when shopping in the cloud and in the store, enabling the company to grow sales in a matter of weeks.
Creating Modern Applications with API-First Integration – Success in today's economy requires companies to adopt cloud applications or become extinct like some brick-and-mortar businesses - remember the Blockbuster video rental stores? These companies need a new class of applications to remain synchronized to a digital heartbeat. It requires combining internal and external software and services that are glued together through APIs. Listen to the third podcast in this series to learn how API Management helped the company create new solutions quickly to compete successfully in the digital age.
Learn more about Oracle's Application Integration Solution here. Learn more about Oracle's Data Integration Solution here. Dive into Oracle Cloud with a free trial available here.


My Monitor is Wider Than it is Tall - New Layouts in Integration Cloud

New Layouts in Integration Cloud
I am sure most of you have noticed that your monitor is wider than it is tall. To take advantage of this, we have new formats available in Integration Cloud to change the way we view orchestrations.
Vertical & Horizontal Layout
The first new feature is the ability to switch between vertical and horizontal layouts. If we have an orchestration, we can change it between horizontal and vertical layout using the Layout button at the top of the canvas, as shown below. Choosing Horizontal switches a vertical, down-the-screen layout to a horizontal, across-the-screen layout, as shown below.
New Views in Integration Cloud
In addition to the ability to switch between horizontal and vertical layouts, we now also support additional views of an orchestration. The view above is the traditional canvas view of the orchestration, but by selecting the different icons on the left at the top of the canvas we can switch to other views. The second icon from the left is the "Pseudo View", which adds pseudo-code to the canvas to help identify what each step in the orchestration is doing. Note that the invoke tells us the connection and connection type being used. The third icon from the left provides us with a "Code" view. This is not editable, but it allows us to see the actual orchestration code, which can be helpful at times to understand unexpected behaviors.
Summary
Integration Cloud now makes it easy to switch between horizontal and vertical layouts of an orchestration on the canvas. It also allows us to see annotations on the canvas using the "Pseudo View", which helps us understand what individual activities are doing. Both the "Canvas View" and the "Pseudo View" are editable; the "Code View" is not editable.


What is the Value of Robotic Process Automation in the Process Automation Space?

This blog originally appeared on LinkedIn; written by Eduardo Chiocconi.
During the last two decades, much of the Process Automation effort concentrated on using Business Process Management Systems (BPMS) as a means to document and digitize business processes. This technology wave helped the Process Automation space take a significant step forward. BPMS tools armed with integration capabilities allowed organizations (and their business and IT stakeholders) to visualize the processes they wanted to automate. From this initial business process documentation phase, it was possible to create a manageable digital asset to help "orchestrate" all business process steps regardless of their nature (people and systems).
Without exaggerating, most Process Automation (or Business Process Management) practitioners would agree that one of the hardest implementation areas is integrating with the systems of information that the business process needs to transact with. BPMS vendors offered a wide array of application integration capabilities, usually in the form of application adapters, to integrate with these enterprise and productivity applications. The more systems that needed to be integrated from the business process, the harder the implementation phase became. As much as we would like applications to enable all transactions via publicly available APIs, this is not the case, and it limits what integration service capabilities can do to integrate in an automated and headless manner.
Simplification in the integration space helps! New enterprise and productivity applications have started to invest early in Application Programming Interfaces (APIs). REST-based web services as an implementation mechanism and an API-first approach to offering application functionality certainly allow simpler consumption of application functionality, and by extension they simplify the hardest last mile of Process Automation implementation projects: integration. Integration vendors can leverage these APIs and offer a direct and easy way to transact against these applications.
But is this not enough? Well… if your business processes create logic around new SaaS applications, you may be lucky. But for many organizations (especially those that have gone down the path of mergers and acquisitions) it is not. Whether we like it or not, there are still many systems that are very hard to transact or interact with. This category of applications includes mainframe systems and homegrown as well as packaged enterprise applications. It also includes any kind of application that has undergone some kind of customization where that functionality is only available through the application user interface (UI).
Robotic Process Automation (RPA): the new kid on the block! What exactly is Robotic Process Automation? This question may have many different answers. But to me, RPA offers a new mechanism to integrate and transact against applications using the same UI that their users use. Via this non-intrusive approach, it is possible to interact with the application as if it were done by a person, but rather than a person doing the clicks and entering the data, it is an automated application that we call a robot. Period!
Why do we talk about RPA in the context of Process Automation? My first observation is that these two technologies are not the same. Secondly, if you combine them to work together, it is possible to take Process Automation to the next level, as RPA offers new ways to integrate with systems of record that could not be integrated before.
The simplicity of the way RPA transacts with applications also offers a first step of automation while a more robust and throughput-optimal adapter or API approach is put in place. But let's drill down one level and review two important use cases. From a Process Automation top-down point of view, we can sum them up as follows: Use Case #1: Use robots to replace repetitive, non-value-added human interactions. This use case aims to reduce unnecessary human touch points. In this scenario, it is possible to streamline the business process, since robots can execute these tasks without errors, following the same procedure over and over again. Moreover, robots use the input data as given and avoid the "fat finger" issues that come from humans accidentally mistyping it. It is worth applying some caution with this use case, as robots cannot replace necessary human decision intelligence and know-how. In those situations, we are better off relying on human discretion and judgment, as it makes the process better. In the end, not all process steps can be fully automated without human touch points! Use Case #2: Use robots to prototype integrations, as well as to integrate with applications when no headless integration approach (for example, an API or adapter) is available. Leveraging RPA as "another" integration mechanism offers new ways to transact against applications beyond the ones known to the market to date. How do we bring more value by combining orchestration with Robotic Process Automation? As described throughout this blog, RPA offers "another" way to integrate with systems of record, complementing the existing adapter and API mechanisms offered by integration platforms. If we agree that integration is one of the hardest Process Automation implementation tasks to nail, then having another tool in our toolset definitely helps! While RPA may not be a silver bullet, it does make Process Automation better and offers a way to better digitize and automate your business processes. If you are using RPA in the context of your Process Automation efforts, I would like to hear your thoughts.


Make Orchestration Better with RPA

This article originally appeared on LinkedIn; written by Eduardo Chiocconi. Nobody can deny that, when used correctly, RPA has the potential to provide a great ROI, especially in situations where we are trying to automate manual, non-value-added tasks, or where it is used as a mechanism to integrate with systems of information that offer no headless way to interact with them (for example, no APIs or adapters if you are using an integration broker tool). I would like to start this article with a simple example. Imagine for a second an approval business process where a Statement of Work (SOW) needs to be approved by several individuals within an organization (a consulting manager to properly staff the project, a finance manager to make sure the project is viable). Once the approvals are done, the SOW should be uploaded and associated to an opportunity in this company's CRM application (where all customer information is centrally located). At the core of this business process there is an orchestration that coordinates people approvals and should also integrate with the CRM application to upload the SOW to the customer opportunity. The diagram below illustrates the happy path of this orchestration using BPMN as the modeling notation for this business process (screenshot from Oracle Integration Cloud - Process). Process Automation tools can easily manage the human factor of these orchestrations. Different tools manage integration to applications differently, and depending on the integrated system, the task of transacting against it may be simple, complex, or at times not possible at all. If we take a closer look at the step in which we need to upload the SOW document to the opportunity, we have the following options: Option a) If the CRM application has an API that allows uploading documents and linking them directly to an opportunity, then this transaction can be invoked from the orchestrating business process and automated in a headless manner. When available, this is the preferred way, as it is more scalable and does not come with the overhead of transacting via the application user interface. Option b) If the CRM application does not have an API (or any other headless way to transact with it), the chances of automation are at stake. The immediate option is to ask a human (such as an admin) to do the work. The orchestrating business process can route a task to the admin, and this person can get the SOW file, connect to the CRM application via its user interface, find the opportunity, and upload the SOW document to it. Not only is this highly manual, repetitive, and, to be honest, of no value to the organization, it is also at the mercy of the admin having the bandwidth to perform this task (and hopefully not attaching the document to the wrong opportunity). But are these the only two options? Is there a middle ground between option a) and option b)? As a matter of fact, YES! And the answer is Robotic Process Automation. The work that the admin performs can be captured within an RPA process and, via the RPA vendor's APIs, be invoked when the flow reaches that step in the process (Upload SOW to Opportunity). Now a robot will perform the admin's work (which was not really needed in the first place, as it was only requested due to the lack of integration alternatives). More importantly, it will be done the same way over and over again, at any time (even after working hours).
Because the robot is configured and scripted to do certain work, it is not necessary to train people on how to perform this transaction against the CRM application. This automation via RPA allows the consulting company to close and share the SOW faster with its customers. While the RPA process may need to interact with the application via its user interface, and the RPA script may be sensitive to UI changes, it is definitely a better option than waiting for a person to do the work manually! The screenshot below outlines who now performs the different steps of the process. Great! As we combine people, robots and services, we are creating a digital workforce that performs business processes optimally. But wait! Can RPA automate this process end to end? In reality, it CANNOT! And this takes me to the second part of this write-up, where I make the case for always having an orchestration coordinate the work of people, robots and service calls to systems. Important people decisions cannot be automated: While in this example it is possible to look for conditions where the consulting and finance managers may not need to approve the SOW, there will always be cases in which a person's decision and discretion are needed. This reason alone makes the case for an orchestration tool with human task interactions to be in the loop, as RPA solutions do not manage the workflow and human element. Orchestration helps recover from discrete action failures: One of the main functions of an orchestrator is to decide when to move on to the next step in the orchestrated flow. This happens ONLY when a step has been successfully completed. If it fails, the flow stays there until the step can be performed without problems. Orchestration tools are built from the ground up with these capabilities, handling exceptions, failures and retry logic so that the orchestration developer does not need to deal with these details when you get off the beaten happy path. RPA scripting cannot be considered an orchestration technology: For robots to be resilient, all the exception handling logic would need to be coded within the RPA process script itself, likely leading to spaghetti code that is hard to maintain and understand. Bottom line (and the case I am making): you will be better off coordinating small, discrete RPA process executions through an orchestration technology. RPA processes should be simple and discrete in what they do. If they fail, let the problem and error management logic be handled by the orchestration layer. The RPA process can always be retried and, if it keeps failing, be delegated to a person who will be alerted via a central monitoring location along with the rest of the integration and orchestration services. Orchestration with RPA makes orchestration better. RPA with orchestration makes RPA better. One leading orchestration tool is Oracle Integration Cloud. If you are looking to scale your orchestration or RPA efforts, I hope you find this example and lessons learned useful.


The Power of High Availability Connectivity Agent

High Availability with Oracle Integration Connectivity Agent
You want your systems to be resilient to failure, and within Integration Cloud, Oracle takes care to ensure that there is always redundancy in the cloud-based components so that your integrations continue to run despite potential failures of hardware or software. However, the connectivity agent was a singleton until recently. That is no longer the case, and you can now run more than one agent in an agent group.
Of Connections, Agent Groups & Agents
An agent is a software component installed on your local system that "phones home" to Integration Cloud to allow message transfer between cloud and local systems without opening any firewalls. Agents are assigned to agent groups, which are logical groupings of agents. A connection may make use of an agent group to gain access to local resources. As of March 2019, this feature is available on all OIC instances. It provides an HA solution for the agent: if one agent fails, the other continues to process messages.
Agent Networking
Agents require access to Integration Cloud using HTTPS; note that the agent may need to use a proxy to reach Integration Cloud. This allows them to check for messages to be delivered from the cloud to local systems or vice versa. When using multiple agents in an agent group, it is important that all agents in the group can access the same resources across the network. Failure to do so can cause unexpected message failures.
High Availability
When running two agents in a group, they process messages in an active-active model. All agents in the group will process messages, but any given message will only be processed by a single agent. This provides both high availability and potentially improved throughput.
Conclusion
If resiliency is important, then the HA agent group provides a reliable on-premises connectivity solution.
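To make the idea concrete, here is a minimal, hypothetical sketch of what the installer profiles for two agents sharing one agent group could look like. The file name and property names below are assumptions used only for illustration; use the exact installer profile documented for your agent version.

# Hypothetical sketch - property names and the InstallerProfile.cfg file name are assumptions, not verified.
# Host A - InstallerProfile.cfg
oic_URL=https://<your-oic-instance>:443
agent_GROUP_IDENTIFIER=ONPREM_AGENT_GROUP

# Host B - InstallerProfile.cfg (same group, different machine)
oic_URL=https://<your-oic-instance>:443
agent_GROUP_IDENTIFIER=ONPREM_AGENT_GROUP

Because both agents register with the same agent group, any connection that uses that group can be served by either agent, which is what gives the active-active behavior described above.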


Sending OIC notifications from an email address of your choice

Most avid OIC users are aware that OIC notifications, whether system status reports or integration notifications, are sent out from an Oracle address, i.e. no-reply@oracle.com. But with the latest enhancements, OIC gives users the flexibility to choose the sender for these notifications. OIC achieves this by providing a simple and intuitive UI where a user can easily add a list of email addresses, which can later be approved to qualify as Approved Senders in OIC. Let's see how we can do this in a few simple steps: Navigate to the Notifications page. Here, you will see a table where you can add the list of email addresses that you want to register as Approved Senders with OIC. When you click the add button (plus sign) in the bottom right corner of the page, a new row is added to the table where you can enter an email address. You can also choose one of the email addresses for sending System Notifications, such as status reports on successful message rates, service failure alerts, etc. You can do this by checking the box corresponding to the email address of your choice. Please note that you can only choose one email address for sending System Notifications. When you are done entering the list of email addresses, click Save. Upon saving, a confirmation e-mail is sent out to each of the email addresses in the list, and the Approval Status is changed to reflect this. The recipient of the email is then required to confirm the email address by clicking the confirmation link in the mail. A sample snippet of the confirmation email is pasted below. Upon confirmation, the Approval Status is changed to Approved. (To refresh the approval status, please use the refresh button in the top left corner of the section.) Congratulations! You have an approved sender registered in OIC. You can now use this approved sender in the From Address section of the Notification action in the integration orchestration canvas, as depicted below. In addition to this, you can also choose this Approved Sender for sending System Notifications. Please note: if a registered email address is still "Waiting for User Confirmation" and the user uses it in the Notification action or chooses it to send system notifications, the sender will default to no-reply@oracle.com. Hope this blog was able to shed some light on how OIC is helping users manage their notifications better, whether by providing the ability to register any number of email addresses, delete a previously approved email address from the list of approved senders, or change the primary sender of System Notifications any number of times. Hope you have fun incorporating this feature into your use cases!


See how easily you can switch your integration views

In OIC, we spend most of our time building integrations. Previously, when you viewed or edited an integration in the editor, it was shown only in a vertical layout. Now, you can view/edit the integration in several ways: Canvas view - Vertical: displays the integration vertically; Horizontal: displays the integration horizontally. Pseudo view: displays the integration vertically with child nodes indented, with details about each node displayed to the right. In addition to the above, you can also view the integration in outline style. You will need to enable the "oic.ics.console.integration.layout" feature flag to enjoy this feature. Note: As of the October 2019 release of OIC, this feature is publicly available and you no longer need to enable the feature flag. The above diagram shows how to select the different views and what the integration looks like in the vertical layout. Canvas view: Canvas view allows you to select the layout. There are two options: Vertical, the default view mode, in which the integration is shown vertically; and Horizontal, which you can switch to while in Canvas view so that the integration is shown horizontally. Pseudo view: In this view the integration is shown vertically with indented child nodes, and the details of each node are shown alongside it. This helps you easily understand the integration without needing to drill down into each node to see the details! You can use the inline menu to add new nodes/actions. In this view mode, you won't be able to change the orientation of the nodes, but you can reposition them, e.g. moving an Assign inside a Switch node. Hope you find this feature helpful. Enjoy the different integration views!


Oracle OpenWorld 2018 Highlights

With another Oracle OpenWorld in the books, we want to take a moment to reflect on some of this year's highlights. First, let us start by thanking those who make OOW the success that it is: our incredible customers and partners. Your stories inspire us every day and we are so glad to have been able to share them with thousands of attendees at OpenWorld. Thank you to our customer and partner speakers and panelists: Erik Dvergsnes (Aker BP), Michael Morales (Quality Metrics Partners), Lonneke Dikmans (eProseed Europe), Patrick McMahon (Regional Transportation District), Steven Tremblay (Graco Inc), Wade Quale (Graco Inc.), Suresh Sharma (Cognizant Technology Solutions Corporation), Sandeep Singh (GE), David VanWiggeren (Drop Tank), Deepak Kakar (Western Digital), Timothy Lomax (Mitsubishi Electric Automation), Candace McAvaney (Minnesota Power), Mark Harrison (Eaton Corp), Awais Bajwa (GE), Nishi Deokule (GetResource Inc), Murali Palanisamy (DXC Technology), Bhavnesh Patel (UHG Optum Services Inc.), Biswajit Dhar (Unitedhealth Group Incorporated), Karl Jonsson (Reinhart), Lakshmi Pavuluri (The Wonderful Company), Eric Doty (Greenworks Tools), Susan Gorecki (American Red Cross), Timothy Dickson (Laureate Education), Marc Murphy (Atlatl Software), Chad Ulland (Minnkota Power Cooperative), Amit Patanjali (ICU Medical), Rajendra Bhide (GE), Jonathan Hult (Mythics), Wilson Farrar (UiPath), Xander van Rooijen (Rabobank), Kevin King (AVIO Consulting), Milind Joshi (WorkSpan), Palash Kundu (Achaogen), Simon Haslam (eProseed UK), Matthew Gilbride (Skanska), Chris Maggiulli (Latham Pool Products), Duane Debique (Sinclair Broadcast Group), Ravi Gade (Calix, Inc), and Jagadish Manchikanti (Tupperware). We would also like to congratulate our 2018 Oracle Cloud Platform Innovation Award winners: Drop Tank, Ministry of Interior Turkey, The Co-operative Group, and Meliá Hotels International. Their innovation journeys were truly inspiring! More than anything, #OOW18 was about innovation, sharing our customers' successes, and Oracle's strategy and vision! This year's OOW was abuzz with 60,000 customers and partners from 175 countries and 19 million virtual attendees. We had 50+ sessions on integration, process and APIs, taking center stage even in SaaS sessions for ERP, HCM and CX cloud. This year was all about using integration to mobilize digital transformation, looking at areas like API-led integration and innovation with Robotic Process Automation, IoT, AI, blockchain, and machine learning. Connecting with customers and partners is always a top highlight of OpenWorld. This year, Oracle VP of Product Management, Vikas Anand, had a chance to connect with UiPath's Brent Haley. Take a look to hear a bit about how we are bringing AI and RPA into Oracle's Integration Platform and more. Before OpenWorld, we shared a few of our most buzzed-about sessions with you. No matter which sessions you were able to attend, we hope you found them informative and left OOW with fresh knowledge and inspiration. And as always, executive keynotes were a major highlight of OOW. In case you missed any, you can catch them here: Cloud Generation 2: Larry Ellison Keynote at Oracle OpenWorld 2018; Accelerating Growth in the Cloud: Mark Hurd Keynote at Oracle OpenWorld 2018. With help from our customers and partners, #OOW18 was a smash hit! We cannot wait to see what next year will bring.


Integration

How to migrate from ICS to OIC?

In this blog I'd like to show you how to migrate metadata from an ICS (Integration Cloud Service) instance to an OIC (Oracle Integration Cloud) instance. Metadata that will be migrated includes the following: Integrations, Connections, Lookups, Libraries, Packages, Agent Groups, Custom Adapters, etc. Integrations in any state (in-progress, activated, etc.) will be migrated, as will all resources such as Lookups and Connections that are not referenced by integrations. Also migrated: endpoint configuration (configured in connections), certificates, credentials stored in the CSF store, and settings such as Database and Notification. The migration tool automates some of the tasks that otherwise have to be done manually when using manual export and import: bulk export of all integrations along with their dependencies (such as Connections, Lookups, etc.) into a migration package; migration of endpoint configuration and credentials; automatic replacement of host/port from the source ICS instance to the target OIC instance for "Integration calling Integration" use cases; automatic "Test Connection"; and automatic activation of previously activated integrations.
Enabling Migration in OIC
A feature flag has to be enabled in OIC to import content into OIC as part of the migration. To turn on the feature flag, open a Service Request with Oracle support.
Migration Lifecycle
High-level steps that need to be performed for the migration: Create an object storage bucket in the underlying Oracle Cloud Infrastructure environment (if the migration target is OIC autonomous); this is needed to transfer the migration package between ICS and OIC. Check this link for detailed steps on how to create a storage bucket. Once that step is completed, use the storage URL and storage credentials to invoke the export REST API within the ICS environment. This copies the data from ICS into the storage service. Invoke a REST API to check the status of the export operation if needed. Information on what objects were exported, and any errors or warnings raised as part of the migration, can be retrieved from a migration report. Then perform the import operation in the OIC environment, passing the storage URL and storage credentials. This imports the content from storage into OIC. Invoke a REST API to check the status of the import operation if needed. Information on what objects were imported, and any errors or warnings raised as part of the migration, can be retrieved from the migration report.
Exporting the data from ICS
Export the data from an ICS environment using the steps below (please see the section "Exporting the data from OIC" for exporting from OIC). Using administrator access, execute the export REST API. A sample is shown below using the Postman REST client. Export Request: Construct the storage URL based on the configuration done within the storage service, using the format "https://swiftobjectstorage.region.oraclecloud.com/v1/tenancy/bucket", and pass the storage credentials as well. Check this link for more details on creating a storage bucket. Response: Checking status: Checking the migration archive:
Importing the data into OIC
Import the data into an OIC environment using the steps below. The migration utility supports different modes for the import process (importActivateMode):
1. ImportOnly - only imports the objects and does not activate integrations; used when a manual operation needs to be performed, such as adapter agent installation.
2. ImportActivate - imports and activates all previously activated integrations.
3. ActivateOnly - only activates previously activated integrations.
Using administrator access, execute the import REST API. A sample is shown below using the Postman REST client. ImportOnly Request: Construct the storage URL based on the configuration done within the storage service, using the format "https://swiftobjectstorage.region.oraclecloud.com/v1/tenancy/bucket", and pass the storage credentials as well. ImportActivate Request: ActivateOnly Request: Response: Checking the import status: Note: the jobId returned in the payload of the import request is passed in as part of the resource; in the example below the jobId is "405".
Checking the migration report
The result of the migration import process can be checked using the steps below. Migration report location: Sample report:
Exporting the data from OIC
Export the data from an OIC environment using the steps below. Using administrator access, execute the export REST API. A sample is shown below using the Postman REST client. Export Request: Export Response: Checking status:
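Since the Postman request screenshots from the original post are not reproduced here, the sketch below shows roughly what an export call could look like from the command line. The resource path and payload field names are placeholders and assumptions, not confirmed API details (take the exact values from the Oracle Integration migration documentation); only the storage URL format comes from the steps above.

# Hypothetical sketch: <export-api-path> and the payload field names are placeholders.
curl -X POST "https://<ics-host>/<export-api-path>" \
  -u "<administrator-user>:<password>" \
  -H "Content-Type: application/json" \
  -d '{
        "storageInfo": {
          "storageUrl": "https://swiftobjectstorage.<region>.oraclecloud.com/v1/<tenancy>/<bucket>",
          "storageUser": "<storage-user>",
          "storagePassword": "<storage-password>"
        }
      }'

The import call on the OIC side follows the same shape, passing the same storage coordinates plus the importActivateMode value described above.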


Integration

This is the SOA session you were looking for

I often hear from customers that we don't have enough SOA Suite on-premises sessions at Oracle OpenWorld. Well, here is an exciting SOA session with three amazing speakers who will share their adventures with Oracle SOA Suite: Candace McAvaney from ALLETE/Minnesota Power, Mark Harrison from Eaton Corporation and Awais Bajwa from GE Digital. Candace McAvaney is the senior Enterprise Application Integration architect/developer at ALLETE/Minnesota Power. In her more than 10 years of development using the SOA framework, she has implemented nearly 100 interfaces between applications including the Oracle Customer Care and Billing application, IBM Maximo Work Management application, Oracle Enterprise Business System, GE Outage Management System, Sensus Automatic Metering/Smart Grid system, and Accruent Mobile Workforce application. She has presented sessions at the Oracle SOA Customer Advisory Board and the Minnesota Fusion Middleware User's Group. During the OOW panel, Candace will provide insight into ALLETE/Minnesota Power's SOA Suite history, the reasoning behind selecting Oracle SOA Suite, their integration guidelines and best practices, and what components they have been using. She will also spend a few minutes on their plans for a hybrid platform spanning cloud and on premises. Mark Harrison just completed 20 years' service with Eaton Corporation. He initially worked as an Oracle Applications DBA and DBA Manager, and for the past 6 years has been working within the integration space as Eaton's Information Integration Manager. He manages a virtual team of professionals (70+) who are responsible for the end-to-end business integration of systems across Eaton, business partners and customers (internal/external). Amongst other technologies, they utilize Oracle SOA Suite, Oracle Data Integration, Oracle OAG and MFT. Mark is responsible for integration strategy globally, project delivery (PM certified), enterprise integration support, vendor management and budget planning/tracking. The focus of Mark's presentation will be the use of Oracle SOA 12c within a traditional manufacturing environment at Eaton, from the use case to the implementation, best practices, lessons learned and future plans. Awais Bajwa is an enterprise IT leader (integration & cloud technologies) for GE Digital. Awais started his career as a Java expert and is now a seasoned professional with 18 years in the IT industry. He has been focusing exclusively on Oracle technologies, integrations and Oracle ERPs for the last 11 years. He specializes in Oracle cloud PaaS and iPaaS platforms including OIC/ICS/PCS, traditional on-premises SOA, OCI and ERP integrations. Awais has extensive experience in rolling out large-scale global Oracle program initiatives in different parts of the world including North America, the Middle East and Asia. He is also passionate about Microservices Architecture, Event Driven Architecture, API-oriented integrations and Machine Learning. At GE Digital, he drives the strategy and roadmap for Oracle integration technologies and cloud adoption, founded in these modern architecture practices. Awais holds a master's in computer science from Al-Khair University and is originally from Lahore, Pakistan. Awais will discuss GE's SOA Suite use cases, the business value, their architectural governance for design time and runtime, and how they handle automation and monitoring. He will also go into detail on best practices and lessons learned and the implementation of a business use case leveraging an API-based approach.
Finally, he will give a short overview of their next generation Oracle SOA and hybrid cloud roadmap and strategy, which includes SOA, BPM, SOA CS and OIC. Please also check our Focus on Document for more integration sessions and don't forget to visit us at the demo grounds in Moscone South.


Integration

Don't miss your chance to get your hands on Oracle Integration Cloud

As the countdown to Oracle OpenWorld continues, attendees are building their schedules, adding keynotes and the sessions that seem most interesting to them. But OpenWorld also provides the unique chance to get your hands on our products and work through labs with the assistance of the Product Managers and engineers who built the product. The two hands-on labs for Oracle Integration Cloud (OIC) will introduce you to the Integration, Process and Insight features of OIC and teach you how to build, test and run a simple end-to-end use case. Of course there will also be an opportunity to ask questions and get to know the OIC team. You are invited to join Antony Reynolds, Nathan Angstadt and me for this hour of learning and fun: Extending and Connecting Applications with Oracle Integration Cloud [HOL6298] Wednesday, Oct 24, 11:15 a.m. - 12:15 p.m. | Marriott Marquis (Yerba Buena Level) - Salon 5/6 Enhance your CX Applications with Oracle Integration Cloud [HOL6299] Thursday, Oct 25, 10:30 a.m. - 11:30 a.m. | Marriott Marquis (Yerba Buena Level) - Salon 5/6 If you don't have time to attend the labs, or would like to get more information on Oracle Integration Cloud Platform and learn about additional use cases, I recommend you visit the demo grounds in Moscone South. We will be there from Monday morning until Wednesday evening with demos on Integration Cloud, API Platform, Self Service Integration (SSI), Robotic Process Automation (RPA), SOA Cloud Service, B2B and Managed File Transfer (MFT). As always, you can find more information about our sessions, hands-on labs and demos in the Focus on Document.


Integration

A simple guide to using nested scopes in orchestration

Wish you could have nested scopes in orchestration? Now you can use nested scopes in OIC integrations! In this short blog, I will show you how to use nested scopes in your orchestration. Scope activities allow users to group other child activities, which have their own variables, fault handlers and event handlers. Create or edit an integration. Drag a Scope activity onto the canvas or use the inline menu. Upon dropping the Scope you will be prompted to enter a Name and a Description (optional). Upon clicking Create, the scope will be added to the canvas. Scope activities can also contain other scope activities; this is referred to as nested scopes. This provides a more sophisticated way of organizing or separating activities into a subsection of the flow. Drag a Scope activity inside the other Scope activity. Upon dropping the Scope you will again be prompted to enter a Name and a Description (optional). Upon clicking Create, the scope will be added to the canvas inside the other scope. A nested scope behaves the same way as a basic scope: it provides its own container of child activities and fault handlers. There is no limitation to the levels of nesting; even a scope's fault handlers can have nested scopes. Hope you enjoy the nested scope feature!


Integration

How to invoke an Integration from another Integration in OIC without creating a connection

In this blog, I am going to show you how to use Oracle Integration Cloud Service's 'Local Integration' feature to invoke an integration from another integration. With the advent of this new feature, you don't need to create any explicit connection for the integration you want to call. To utilize this feature, you will need to turn on the "oic.ics.console.integration.invoke.local.integration" feature flag. We will create a new integration, "Invoke Hello World", to call the "Hello World" integration. The "Hello World" integration is delivered with OIC as a sample. For more info on the "Hello World" sample see: https://docs.oracle.com/en/cloud/paas/integration-cloud-service/icsug/running-hello-world-sample.html First activate the Hello World integration. Then follow the steps below to create the "Invoke Hello World" integration. From the Integration list page, click "Create", select "App Driven Orchestration", provide the name "Invoke Hello World", and create the integration. We will create a REST trigger which takes name and email as parameters. To do that, drag and drop "Sample REST Endpoint Interface" as the trigger (or use the inline menu to add it) and follow through the wizard. The "Sample REST Endpoint Interface" connection should already be in your system. Configure the Tracking Field: add Name as the tracking field. For more info on tracking see: https://docs.oracle.com/en/cloud/paas/integration-cloud-service/icsug/assigning-business-identifiers.html Drag and Drop Local Integration: Click on Integration Artifacts, click on Business Integrations, and then drag and drop "Local Integration" onto the integration after the REST trigger (getNameAndEmail). This will bring up the Local Integration wizard. Provide the details and click Next. This page shows the list of all the activated integrations that you can invoke; you can type the integration name to filter the list. Select "Hello World (1.2.0)" and click Next. Select the Operation and click Next. In the Summary screen, click Done. Now edit the "CallHelloWorld" map to map the name and email. Configure/edit the "getNameAndEmail" map. Now save and close the integration. From the landing page, activate the integration; also enable Tracing and Payload during activation. Run the integration using the Endpoint URL; you can paste the URL in a browser and run it (a sample command-line invocation is sketched at the end of this post). It will be similar to https://host/ic/api/integration/v1/flows/rest/INVOKE_HELLO_WORLD/1.0/info?name=[name-value]&email=[email-value] Go to Monitoring->Tracking to monitor the integration run. You will see that the Hello World integration was successfully called from the Invoke Hello World integration. You can also go to the Hello World instance from this page: click on the "CallHelloWorld" local integration invoke and select the "Go to Local Integration instance.." icon. It will show you a popup; click "Go" to see the Hello World instance, which will open in a new tab. This functionality is only applicable to REST- or SOAP-based invokes and doesn't apply to Scheduled Orchestrations. How to Invoke a Scheduled Orchestration: You can also invoke a scheduled orchestration from another integration. You can only call the scheduled orchestration as "Submit now". I will create a new integration, "Invoke File Transfer", which will call the "File Transfer Sample". For more info on the File Transfer Sample, see https://docs.oracle.com/en/cloud/paas/integration-cloud-service/icsug/running-file-transfer-sample.html First activate the "File Transfer Sample".
Then create a scheduled orchestration "Invoke File Transfer" and drag and drop the Local Integration. Go through the wizard and select the "File Transfer Sample" as the local integration and "runNow" as the Operation (see "Drag and Drop Local Integration" above for details). Now edit the "CallFileTransfer" map. In the mapper, click on the action to go to the "Build Mapping" screen and enter "NOW". This is the important step for running a scheduled orchestration. Configure tracking, then save and close the integration. Activate "Invoke File Transfer" and, from the menu, click Submit Now to run the integration. From the Monitoring→Runs page you can see the runs. As you can see from the screenshot below, "Invoke File Transfer" ran and in turn called the File Transfer Sample. In this blog, we have learned how to use the "Local Integration" feature to call another integration. It is a good practice to break a large integration into multiple smaller integrations using this pattern, which promotes better design and provides modular functionality for easier maintainability.
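As a small illustration of the "Run the integration using the Endpoint URL" step above, the same call can be made from the command line. This is only a sketch based on the URL pattern shown earlier; the host, credentials, and parameter values are placeholders, and basic authentication is an assumption (use whatever authentication your OIC instance requires):

# Sketch only: host, user, password and parameter values are placeholders.
curl -u "<oic-user>:<oic-password>" \
  "https://<oic-host>/ic/api/integration/v1/flows/rest/INVOKE_HELLO_WORLD/1.0/info?name=Jane&email=jane@example.com"

The Monitoring->Tracking page should then show one "Invoke Hello World" instance and the "Hello World" instance it called.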


Eight #OOW18 Integration Cloud Sessions You Won't Want to Miss

With Oracle OpenWorld 2018 just two weeks away, you are probably ready to get busy building your schedule and looking forward to packed days in San Francisco full of fun, learning, and networking. If you’re curious about Oracle Integration Cloud, we have some great sessions for you to check out. Make sure to take a look at the Focus On: App Integration guide to make the most of your time at #OOW18. Coming up this week on the blog, we will continue to highlight #OOW18 iPaaS sessions that you'll definitely want to make the time for. From Roadmap to How-To, you will find just what you’re looking for in iPaaS knowledge. Monday, Oct. 22nd, 2018 Oracle Cloud Platform Strategy and Roadmap [PKN5769] Time: 9:00 a.m. - 9:45 a.m. Location: Yerba Buena Center for the Arts (YBCA) Theater Speaker: Amit Zavery, Executive Vice President, Fusion Middleware and PaaS Development, Oracle But why should I attend?? Join this session to learn about the strategy and vision for Oracle’s comprehensive and autonomous PaaS solutions. See demonstrations of some of the new and autonomous capabilities built into Oracle Cloud Platform including a trust fabric and data science platform. Hear how Oracle’s application development, integration, systems management, and security solutions leverage artificial intelligence to drive cost savings and operational efficiency for hybrid and multi-cloud ecosystems. AI-Powered Oracle Autonomous Integration Cloud and Oracle API Platform Cloud Service [PRO6176] Time: 12:30 p.m. - 1:15 p.m. Location: Moscone West - Room 2002 Speakers: Vikas Anand, VP, Product Management, Oracle; Susan Gorecki, Information Technology Sr. Director, American Red Cross; Kevin King, Consultant / Contractor, AVIO Consulting, LLC But why should I attend?? Join this session to learn how to best leverage Oracle’s recent innovations within Oracle Autonomous Integration Cloud, Oracle API Platform Cloud Service, process automation, and robotic process automation. Learn about the most exciting artificial intelligence and machine learning integration innovations today and how you can leverage them to jumpstart tomorrow’s digital transformation. Tuesday, Oct. 23rd, 2018 Oracle Cloud: Modernize and Innovate on Your Journey to the Cloud [GEN1229] Time: Tuesday, Oct 23, 12:30 p.m. - 1:15 p.m. Location: Moscone West - Room 2002 Speakers: Steve Daheb, Senior Vice President, Oracle Cloud, Oracle; Erik Dvergsnes, Architect, Aker BP; Michael Morales, CEO/Managing Partner, Quality Metrics Partners But why should I attend?? In this headliner session, you’ll learn how to manage conflicting mandates: modernize, innovate, AND reduce costs. The right cloud platform can address all three, but migrating isn’t always as easy as it sounds because everyone’s needs are unique, and cookie-cutter approaches just don’t work. Learn how the Oracle Autonomous Cloud Platform automatically repairs, secures, and drives itself, allowing you to reduce cost and risk while at the same time delivering greater insights and innovation for your organization. In this session learn from colleagues who found success building their own unique paths to the cloud. The Future of Integration Is Autonomous with Machine Learning and Artificial Intelligence [TIP1372] Time: 5:45 p.m. - 6:30 p.m.
Location: Moscone West - Room 2004 Speakers: Daryl Eicher, Sr Director Product Marketing Oracle Autonomous Integration Cloud, Oracle; Bruce Tierney, Director Product Marketing Oracle Autonomous Integration Cloud, Oracle; Jagadish Manchikanti, IT Director, Tupperware But why should I attend?? Self-defining integrations take the hassle out of connecting SaaS and on-premises systems by providing prebuilt adapters and machine learning–powered mapping recommendations. In this session learn how autonomous integration, including decision modeling and other cool technologies, transforms IT for speed to revenue and conversational AI wins. Come see what the future of autonomous integration looks like now! Wednesday, Oct 24th, 2018 The Next Big Things for Oracle's Autonomous Cloud Platform [PKN5770] Time: 11:15 a.m. - 12:00 p.m. Location: The Exchange @ Moscone South - The Arena Speaker: Amit Zavery, Executive Vice President, Fusion Middleware and PaaS Development, Oracle But why should I attend?? Attend this session to learn about cutting-edge solutions that Oracle is developing for its autonomous cloud platform. With pervasive machine learning embedded into all Oracle PaaS offerings, see the most exciting capabilities Oracle is developing including speech-based analytics, trust fabric, automated application development (leveraging AR and VR), and digital assistants. Find out how Oracle is innovating to bring you transformational PaaS solutions that will enhance productivity, lower costs, and accelerate innovation across your enterprise. Oracle Integration Cloud Best Practices Panel: Transforming to Hybrid Cloud [CAS5215] Time: 12:30 p.m. - 1:15 p.m. Location: Marriott Marquis (Golden Gate Level) - Golden Gate C3 Speakers: Rajendra Bhide, IT Director, GE; Amit Patanjali, Oracle Solution Architect, ICU Medical; Chad Ulland, Software Development Supervisor, Minnkota Power Cooperative, Inc. But why should I attend?? In this session, you will get tips and tricks from Oracle Integration Cloud customers as they share expertise on how to move from on-premises deployment to a hybrid on-premises and cloud integration solution. Thursday, Oct 25th, 2018 Antipatterns for Integration: Common Pitfalls [PRO6175] Time: 1:00 p.m. - 1:45 p.m. Location: Moscone West - Room 3022 Speakers: Vikas Anand, VP, Product Management, Oracle; Rajan Modi, Development Sr. Director, Oracle; Aninda Sengupta, Vice President, Software Development, Oracle But why should I attend?? In this session join Oracle integration, process, and API engineers to learn anti-patterns that you should be aware of as you look at solving integration needs for your enterprise. Learn best practices for application integration based on real-life experience with multiple customers spanning various use cases across real-time/batch integrations and cloud/ground applications. See how the system warns of common pitfalls and anti-patterns that impact production deployments. In addition to the awesome sessions above, we will also have Hands-On Labs, Demos, and Theater Sessions. Between sessions, don’t forget to stop by The Innovation Studio at Oracle OpenWorld, located at the top of the escalators in Moscone North, where you'll be able to learn about all the ways that Oracle Cloud Platform brings together technology. To find out about our entire #OOW18 iPaaS session offering, check out the Focus On: App & Data Integration Document here.


Integration

Integration Podcast Series: #1 - The Critical Role Integration Plays in Digital Transformation

Authored by Madhu Nair, Principal Product Marketing Director, Oracle. Digital transformation is inevitable if organizations are looking to thrive in today’s economy. With technologies churning out new features based on cutting-edge research, like those based on Artificial Intelligence (AI), Machine Learning (ML) and Natural Language Processing (NLP), business models need to change to adopt and adapt to these new offerings. In the first podcast of our “Integration: Heart of the Digital Economy” podcast series, we discuss, among other questions: What is digital transformation? What is the role of integration in digital transformation? What roles do application and data integration play in this transformation? Businesses, small and big, are not able to convert every process into a risk-reducing act or a value-adding opportunity. Integration plays a central role in the digital transformation of a business. Businesses and technologies run on data. Businesses also run applications and processes. Integration helps supercharge these critical components of a business. For example, cloud platforms now offer tremendous value with their Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) offerings. Adopting and moving to the cloud helps companies take advantage of the best technologies to run their businesses on without having to worry about the costs of building and maintaining these sophisticated solutions. A good data integration solution should allow you to harness the power of data, work with big and small data sets easily and cost effectively, and make data available where it is needed. A good application integration solution should allow businesses to quickly and easily connect applications, orchestrate processes, and even monetize applications with the greatest efficiency and lowest risk. Piecemeal cobbling together of such critical elements of digital transformation would undermine the larger goal of efficiency that such a strategic initiative aims to achieve. Digital transformation positions businesses to better re-evaluate their existing business models, allowing organizations to focus on their core reason for existence. Learn more about Oracle’s Data Integration Solution here. Learn more about Oracle’s Application Integration Solution here. Oracle Cloud Café Podcast Channel: Be sure to check out the Oracle Cloud Café, where you can listen to conversations with Oracle Cloud customers, partners, thought leaders and experts to get the latest information about cloud transformation and what the cloud means for your business.


Integration

Securely Connect to REST APIs from Oracle Integration Cloud

The Oracle REST Adapter provides a comprehensive way to consume external RESTful APIs, including secure APIs. In this blog we provide an overview of the available methods for consuming protected APIs using the REST Adapter. Oracle Integration Cloud provides a reusable connection that can be used to specify the security policy for accessing protected APIs. Once configured, users can test, save and complete the connection and use it in integration flows just like any other connection. When a REST Adapter connection is updated with new security credentials, the change is automatically visible to all deployed integrations and there is no need to update or re-deploy the integration flows. The integration developer must ensure that the new security credentials have identical access and privileges to the APIs and resources being referenced within the integration flow. Any other change, especially in the coordinates of the external REST APIs such as the base URI or the URL to Swagger / RAML documents, may call for deactivation and reactivation of impacted flows, and in some cases re-editing of impacted adapter endpoints. The following security policies are supported in Oracle Integration Cloud:
Basic Authentication
HTTP Basic authentication is a simple authentication scheme built into the HTTP protocol. The client sends HTTP requests with an Authorization header that contains the word Basic followed by a space and the base64-encoded string username:password. In the REST Adapter, users should select the Basic Authentication security policy and provide the username and password. The REST Adapter ensures that the credentials are securely stored in a credentials store. During API invocation, the adapter will inject an HTTP header along with the request as follows: Authorization: Basic <base64-encoded-value-of-credentials> The username and password are not validated even if test connection is successful. Integration developers should validate the credentials before using them in this security policy.
API Key Based Authentication
In order to consume APIs protected using an API key, integration developers should use the API Key Based Authentication security policy. The REST Adapter provides an extensible interface for developers to declaratively define how the API key needs to be sent as part of the request. During API invocation, the adapter will inject the API key as specified in the API Key Usage along with the request. The API key is not validated even if test connection is successful. Integration developers should validate the API key before using it in this security policy. Please see our detailed blog API-Key Based Authentication: Quickly and Easily for more details on this security policy.
OAuth Client Credentials
The client application directly obtains access on its own, without the resource owner's intervention, using its Client Id and Client Secret. In the REST Adapter, users should select the OAuth Client Credentials security policy and provide the required information. Test connection will use the provided credentials to obtain an access token from the authorization server. This access token is securely cached internally and refreshed when required.
The REST Adapter will procure an access token using the values provided in the security policy as follows: curl -X POST -H 'Content-Type: [Auth Request Media Type]' -H 'Accept: application/json' -H 'Authorization: Basic {base64#[YOUR_CLIENT_ID]:[YOUR_CLIENT_SECRET]}' -d 'grant_type=client_credentials&scope=[YOUR_SCOPE]' '[YOUR_ACCESS_TOKEN_URI]' During API invocation, the adapter will inject the access token as an authorization header along with the request as follows: Authorization: Bearer <access_token_obtained_using_oauth_client_credentials> The access token is for a specific scope. The integration developer must ensure that the connection is used to access resources within the same scope. Caution: The OAuth2 specification deliberately leaves out the exact mechanism for client authentication. Access token attributes and the methods used to access protected resources are also beyond the scope of this specification. As a result, there are many implementations of OAuth2 that cannot be addressed using the standard policy. The Oracle REST Adapter provides a flexible Custom Two-Legged OAuth security policy that can be used with any flavor of OAuth Client Credentials configuration. Please see our blog entry for the details; we will explore this in the later sections.
OAuth Resource Owner Password Credentials
The resource owner password credentials can be used directly as an authorization grant to obtain an access token. Since the resource owner shares its credentials with the client, this policy is used when there is a high degree of trust between the resource owner and the client. In the REST Adapter, users can select the Resource Owner Password Credentials security policy and provide the required information. Test connection will use the provided credentials to obtain an access token from the authorization server. This access token is securely cached internally and refreshed when required. The REST Adapter will procure an access token using the values provided in the security policy as follows: curl -X POST -H "Authorization: Basic {base64#[YOUR_CLIENT_ID]:[YOUR_CLIENT_SECRET]}" -H "Content-Type: [Auth Request Media Type]" -d '{"grant_type": "password", "scope": "[YOUR_SCOPE]", "username": "[USER_NAME]", "password": "[PASSWORD]"}' "[YOUR_ACCESS_TOKEN_URI]" During API invocation, the adapter will inject the access token as an authorization header along with the request as follows: Authorization: Bearer <access_token_obtained_using_oauth_ropc> The access token is for a specific scope. The integration developer must ensure that the connection is used to access resources within the same scope. Caution: The OAuth2 specification deliberately leaves out the exact mechanism for client authentication. Access token attributes and the methods used to access protected resources are also beyond the scope of this specification. As a result, there are many implementations of OAuth2 that cannot be addressed using the standard policy. The Oracle REST Adapter provides a flexible Custom Two-Legged OAuth security policy that can be used with any flavor of OAuth Resource Owner Password Credentials configuration. Please see our blog entry for the details; we will explore this in the later sections.
OAuth Custom Two Legged Flow
The Custom Two-Legged security policy provides Oracle Integration Cloud the necessary flexibility to connect with a plurality of OAuth-protected services, including services protected using the OAuth Client Credentials and OAuth Resource Owner Password Credentials flows. In the REST Adapter, users should select the OAuth Custom Two Legged Flow security policy and provide the required information. Test connection will use the provided credentials to obtain an access token from the authorization server. This access token is securely cached internally and refreshed when required. Access Token Usage provides an extensible interface for developers to declaratively define how the access token needs to be sent as part of the request. During API invocation, the adapter will inject the access token as specified in the Access Token Usage along with the request. The access token is for a specific scope. The integration developer must ensure that the connection is used to access resources within the same scope. Please see our blog entry describing the OAuth Custom Two Legged policy in more detail.
OAuth Authorization Code Credential
The authorization code grant is an OAuth flow where the resource owner is required to provide consent before an access token can be granted to the client application. In the REST Adapter, users should select the OAuth Authorization Code Credential security policy and provide the required information. Once configured, the integration developer can click Provide Consent, which redirects the user to the authorization URL, where the resource owner should authenticate with the authorization server and provide consent to the client application. This concludes the OAuth flow successfully. The integration developer can test, save and complete the connection and use it in integration flows just like any other connection. Test connection will validate that the provide consent flow was successful and that an access token was obtained from the authorization server. This access token is securely cached internally and refreshed when required. During API invocation, the adapter will inject the access token as an authorization header along with the request as follows: Authorization: Bearer <access_token_obtained_using_code_authorization> The access token is for a specific scope. The integration developer must ensure that the connection is used to access resources within the same scope. Caution: The OAuth2 specification deliberately leaves out the exact mechanism for client authentication. Access token attributes and the methods used to access protected resources are also beyond the scope of this specification. As a result, there are many implementations of OAuth2 that cannot be addressed using the standard policy. The Oracle REST Adapter provides a flexible Custom Three-Legged OAuth security policy. We will explore this in the next section.
OAuth Custom Three Legged Flow
The OAuth Custom Three-Legged security policy provides Oracle Integration Cloud the necessary flexibility to connect with a plurality of OAuth2-protected services that include a code authorization flow. In the REST Adapter, users should select the OAuth Custom Three Legged Flow security policy and provide the required information. Once configured, the integration developer can click Provide Consent, which redirects the user to the authorization URL. The resource owner should authenticate with the authorization server and provide consent to the client application.
This concludes the OAuth flow successfully. The integration developer can test, save and complete the connection and use it in integration flows just like any other connection.

Test connection will validate that the provide consent flow was successful and that an access token was obtained from the authorization server. Test connection will also validate the provided refresh mechanism by refreshing the access token if specified. The refreshed access token will be securely cached internally and refreshed when required.

*Since the provide consent flow requires the resource owner's intervention, it is recommended that a refresh mechanism is specified so that the access tokens can be refreshed without the resource owner's intervention at runtime.

Access Token Usage provides an extensible interface for developers to declaratively define how the access token needs to be sent as part of the request. During API invocation, the adapter will inject the access token as specified in the Access Token Usage along with the request.

The access token is for a specific scope. The integration developer must ensure that the connection is used to access resources within the same scope. We will publish a detailed entry describing the OAuth Custom Three Legged policy.

OAuth 1.0 One Legged Authentication

OAuth 1.0a (one-legged) enables a client to make authenticated HTTP requests to gain access to protected resources by using its credentials. The method is designed to include two sets of credentials with each request, one to identify the client and another to identify the resource owner. Before a client can make authenticated requests on behalf of the resource owner, it must obtain a token authorized by the resource owner.

Test connection will only check that the required values are provided. At runtime these credentials will be used to generate a signed access token. Since authenticated tokens are meant for one-time use only, the generated tokens will not be cached. During API invocation, the adapter will inject the access token as an authorization header along with the request as follows:

Authorization: OAuth <generated_oauth1.0a_access_token>

The access token is for a specific scope. The integration developer must ensure that the connection is used to access resources within the same scope.

In today's connected world, where information is being shared via APIs with external stakeholders and within internal teams, security is a top concern. Most service providers offer secure access to their APIs using one of the mechanisms listed above. In this post, we have reviewed how the Oracle REST adapter can be used to securely consume these protected services. We will follow this up with detailed accounts of most of the security policies listed above. In particular, the Custom OAuth policies provide a flexible interface to work with a multitude of OAuth-protected services.
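For readers curious about what "generate a signed access token" involves, here is a rough sketch of one-legged OAuth 1.0a signing in JavaScript (assuming Node.js and its built-in crypto module). The consumer key/secret, token values and URL are placeholders, request query and body parameters are omitted from the signature for brevity, and this only illustrates the mechanism, not the adapter's internal implementation.

// Simplified OAuth 1.0a HMAC-SHA1 signing. Placeholder values only.
const crypto = require('crypto');

// RFC 5849 percent-encoding (stricter than encodeURIComponent alone).
function percentEncode(value) {
  return encodeURIComponent(value)
    .replace(/[!'()*]/g, c => '%' + c.charCodeAt(0).toString(16).toUpperCase());
}

function buildOAuth1Header(method, url, consumerKey, consumerSecret, token, tokenSecret) {
  const params = {
    oauth_consumer_key: consumerKey,
    oauth_token: token,
    oauth_signature_method: 'HMAC-SHA1',
    oauth_timestamp: Math.floor(Date.now() / 1000).toString(),
    oauth_nonce: crypto.randomBytes(16).toString('hex'),
    oauth_version: '1.0'
  };
  // Signature base string: METHOD & encoded URL & encoded, sorted parameter string.
  const paramString = Object.keys(params).sort()
    .map(k => `${percentEncode(k)}=${percentEncode(params[k])}`)
    .join('&');
  const baseString = [method.toUpperCase(), percentEncode(url), percentEncode(paramString)].join('&');
  const signingKey = `${percentEncode(consumerSecret)}&${percentEncode(tokenSecret)}`;
  params.oauth_signature = crypto.createHmac('sha1', signingKey).update(baseString).digest('base64');
  // The header is rebuilt for every request, which is why nothing is cached.
  return 'OAuth ' + Object.keys(params).sort()
    .map(k => `${percentEncode(k)}="${percentEncode(params[k])}"`)
    .join(', ');
}

console.log(buildOAuth1Header('GET', 'https://api.example.com/protected/resource',
  'CONSUMER_KEY', 'CONSUMER_SECRET', 'ACCESS_TOKEN', 'TOKEN_SECRET'));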


Integration

#OOW18 Executive Keynotes and Sessions You Won’t Want to Miss

With Oracle OpenWorld 2018 less than two weeks away, you are probably busy crafting an agenda to fit in all the sessions you want to see. We want to make sure your experience is tailored to perfection. In a couple days, we will share our full list of integration sessions and highlight a few special events just for Integration folks. In the meantime, let’s start our planning with a bang by introducing you to some of the executive keynotes and sessions we are most excited about: CLOUD PLATFORM & CLOUD INFRASTRUCTURE EXECUTIVE KEYNOTES AND SESSIONS Cloud Platform Strategy and Roadmap (PKN5769) – Amit Zavery Mon Oct 22, 9-9:45am | Yerba Buena Theater In this session, learn about the strategy and vision for Oracle’s comprehensive and autonomous PaaS solutions. See demonstrations of some of the new and autonomous capabilities built into Oracle Cloud Platform including a trust fabric and data science platform. Hear how Oracle’s application development, integration, systems management, and security solutions leverage artificial intelligence to drive cost savings and operational efficiency for hybrid and multi-cloud ecosystems. Oracle Cloud: Modernize and Innovate on Your Journey to the Cloud (GEN1229) – Steve Daheb Tue Oct 23, 12:30-1:15pm | Moscone West 2002 Companies today have three sometimes conflicting mandates: modernize, innovate, AND reduce costs. The right cloud platform can address all three, but migrating isn’t always as easy as it sounds because everyone’s needs are unique, and cookie-cutter approaches just don’t work. Oracle Cloud Platform makes it possible to develop your own unique path to the cloud however you choose—SaaS, PaaS, or IaaS. Learn how Oracle Autonomous Cloud Platform Services automatically repairs, secures, and drives itself, allowing you to reduce cost and risk while at the same time delivering greater insights and innovation for your organization. In this session learn from colleagues who found success building their own unique paths to the cloud. Autonomous Platform for Big Data and Data Science (PKN3898) – Greg Pavlik Tue Oct 23, 5:45-6:30pm | Yerba Buena Theater Data science is the key to exploiting all your data. In this general session learn Oracle’s strategy for data science: building, training, and deploying models to uncover the hidden value in your data. Topics covered include ingestion, management, and access to big data, the raw material for data science, and integration with autonomous PaaS services. The Next Big Things for Oracle’s Autonomous Cloud Platform (PKN5770) – Amit Zavery Wed Oct 24, 11:15-12pm | The Exchange @ Moscone South - The Arena Attend this session to learn about cutting-edge solutions that Oracle is developing for its autonomous cloud platform. With pervasive machine learning embedded into all Oracle PaaS offerings, see the most exciting capabilities Oracle is developing including speech-based analytics, trust fabric, automated application development (leveraging AR and VR), and digital assistants. Find out how Oracle is innovating to bring you transformational PaaS solutions that will enhance productivity, lower costs, and accelerate innovation across your enterprise.   You can start exploring App Integration and Data Integration sessions in the linked pages. We are also sharing #OOW18 updates on Twitter: App Integration and Data Integration. Make sure to follow us for all the most up-to-date information before, during, and after OpenWorld!


Timezone functionality in OIC Schedules

Timezone is a very powerful feature when dealing with scheduled integrations. Users can now create schedules in their desired time zones.

Step-by-step guide

The following steps show how to set a preferred time zone and create a schedule in that time zone.

1. Select Preferences by clicking on the user menu (note that this is the Preferences entry under the OIC user menu, not the Oracle Cloud My Services Preferences).

2. Select a Timezone and then click the Save button. Here, I selected "New York - Eastern Time (ET)" as per the snapshot below.

3. While creating a schedule for a Scheduled Integration, you can see that the "This schedule is effective:" section shows the From and Until dates in the selected time zone. There is also a new entry, "Time zone", which displays the selected time zone.

4. Next, when you go to the "Schedule and Future Runs" page after successfully creating a schedule, the following changes can be observed: the new field "Schedule Time Zone" displays the time zone in which the schedule was created, all runs are executed as per this time zone, and future run timestamps are shown in the time zone selected for the current session.

Note: Even if the user selects a different time zone in another browser session, this page ("Schedule Time Zone") will continue displaying the time zone in which the schedule was created. All future runs will execute as per this time zone.

Let's take an example. In the screenshots above, I created a schedule with New York - Eastern Time (ET). Now, in a new browser session, let's change the time zone from New York - Eastern Time (ET) to Los Angeles - Pacific Time (PT). "Schedule Time Zone" continues showing the time zone that was used while creating the schedule. All other dates and times display the current browser session time zone. In the screenshot below, the current browser session time zone is Los Angeles - Pacific Time (PT) and the schedule was created in New York - Eastern Time (ET). Future Runs displays the runs in the current session time zone - which is the exact time corresponding to the Schedule Time Zone.


Using a Library in OIC

Introduction

A library is a file, or a collection of multiple files bundled in a JAR, that contains Javascript functions. A library is used within an integration and is executed by a Javascript engine on the server as part of an integration flow. This document describes the following: the requirements that a Javascript function needs to meet to be used within an integration, and how to create a Javascript file or collection of Javascript files suitable for creating a Library.

1. Javascript function requirements

The following requirements must be met for a Javascript function to be registered and work correctly in OIC.

1.1 Function return values should be named

Consider this example:

function add ( param1, param2 ) {
  return param1 + param2;
}

Even though the above example is a perfectly valid Javascript function, it can't be registered as a library in OIC. Without a named return value, the library metadata editor is unable to identify the parameters returned by this function, so the return value could not be used in the mapper for mapping downstream activities in an integration. OIC requires you to change the above function and name the return parameter, as in this example:

function add ( param1, param2 ) {
  var retValue = param1 + param2;
  return retValue;
}

In this case the return parameter is named retValue. This change will let the user map the return parameter to a downstream activity.

1.2 Inner functions

If your Javascript function defines another function within it, the inner function will not be identified by the library metadata editor in OIC, so no metadata will be created for the inner function. When metadata is not created for a function, it can't be used in OIC. However, the inner function can be used within the outer function.

function parseDate(d) {
  function foo(d) {
    if (typeof d === 'string') {
      return Date.parse(d.replace(/-/g, '/'));
    }
    if (!(d instanceof Array)) {
      throw new Error("parseDate: parameter must be arrays of strings");
    }
    var ret = [], k;
    for (k = 0; k < d.length; k++) {
      ret[k] = foo(d[k]);
    }
    return ret;
  }
  var retVal = foo(d);
  return retVal;
}

In the above example, foo() is defined within the function parseDate(). The metadata UI editor therefore ignores foo() and you will be able to configure only the outer function parseDate(). However, foo() is used within parseDate(), which is perfectly valid.

1.3 Long running functions

Javascript function execution should not exceed 1500ms. If a function execution, including its dependencies, exceeds this time limit, the process is automatically killed by the server; log messages will indicate the flow was killed because it exceeded 1500ms.

1.4 Input parameter types

OIC currently supports String, Number and Boolean input and return value types. If your Javascript function uses a different object type, like an Array or a Date, the incoming parameter (which will be one of the supported types) will have to be converted to the type that the function expects. Here is an example of how an input of type Array should be handled.
1.4.1 Array Input Type

Consider the following array processor example:

function myArrayProcessor(myArray) {
  var status = 'FAILED';
  for (i = 0; i < myArray.length; i++) {
  }
  return status;
}

Because the supported parameter types don't include an Array type, this code will have to be changed as in the following example:

function myArrayProcessor(myArray) {
  var status = 'FAILED';
  var data = myArray.replace(/'/g, '"');
  myArray = JSON.parse(data);
  for (i = 0; i < myArray.length; i++) {
  }
  return status;
}

While configuring metadata for this function, mark the input type as String; when the function is used in an integration, the incoming string will be parsed and converted to an Array within the function.

2. Creating a library using a Javascript file or multiple files

2.1 Using a single Javascript file

Creating a library using a single Javascript file is straightforward. All dependencies are contained within the single file. If a function depends on another function, the dependent function should be present within the same file. When you register the library, it is enough to configure metadata only for the function that needs to be used within an integration. Other functions that are present only to satisfy dependencies need not be configured. Consider the following interdependent Javascript function example:

function funcOne(param1) {
  return param1;
}
function funcTwo(param2) {
  return param2;
}
function myFunc(param1, param2) {
  var ret = funcOne(param1) + funcTwo(param2);
  return ret;
}

In this example, funcOne() and funcTwo() are used by myFunc(). While configuring metadata for this library, it is enough to configure the myFunc() function so that it can be used within an integration.

2.2 Using multiple Javascript files

A library can be created based on multiple Javascript files. When multiple files are involved, you should bundle all files into a JAR file and register the JAR file as a library. As an example, consider an EncodeAndDecode library in which encoding and decoding are done using a Base64 encode and decode scheme. The library consists of two files: Base64.js, which contains the Base64 encode and decode logic, and Encrypt.js, which contains wrapper functions that depend on functions within Base64.js. The encode and decode wrapper functions are used within integrations. Both files are contained within a JAR file and the library is created with this JAR. To use the library, configure metadata for the encode and decode functions within Encrypt.js as shown in the image above.

3. Debugging Javascript in a library

Over time a Javascript library may grow in size and complexity. The usual way to debug a Javascript library is to test the Javascript in a browser and, once it works satisfactorily, roll the code into a library. This may not always work, because browsers are regularly updated with newer execution engines that support the latest Javascript version and, in general, browser engines are more liberal in ignoring many code errors; the Javascript engine within OIC is stricter. To debug Javascript code, use an instance of CXConsole() as in the sample code below.

function add ( param1, param2 ) {
  var console = new CXConsole();
  var retValue = param1 + param2;
  console.log("# retValue: ", retValue);
  console.warn("# retValue: ", retValue);
  return retValue;
}

The log messages written by the above code go into the server-diagnostic.log file.
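To make the multiple-file layout in section 2.2 a bit more concrete, here is a minimal sketch of what the two files could contain. The actual sample files are not shown in this post, so the function names and the small, dependency-free Base64 encoder below are assumptions for illustration only; only the encode path is sketched, and a decode wrapper would follow the same pattern.

// Base64.js - hypothetical helper bundled in the same JAR as Encrypt.js.
// A tiny, dependency-free encoder (plain ASCII input assumed) so the sketch
// runs in any Javascript engine.
function base64Encode(input) {
  var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=';
  var output = '';
  for (var i = 0; i < input.length; i += 3) {
    var b1 = input.charCodeAt(i);
    var b2 = i + 1 < input.length ? input.charCodeAt(i + 1) : NaN;
    var b3 = i + 2 < input.length ? input.charCodeAt(i + 2) : NaN;
    var e1 = b1 >> 2;
    var e2 = ((b1 & 3) << 4) | (isNaN(b2) ? 0 : b2 >> 4);
    var e3 = isNaN(b2) ? 64 : ((b2 & 15) << 2) | (isNaN(b3) ? 0 : b3 >> 6);
    var e4 = isNaN(b3) ? 64 : (b3 & 63);
    output += chars.charAt(e1) + chars.charAt(e2) + chars.charAt(e3) + chars.charAt(e4);
  }
  return output;
}

// Encrypt.js - the wrapper that is registered in the library metadata.
// Note the named return value, as required by section 1.1; only encode()
// needs metadata configured, base64Encode() is an internal dependency.
function encode(text) {
  var encoded = base64Encode(text);
  return encoded;
}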


Integration

Enabling the Future Today - Feature Flags in Oracle Integration

Enabling the Future Today Last Updated: Wednesday 20 November 2019 Within Oracle Integration we are moving to a model that allows us to trial new features without making them available to everyone.  Everyone runs the same codebase but feature flags control what is available to a specific instance.  Why would we do this? For multiple reasons: Gain feedback on new features before rolling them out to the whole user base. Test new features in "the wild" in a controlled manner. Be able to rollback new features that may have unforeseen problems. How It Works Each new feature is given a flag that is used to control its availability.  For instance the flag for the small footprint OIC agent was oic.adapters.connectivity-agent.light-weight-agent.  If this flag was enabled for a given OIC instance then they could download the lightweight connectivity agent.  Other OIC instances running the same code but with the flag turned off would not offer the new agent. Flags are controlled from a central system and can be updated in real time by Oracle development and operations.  This means that feature flags can be turned on very quickly, and also if a problem occurs they can be disabled. Feature Flag Lifecycle Feature flags have a lifecycle as illustrated below. The different stages are: Internal Only You may see a product manager demo features on an instance that are not currently available, if using a production pod these may only be available to internal users.  This is where we try things out internally before turning them on for any customers.  Once we are happy with the feature internally we are ready to share it with selected customers and move the feature to Feature Controlled.  Note that this change in stage does not require any code changes, it just alters our internal approval process to enable the feature. Feature Controlled Once a feature enters the feature controlled stage then a customer may request that the flag be enabled for one or more of their OIC instances.  If approved then those instances will have the flag enabled and the feature will become available within a few minutes of being enabled.  Again there are no code changes to the customer instance, just the change in the flag status from disabled to enabled in the central feature flag server. Feature Controlled General Availability Once we are happy with the stability of a feature we will enable it for all instances.  This again does not require a code change.  We leave the flag in place so that if a specific customer has a problem we can disable the feature just for them or roll it back.  This is a safety measure in case problems occur that were not caught by internal users or early adopters of the feature. General Availability Eventually the flag controlling the feature will be removed.  This has no impact on the end user, it just allows us to keep the code paths clean and remove unused code that has been made obsolete by the new feature.  End user will see no difference between this stage and the previous one.  So I mention it here only to explain how we keep our codebase clean. What Flags are Available? The following flags are currently available in the Feature Controlled stage.  We will be blogging about these features and as we do we will update the detailed explanation with a blog entry explaining the feature in detail.  As we add new features we will update this blog.  Check the first line of the blog to see when it was last updated. 
Feature Flag Name | Description | Detailed Explanation | Earliest Version
oic.cloudadapter.adapter.fa.oauthSupport | Connect to Oracle ERP Cloud, HCM Cloud or Engagement Cloud using OAuth | | 19.2.3
oic.cloudadapter.adapter.netsuite.AsyncSupport | Support for invoking Asynchronous methods in NetSuite | A Simple Guide to Asynchronous calls using Netsuite Adapter | 19.31520
oic.cloudadapter.adapter.rest.inboundoctetstream | Octet-Stream support for Rest Adapter Inbound | | 19.31410
oic.cloudadapter.adapter.rest.mvrp | REST multiple resource support in inbound | How to use Pick action in OIC Orchestration | 18.4.5
oic.cloudadapter.adapter.rest_opa.inbound | Enable inbound operation support in the Oracle Policy Automation adapter | | 19.31130
oic.cloudadapter.adapter.utilities.wsdlupload | Option to upload wsdl for inbound Utilities adapter | | 18.2.3
oic.cloudadapter.adapters.box | New Adapter for Box | | 19.31290
oic.cloudadapter.adapters.hcmsoapapis-ignore | Ignore unsupported HCM SOAP APIs for FA adapters | Delisting of unsupported HCM SOAP APIs | 19.31860
oic.cloudadapter.adapters.mqjms | New (JMS-based) Adapter for IBM MQ Series | | 19.30920
oic.cloudadapter.adapters.salesforce.elementnames-curation | Salesforce adapter element names curation | | 19.31521
oic.cloudadapter.adapters.soaadapter | New Adapter for Oracle SOA Suite/Service Bus (incl. support for SOA Cloud Service) | How SOA Suite Adapter Can Help Leverage your On-premises Investments | 19.30750
oic.cloudadapter.faadapter.custom-event | Consume Custom Events in FA Adapters | | 18.4.5
oic.ics.console.diagnostics.oracle-litmus-support | Litmus support for automated testing | How to use Asserter to create OIC Integration unit tests automatically and run them to catch regressions | 18.2.5
oic.ics.console.integration.generate_analysisjson | Generation of analysis.json document | | 18.4.1
oic.ics.console.integration.invoke.local.integration | Integration calling integration activity | How to invoke an Integration From another Integration in OIC without creating a connection | 18.3.1
oic.ics.console.library.update | Allow update of a registered library | Update library in continuous manner even after being consumed by integration | 19.30340
oic.ics.console.monitoring.tracking.improved-activity-stream | Improved activity stream UI | Using the next generation Activity Stream | 19.32180
oic.ics.console.notification-attachment | Allows user to add attachment to notification action | How to send email with attachments in OIC | 19.32001
oic.ics.feature.parallel-for-each | Support for Parallel For-Each | | 18.4.1
oic.ics.mapper.jetmap-enablement | New Jet UI based mapper | New JET based OIC Mapper | 18.2.3
oic.ics.stagefile.pgp.key.support | Support encryption/decryption in stage file read/write operation | | 19.31130
oic.insight.consoles.instanceprogress | Support different display for Insight instance details page | | 18.3.3
oic.insight.instances.export | Allow export of Insight instance list data in CSV format | | 19.1.3
oic.process.deploy.msid | Process msid | | 19.3.1
oic.process.form.externalui | Enhanced integration with VBCS or other external UI | | 18.4.5
oic.process.search.isEnabled | OIC Process Search | | 19.1.5
oic.processdt.generic.microprocess | Micro Process support in process for OIC | | 19.32000
oic.processdt.generic.mlm | Machine learning model in process for OIC | | 19.31130
oic.processrt.notify.comments | Allow sending notification with comments | | 19.3.1
oic.suite.settings.certificate | Common certificate and key management UI | | 19.31521
oic.suite.settings.dbspace | Common simplified data retention settings | | 19.31521
oic.vfs.native-encryption | Native encryption support in virtual file system | | 19.30390

How to Request a Feature Flag

To request that a feature flag be enabled for one of your environments, raise a Service Request via My Oracle Support. First make sure that the version of your environment matches the minimum required version for the flag to be available. Provide the following information in the SR:
Name of the feature flag that you want to be enabled.
URL of your OIC instance.
Content of the About box from your OIC instance, which should include the following: Service Instance Name, e.g. myinstance (obtain from the About box); Identity Domain, e.g. idcs-xxxxxxxxxxxxxxx (obtain from the About box).
Justification: explain why you want the feature enabled, providing a use case.
Your request will then be submitted to a product manager for approval. Once approved, the feature will be enabled on your requested environment.

Caveats

Features are in controlled availability because they may still have some defects in them. Be aware that using feature-flag-controlled items ahead of general availability means that you are being an early adopter of new features, and although we do our best to ensure a smooth ride you may experience some bumps. Occasionally we may have to make changes to the functionality enabled by the feature flag before it becomes generally available. Just something to be aware of. However, the feature flag enables us to release new features to customers whose use cases will benefit from them before we are ready to make a feature generally available. We think this is good for both you, the customer, and us, Oracle. A win-win situation!

Previous Flags Now Generally Available

The following flags are no longer used as the features they controlled are now available to all instances of Oracle Integration Cloud. Note that if you are using User Managed Oracle Integration Cloud then you may need to upgrade to the latest release to get these features.

Feature Flag Name | Description | Detailed Explanation | Earliest Version
oic.adapter.connectivity-agent.ha | HA support for connectivity agent | The Power of High Availability Connectivity Agent | 18.3.1
oic.adapters.hcm-cloud.atom-feed-support | Atom Feed support for HCM Adapter | Subscribing to Atom Feeds in a Scheduled Integration | 18.1.3
oic.cloudadapter.adapter.aq.rawobjectqueues | Enables RAW and Object type Queues for consuming and producing messages via AQ Adapter in OIC | | 18.3.3
oic.cloudadapter.adapter.database.batchInsertUpdate | Oracle Database Adapter - Operation On Table - Insert and Update | | 18.2.3
oic.cloudadapter.adapter.database.batchSelect | Oracle Database Adapter - Operation On Table - Select and Merge | | 18.3.3
oic.cloudadapter.adapter.db2database.batchInsertUpdateSelectMerge | DB2 Database Adapter - Operation On Table - Insert, Update, Merge and Select | | 18.3.3
oic.cloudadapter.adapter.dbaasdatabase.batchInsertUpdateSelectMerge | Oracle DBaaS Adapter - Operation On Table - Insert, Update, Merge and Select | | 18.3.3
oic.cloudadapter.adapter.ebs.enableOpenInterface | Support for Oracle E-Business Suite Open Interface Tables and Views in Oracle E-Business Suite adapter | | 18.4.1
oic.cloudadapter.adapter.erp.fileUpload | File Upload functionality in ERP Adapter | | 18.3.5
oic.cloudadapter.adapter.hcm.dataExtract | Data Extract for HCM Adapter | Configuring the Extract Bulk Data Option in an Integration | 18.2.3
oic.cloudadapter.adapter.hcm.fileUpload | File Upload functionality in HCM Adapter | | 18.3.5
oic.cloudadapter.adapter.mysqldatabase.batchInsertUpdateSelectMerge | MySql Database Adapter - Operation On Table - Insert, Update, Merge and Select | | 18.3.3
oic.cloudadapter.adapter.netsuite.customRecord | CRUD and search support of custom record in Netsuite | | 18.4.1
oic.cloudadapter.adapter.netsuite.tba | Token Based Authentication support in Netsuite Adapter | | 18.4.5
oic.cloudadapter.adapter.rest.awssignaturev4 | Amazon Signature Version 4 Policy in Rest Adapter | | 19.2.1
oic.cloudadapter.adapter.rest.oauth10aPolicy | OAuth for REST Adapter | | 18.1.5
oic.cloudadapter.adapter.rightnow.fileDownload | Rightnow (Service Cloud) Adapter file download feature | | 18.1.5
oic.cloudadapter.adapter.rightnow.mtom.upload | File upload as MTOM in Rightnow | | 19.2.1
oic.cloudadapter.adapter.rightnow.noSchema | Design optimization for removing design time artifact during runtime and generating a new Runtime (RT) WSDL to improve performance | | 18.1.5
oic.cloudadapter.adapter.rightnow.queryCSV.Validation | QueryCSV Validation is implicit query validation done by the system (ICS) to check for normal syntax issues if the user did not choose the Test Query button | | 18.1.5
oic.cloudadapter.adapter.soap.dynamicInvocation | SOAP Dynamic Invocation | | 19.1.3
oic.cloudadapter.adapter.soap.enableMtom | MTOM Support for SOAP Adapter | | 17.4.5
oic.cloudadapter.adapter.sqlserverdatabase.batchInsertUpdateSelectMerge | SQL Server Database Adapter - Operation On Table - Insert, Update, Merge and Select | | 18.3.3
oic.cloudadapter.adapter.stagewrite.schema.generation | Support JSON-ZIP-XML in stage file operations | | 19.2.3
oic.cloudadapter.adapters.aarpa | Automation Anywhere Adapter | | 19.2.3
oic.cloudadapter.adapters.atpdatabase | Adapter for ATP CS | OIC-integration-with-OracleATP | 18.4.5
oic.cloudadapter.adapters.dbaasdatabase | Oracle DBaaS Adapter | | 18.2.3
oic.cloudadapter.adapters.oraclehcmtbe | Taleo Business Edition (TBE) Adapter | | 18.2.3
oic.cloudadapter.adapters.oraclemonetization | Oracle Monetization Cloud adapter | | 18.4.5
oic.cloudadapter.adapters.otac | Oracle Talent Acquisition Cloud adapter | | 18.4.5
oic.cloudadapter.adapters.rest_opa | Oracle Policy Automation Adapter | | 18.2.3
oic.cloudadapter.adapters.uipathrpa | UI Path RPA Adapter | | 18.4.3
oic.cloudadapter.cloudsdk.zipcircularref | Processing circular references in zip schema file | | 19.2.1
oic.cloudadapter.ftp.multilevelauth | FTP Multi Level Authentication | | 19.2.3
oic.cloudadapter.sdk.enable.default.tls | TLS version handling | | 18.4.5
oic.common.clone_service | Allow cloning of an existing ICS or OIC instance | Docs for cloning an ICS instance into a new OIC instance; blog How to Migrate from ICS to OIC | 18.2.3
oic.common.clone_service.process | Enables import/export for process artifacts | | 19.30010
oic.common.clone_service.sync_flow | Clone service for synchronous flow | | 19.1.5
oic.ics.console.diagnostics.download-options | Download options on ICS UI | | 19.1.5
oic.ics.console.integration.inline-menu | Allow user to add actions/trigger/invoke inline from canvas instead of drag and drop | | 18.3.1
oic.ics.console.integration.layout | View integration as pseudo style layout | See How Easily You Can Switch Your Integration Views | 18.3.1
oic.ics.console.integration.nested-try-scopes | Allow user to create nested scopes | A simple guide to use nested scope in orchestration | 18.2.5
oic.ics.console.integration.throw-action | Allow users to throw error in integration | Working with Throw New Fault Activity | 18.2.5
oic.ics.console.monitoring.dashboard.use-tracking-api-for-metrics | Up-taking new tracking APIs for dashboard metrics | | 19.3.1
oic.ics.console.monitoring.tracking.filter.custom-date-range | Filter by custom date range in Monitoring | Custom time range filter for monitoring pages in OIC | 19.2.1
oic.ics.console.schedule.parameter-override-support | Allows user to override the schedule parameters | Overriding Schedule Parameters | 18.2.3
oic.ics.console.schedule.use-scheduleparam-as-tracking-var | Allow schedule parameter in business identifier dialog and use it as tracking variable | Use schedule parameters as tracking variables | 19.3.1
oic.ics.feature.new-version-check | Control whether to use the new cversion based compatibility check or the old ICS version based compatibility check | | 19.3.1
oic.ics.mapper.encode-decode-on-files | Base64 Encode/Decode for Files | | 18.1.3
oic.ics.monitoring.historicalMetrics.migration | Historical metrics migration for OIC | | 19.2.3
oic.ics.scheduler.filea-async-workers | File-A Scheduled Flows using Async Workers | | 18.4.3
oic.ics.stagefile.reference.processing | Allow users to configure file reference in stagefile operations | | 18.3.3
oic.insight.jetui.production | Jet UI for Insight | | 18.3.1
oic.intg.uiapi.wait-action-in-seconds | Allow user to define wait action in seconds | | 19.30750
oic.process.dt.customer-usage-email-notification | Enables process to use notification | | 19.3.1
oic.process.dt.metrics.cache | PCS projects metric cache | | 19.1.3
oic.trigger.error.instance | Creating failed instances in trigger | | 19.1.5


How to use Asserter to create OIC Integration unit tests automatically and run them to catch regressions

In this blog, I'd like to show you how easy it is to use Oracle Asserter, a new feature added to Oracle Integration Cloud for creating unit tests automatically with a few clicks and running those tests to catch regressions. Asserter supports the following use cases: Enable Integration Cloud users to create unit tests automatically and play them back to catch regressions when they modify their integrations (typically when they enhance an already created integration before taking it to production). Enable Integration Cloud QA to catch product regressions as part of a new release of Integration Cloud. Send Oracle a recorded instance so that Oracle can play back the instance to reproduce an issue or a bug. This is difficult without Asserter because all the dependent endpoints and third party adapters might not be available in-house to reproduce the issue. With Asserter, the endpoints are simulated and hence not needed to reproduce the issue.

Enabling Asserter

Let's assume that you have built an integration which runs as per your requirements and you have completed all your manual testing. Now you are ready to go to production. At this point, you might want to create an Asserter test and check it into your source repository, so that when you want to change that integration later, you can rely on the Asserter test to catch regressions. A regression in this case is an assertion failing because the response you're sending to the client has changed, for example due to a bug that was introduced in a mapping. Enable Asserter with the steps below: A feature flag has to be enabled in OIC to enable Oracle Asserter. To turn on the feature flag, open a Service Request with Oracle support. Once the feature flag is enabled, log in as a developer. From the list of integrations displayed in the integrations page, click the inline menu for the integration and click Oracle Asserter -> Enable Asserter Recording. You can also enable Asserter as part of the activation.

Creating a test using Asserter

After Asserter is enabled, you can create a test for a given integration using the steps below: Run your integration once. That's it. Your Asserter test (also called a recording) is now created. To check the recording, go to Oracle Asserter -> Recordings and you can see the recording displayed. The last one created will be displayed first. You can create up to 5 recordings for a specific integration. Note that a given integration can take multiple paths or branches depending on certain values in your input payload, so you might want to create multiple recordings by sending different input values. You can identify each recording using the Primary Identifier column.

Exporting and importing a test

After a recording is created, you can export and import a test for a given integration using the steps below: Log in as a developer. From the list of integrations displayed in the integrations page, click the inline menu for the integration and click Export. Check the box that says Include Asserter Recordings. To import a recording, click the button at the top right that says Import. Select the integration archive file (.iar) and check the box that says Include Asserter Recordings.

Playing back a recording

After a recording is created, you can play back a recording for a given integration using the steps below: Go to Oracle Asserter -> Recordings and identify the recording from the list that you want to play back. Click the playback button.
You will see the following message displayed in the banner: "Successfully invoked playback for recording RecordName_7. Please refer Track Instances page to view the Asserter instance details." At this point, the recording is played back asynchronously; follow the next section to check the status of the test run.

Checking the test status

After a given Asserter recording is played back, you can check the test status using the steps below: Go to Monitoring -> Tracking. Identify the Asserter instance from the list of runs. To differentiate the Asserter run, you can see Asserter Instance tagged at the top, as displayed in the image. You will also see the test status displayed at the top. In order to see the details of the assertion, you can click Oracle Asserter Result from the top right inline menu. This will display the golden input that was stored earlier as part of the recording, which is compared against the actual response. If there is a match the test will pass, otherwise the test will fail. Note that this is an XML comparison and not a string comparison, so prefix differences are ignored.

Using REST API to automate Asserter

Asserter supports a REST API so that you can automate some of the Asserter operations. The most commonly used REST operation will be to play back an Asserter recording. For detailed information, you can check the Asserter user guide. Some of the operations supported via REST are: Activate an integration with recording mode. Enable/disable Oracle Asserter recording mode. Import (with and without recordings, both integration and package). Export (with and without recordings). Play back a recorded instance. Update a recording. Delete a recording. Get Asserter recordings. Submit recordings.
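As a rough sketch of how the playback operation could be scripted (for example from a CI job), the snippet below issues an authenticated REST call with Node.js 18+. The resource path, recording identifier and credentials are placeholders of my own, not the documented API surface; take the actual endpoint and payload from the Asserter user guide referenced above rather than from this sketch.

// Illustrative only: invoke Asserter playback over REST from a script.
// OIC_HOST, the resource path, RECORDING_ID and the credentials are all
// placeholders; consult the Asserter user guide for the real endpoint.
const OIC_HOST = 'https://myinstance.example.com';
const PLAYBACK_PATH = '/ic/api/integration/v1/asserterRecordings/RECORDING_ID/playback'; // hypothetical path
const USER = process.env.OIC_USER;
const PASSWORD = process.env.OIC_PASSWORD;

async function playBackRecording() {
  const basic = Buffer.from(`${USER}:${PASSWORD}`).toString('base64');
  const response = await fetch(`${OIC_HOST}${PLAYBACK_PATH}`, {
    method: 'POST',
    headers: { 'Authorization': `Basic ${basic}`, 'Accept': 'application/json' }
  });
  if (!response.ok) {
    throw new Error(`Playback request failed with status ${response.status}`);
  }
  // Playback runs asynchronously; the result is then checked on the
  // Monitoring -> Tracking page as described above.
  console.log('Playback invoked, check Tracking for the Asserter instance.');
}

playBackRecording().catch(console.error);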


How to enable and use Tracing in less than 5 min

In this short blog, I'd like to show you how easy it is to enable tracing in OIC Integration and start tracing your integration flows. When tracing is enabled, OIC Integration prints detailed information before and after each action that is executed (optionally including the payload if needed). Care should therefore be taken to make sure that it is enabled only for debugging purposes and turned off before going to production.

Global Tracing

Let's assume that you have a requirement to enable or disable tracing for every integration you have created. You can use global tracing to accomplish this. Enable global tracing with the steps below: Log in as an administrator. Click Settings on the left side. Click Trace on the left side. Select Global Tracing On and click Save at the top right. Optionally you can select Include Payload, which will additionally write the payload.

Integration Level Tracing

If your requirement is to enable tracing for one or more integrations and disable tracing for the rest, you can use integration level tracing. Enable integration tracing with the steps below: Log in as an administrator. Click Settings on the left side. Click Trace on the left side. Select Integration Level and click Save at the top right. After the above step, tracing can be enabled for a specific integration in two ways: when an integration is activated, select Tracing; for an already activated integration, use the inline menu Tracing as shown in the image.

Checking the info added by Tracing

After tracing is enabled (global or integration level), the information it logs can be used to debug your integration. Check the tracing information using the steps below: Run your integration once. Go to Monitoring -> Tracking and select the instance. Once the instance is opened (showing the run in green/red), select the inline menu at the top right and select Activity Stream.


Integration

Oracle Named a Leader in 2018 Gartner Magic Quadrant for Data Integration Tools

Oracle has been named a Leader in Gartner’s 2018 “Magic Quadrant for Data Integration Tools” report based on its ability to execute and completeness of vision. Oracle believes that this recognition is a testament to Oracle’s continued leadership and focus on its data integration solutions. The Magic Quadrant positions vendors within a particular quadrant based on their ability to execute and completeness of vision. According to Gartner’s research methodologies, “A Magic Quadrant provides a graphical competitive positioning of four types of technology providers, in markets where growth is high and provider differentiation is distinct: Leaders execute well against their current vision and are well positioned for tomorrow. Visionaries understand where the market is going or have a vision for changing market rules, but do not yet execute well. Niche Players focus successfully on a small segment, or are unfocused and do not out-innovate or outperform others. Challengers execute well today or may dominate a large segment, but do not demonstrate an understanding of market direction.” Gartner shares that, “the data integration tools market is composed of tools for rationalizing, reconciling, semantically interpreting and restructuring data between diverse architectural approaches, specifically to support data and analytics leaders in transforming data access and delivery in the enterprise.” The report adds “This integration takes place in the enterprise and beyond the enterprise — across partners and third-party data sources and use cases — to meet the data consumption requirements of all applications and business processes.” Download the full 2018 Gartner “Magic Quadrant for Data Integration Tools” here. Oracle recently announced autonomous capabilities across its entire Oracle Cloud Platform portfolio, including application and data integration. Autonomous capabilities include self-defining integrations that help customers rapidly automate business processes across different SaaS and on-premises applications, as well as self-defining data flows with automated data lake and data prep pipeline creation for ingesting data (streaming and batch). A Few Reasons Why Oracle Data Integration Platform Cloud is Exciting Oracle Data Integration Platform Cloud accelerates business transformation by modernizing technology platforms and helping companies adopt the cloud through a combination of machine learning, an open and unified data platform, prebuilt data and governance solutions and autonomous features. Here are a few key features: Unified data migration, transformation, governance and stream analytics – Oracle Data Integration Platform Cloud merges data replication, data transformation, data governance, and real-time streaming analytics into a single unified integration solution to shrink the time to complete end-to-end business data lifecycles. Autonomous – Oracle Data Integration Platform Cloud is self-driving, self-securing, and self-repairing, providing recommendations and data insights, removing risks through machine learning assisted data governance, and automating platform upkeep by predicting and correcting for downtimes and data drift. Hybrid Integration – Oracle Data Integration Platform Cloud enables data access across on-premises, Oracle Cloud and 3rd party cloud solutions for businesses to have ubiquitous and real-time data access.
Integrated Data Lake and Data Warehouse Solutions – Oracle Data Integration Platform Cloud has solution based “elevated” tasks that automate data lake and data warehouse creation and population to modernize customer analytics and decision-making platforms. Discover DIPC for yourself by taking advantage of this limited time offer to start for free with Oracle Data Integration Platform Cloud. Check here to learn more about Oracle Data Integration Platform Cloud. Gartner Magic Quadrant for Data Integration Tools, Mark A. Beyer, Eric Thoo, Ehtisham Zaidi, 19 July 2018. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Oracle Integration Day is Coming to a City near You

Are you able to innovate quickly in the new digital world? Are you looking for ways to integrate systems and data faster using a modern cloud integration platform? Is your organization able to achieve differentiation and disruption? Is your Data Integration architecture allowing you to meet your uptime, replication and analytics/reporting needs? Join Oracle product managers and application/data integration experts to hear about best practices for the design and development of application integrations, APIs, and data pipelines with Oracle’s Autonomous Integration Platform. Hear real-world stories about how Oracle customers are able to adopt new digital business models and accelerate innovation through integration of their cloud, SaaS, on-premises applications and databases, and Big Data systems. Learn about Oracle’s support for emerging trends such as Blockchain, Visual Application Development, and Self-Service Integration to deliver competitive advantage. Tampa Integration Day   Oracle VP of Product Marketing, Vikas Anand, presenting his keynote at Integration Day at Oracle HQ in Redwood Shores, CA With interactive sessions, deep-dive demos and hands-on labs, the Oracle Integration Day will help you to: Understand Oracle's industry leading use of Machine Learning/AI in its Autonomous Integration Platform and how it can significantly increase speed and improve delivery of IT projects Quickly create integrations using Oracle’s simple but powerful Integration Platform as a Service (iPaaS) Secure, manage, govern and grow your APIs using Oracle API Platform Cloud Service Understand how to leverage and integrate with Oracle’s new Blockchain Cloud Service for building new value chains and partner networks Understand how Oracle’s Data Integration Platform Cloud (DIPC) can help derive business value from enterprise data; getting data to the right place at the right time reliably and ensuring high availability Integration Day begins on August 8 in Tampa. Register now to reserve your spot! Click the links to learn more about your local Integration Day   September 19, 2018 – New York City September 26, 2018 – Toronto October 3, 2018 – Boston December 5, 2018 – Chicago January 23, 2019 – Atlanta January 30 , 2019 –  Dallas February 6, 2019 – Washington DC February 20, 2019 – Santa Clara  


Integration

New Whitepaper: EU GDPR as a Catalyst for Effective Data Governance and Monetizing Data Assets

The European Union (EU) General Data Protection Regulation (GDPR) was adopted on the 27th of April 2016 and comes into force on the 25th of May 2018. Although many of the principles of GDPR have been present in country-specific legislation for some time, there are a number of new requirements which impact any organization operating within the EU. As organizations implement changes to processes, organization and technology as part of their GDPR compliance, they should consider how a broader Data Governance strategy can leverage their regulatory investment to offer opportunities to drive business value. This paper reviews some of the Data Governance challenges associated with GDPR and considers how investment in GDPR Data Governance can be used for broader business benefit. It also reviews the part that Oracle’s data governance technologies can play in helping organizations address GDPR. The following Oracle products are discussed in this paper: Oracle Enterprise Metadata Manager (OEMM)–metadata harvesting and data lineage Oracle Enterprise Data Quality (EDQ)–for operational data policies and data cleansing Oracle Data Integration Platform Cloud–Governance Edition (DIPC-GE)–for data movement, cloud-based data cleansing and subscription-based data governance Read the full whitepaper here.


Integration

Migration from Oracle BPM to Oracle Autonomous Integration Cloud — Streamlining Process Automation in the Cloud

By Andre Boaventura, Senior Manager of Product Management In my last blog post Migrating your Oracle BPM assets into Oracle Process Cloud Service (PCS), I have described and demonstrated how to migrate modeling assets (essentially BPMN models) by leveraging the conversion framework that you can find at my repository at GitHub. As stated on the blog post above, the major use case was to demonstrate how customers using Oracle BPM Composer for modeling purposes *ONLY* could streamline their migration process from Oracle BPM into PCS. Also, as declared earlier, I have seen many customers that are using BPM for documentation purposes only, but at the other end, and as you might be likely asking yourself, there are many others that have already developed many projects and processes on top of the Oracle BPM not only for documentation purposes, but indeed for process automation, and obviously want to move them to the respective cloud version of Oracle BPM (aka PCS), given all the very known benefits of cloud adoption such as lower costs, greater agility, improved responsiveness and better resource utilization among other technical and business drivers. Thus, with Process Automation in mind, asset migration from Oracle BPM to PCS becomes an even more serious matter, but the good news is that this is really possible. As the major goal of my posts is to share experiences that I have seen with customers I have worked in the field, the following technique you will find below obviously could not avoid the rule. This exercise came as a challenge from a specific customer that was running in production all their processes on Oracle BPM for process automation purposes, so this means it also can be applied to many others since there is an increasing demand for this sort of migration given the high number of customers relying on Oracle BPM for process automation, and which at the same time, want to bring their processes to the Cloud as well. Look at the video below for a quick introduction about the Oracle BPM Path to the Cloud. After getting started with this introduction, I guess you might be thinking yourself about the new suite introduced a couple of months ago called Oracle Integration Cloud(aka OIC) that is a combination of both PCS, ICS and other known Oracle cloud services for Integration such as Integration Insight, Visual Builder Cloud Services and others. So the question that comes to mind is: Should this migration framework work for OIC as well? The good news is that the answer is also *YES* for OIC. Although there will be some caveats that I will be explaining along this article, since OIC only supports the new Web Forms technology(not the old one based in Frevvo) that was introduced in PCS, then the current forms technologies available in BPM(ADF or Basic Web Forms) won’t work within OIC, but even so, through my migration scripts you should be able to import other artifacts such as processes, indicators, integrations, etc. Again, throughout this post you will understand all alternatives currently available. However, before digging deeper into the details of this migration technique, let me give you a brief introduction about the new Oracle Cloud Integration Platform called OIC. Oracle Integration Cloud (OIC) brings together all the capabilities of Application Integration, Process Automation, Visual Application Building and Integration Analytics into a single unified cloud service. 
It now brings real-time and batch based integration, structured and unstructured processes, case management, stream analytics and integration insight allowing customers to service all their end to end integration needs in one cohesive platform so that all users can now build and deliver capabilities needed to realize true Digital Business Transformations. Billing is simplified with a single metric of OCPUs per hour (no more connection or user counting!!). Natively discover and invoke integration flows from processes in OIC and vice versa. Customers can configure and schedule patching according to their own schedule, and they can scale the Database to accommodate their business’s retention policy. The Integration Cloud runtime can be scaled out to meet the most demanding customer volumes. Also, customers now can choose to use the following alternatives for on-premises application connectivity: Connectivity Agent, API Platform, VPN or Oracle OCI FastConnect. OIC Key Features SaaS and On-Premises Integration: Quickly connect to thousands of SaaS or on-premises Applications seamlessly through 50+ native app adapters or technology adapters. Support for Service Orchestration and rich integration patterns for real-time and batch processing.   OIC Integration(formerly ICS) Process Automation: Bring agility to your business with an easy, visual, low-code platform that simplifies day to day tasks by getting employees, customers, and partners the services they need to work anywhere, anytime, and on any device. Support for Dynamic Case Management   OIC Process (formerly PCS) Visual Application Design: Rapidly create and host engaging business applications with a visual development environment right from the comfort of your browser. Integration Insight: The Service gives you the information you need — out of the box, with powerful dashboards that require no coding, configuration, or modification. Get up and running fast with a solution that delivers deep insight into your business.   Integration Insight(Also available in both SOA on-prem as an option & SOA Cloud Service SKU as Integration Analytics) Stream Analytics: Stream processing for anomaly detection, reacting to Fast Data. Super-scalable with Apache Spark and Kafka. Challenges Now that you already know that migration from Oracle BPM to either PCS or OIC is achievable by leveraging the migration framework I will be explaining along this article, and also and also had the opportunity to have a brief introduction about the new integration platform in the Oracle cloud called OIC, now let’s look at some of the challenges that we will need to overcome in order to make this migration happen. Getting back again to my earlier post about BPM Migration, where I have highlighted some of the differences among what is supported in Oracle BPM and PCS regarding the BPMN notation, you will notice that with process automation into the picture, the scope is larger since we should take into account the way BPMN notation behaves while running the process itself. Additionally, we have other extensions such as Human Tasks, Script tasks, Web forms, Data Objects, Business Indicators, etc, so the bottom of the issue is still a long way off, so let’s go for parts for seeing the whole. The Scope of this Migration As mentioned earlier, the goal of this blog is to provide you with experiences and real cases that I have seen in real customer in the field. 
As such, for obvious reasons, I can't share customer process information within this post, but I can use generic examples that cover all the findings I discovered along the customer migration process, in order to fully showcase how to do a process migration from Oracle BPM to PCS in a smooth way. For the sake of understanding, let us use the following Oracle BPM project (Oracle Travel Approval) as the sample to demonstrate migration in action.

Oracle BPM Travel Approval Sample Process

Note 1: This blog post is leveraging an Oracle BPM project sample with forms implemented in Web Forms (aka Frevvo forms). If your Oracle BPM projects have forms implemented with Oracle ADF technology, you should still be able to properly import your Oracle BPM projects into Oracle PCS; however, you will be forced to rewrite all your forms from scratch to get your applications fully migrated, up and running on the Cloud, since ADF is not supported in either PCS or OIC.

Note 2: The recommended forms technology for PCS (which is now part of Oracle Integration Cloud) is the new Web Forms technology. The previous one, a Frevvo heritage called Basic Forms in PCS, is no longer supported in OIC, so please be advised to get your forms migrated to Web Forms in order to be able to upgrade to OIC, the latest Oracle Cloud integration platform, as described in the introduction section earlier in this post.

In the process sample above, we can find some examples of activities or extensions that must be carefully handled by the conversion framework so that the BPM project can be imported and deployed on PCS in a smooth way. They are as follows: Human Tasks, Script Tasks, Project Data Objects, Business Indicators, Initiator Tasks.

1. Human Tasks

The following are some of the attributes that are currently available in Oracle BPM (you may find more within your Oracle BPM project), but not supported in Oracle PCS: hideCreator, excludeSaturdayAndSunday. So, if you don't remove them from your task definition files, you will get the following errors while trying to deploy your application project on PCS:

… The errors are [2]: Element 'hideCreator' not expected. at line -1 at column -1. exception.fix: Make sure that the task definition conforms to the task definition XML schema definition. . — Please contact the administrator for more information…

… The errors are [2]: Element 'excludeSaturdayAndSunday' not expected. at line -1 at column -1. exception.fix: Make sure that the task definition conforms to the task definition XML schema definition. . — Please contact the administrator for more information.

2. Script Tasks

If you are creating Oracle BPM applications, it is very likely that you have already played with script tasks. In general, script tasks are a very powerful component in Oracle BPM that enables users to pass data objects through data associations. Although a user can use script tasks for implementing Groovy scripts, most implementations out there only use script tasks for data association. Also, as you probably know, and as indicated in my previous post, script tasks are not available in PCS. So, how can an Oracle BPM process be imported into PCS when this activity is not available in the component palette, resulting in a compilation error if you try to do so? Don't worry, I have done the job for you, so you don't have to get your hands dirty fixing this sort of thing.
The magic behind it is to convert every script task into something supported in PCS, so that the process application can be migrated smoothly. The approach I figured out to solve this problem was to convert all script tasks across all bpmn files into service task activities, and then make those service tasks call a service that allows us to keep the script task behaving as it did in Oracle BPM. A simple workaround was to call an internal PCS REST API (in my conversion script I am using getProcessDefinitions, but it could be any other one) only to be able to run the same data transformation as used in Oracle BPM. As you might have realized, this is only valid for PCS. As for OIC, the signature is slightly different (process-definitions). Please check out the REST APIs for Oracle Autonomous Integration Cloud Service here. Also, there is an alternative while running the conversion script, which is to disable all service task activities converted from scriptTask activities by passing the parameter disableScriptTasks to the conversion framework. In that case, after importing the application project into OIC/PCS, all these service tasks need to be manually set up to point to a valid, existing service URL in order to get the process properly validated and deployed. If you want to learn more about how it was done under the covers, please check out the following functions in my Linux bash script:

function addPCSRESTConnectionToProjectDefinition()
function addPCSRESTConnectionToScriptTask()
function disableScriptTasks()

Migration of script tasks was only tested against those with no implementations. If you are leveraging script tasks to run Groovy scripts, then these steps need to be converted manually, since you can't run Groovy scripts in OIC/PCS.

3. Data Objects

Unlike Oracle BPM, Oracle OIC/PCS no longer has the concept of Project Data Objects vs. Process Data Objects. There are only Data Objects, which work in the scope of a process, and as opposed to Project Data Objects, they can't be shared across different processes.

BPM Process and Project Data Objects

OIC/PCS Data Objects

With that said, we first need to find out where the project data objects are stored within a BPM project, and then move them to the right place within the process so they are properly validated by the Oracle OIC/PCS process validation engine. Having a closer look at the project files, you will notice that Project Data Objects are stored within projectInfo.xml (under the SOA folder), which sits under the root folder. In my example that root folder is called BpmProject, since the content is always stored under a folder with the process project name. See the picture below for more details:

This is a sample Oracle BPM project export (.exp) file structure

By drilling down into projectInfo.xml, you will find the following content:

Project Data Objects section

Similarly, if you go to the process file (in my case Process.bpmn), you will find the following section:

Process.bpmn file dataObjects section

So, in order to make it OIC/PCS compatible, all project data objects must be within a process file (with the .bpmn extension). In my case, with a simple process called Process.bpmn, I will add all Project Data Objects from projectInfo.xml into this file, so that at the end of this work, Process.bpmn looks like the following:

Final Process.bpmn file after migration. It includes all Project Data Objects from the original Oracle BPM project, but now as process data objects.
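For illustration only, here is a minimal Python sketch of what this "move and rename" step could look like. This is not the author's bash framework; the file paths, element names, and namespace prefixes (ns3, ns7, bpmn, bpmnext) are assumptions based on the description above, so adapt them to your actual export before using anything like this.

```python
# Hypothetical sketch: move project data objects from projectInfo.xml into a
# target .bpmn file, renaming namespace prefixes along the way.
# Assumes expanded (non self-closing) dataObject elements; illustrative only.
import re

def extract_project_data_objects(project_info_path):
    """Return the <ns3:dataObject>...</ns3:dataObject> elements from projectInfo.xml."""
    with open(project_info_path, encoding="utf-8") as f:
        content = f.read()
    return re.findall(r"<ns3:dataObject\b.*?</ns3:dataObject>", content, re.DOTALL)

def to_process_data_objects(elements):
    """Rename prefixes so the elements validate as process data objects."""
    converted = []
    for el in elements:
        el = el.replace("ns3:", "bpmn:")      # core BPMN namespace prefix (assumed)
        el = el.replace("ns7:", "bpmnext:")   # BPM extension namespace prefix (assumed)
        converted.append(el)
    return converted

def inject_into_process(bpmn_path, data_objects):
    """Append the converted data objects just before the closing process tag."""
    with open(bpmn_path, encoding="utf-8") as f:
        bpmn = f.read()
    block = "\n".join(data_objects)
    bpmn = bpmn.replace("</bpmn:process>", block + "\n</bpmn:process>", 1)
    with open(bpmn_path, "w", encoding="utf-8") as f:
        f.write(bpmn)

if __name__ == "__main__":
    objs = extract_project_data_objects("BpmProject/SOA/projectInfo.xml")
    inject_into_process("BpmProject/SOA/processes/Process.bpmn",
                        to_process_data_objects(objs))
```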
Please note that I had to find all references to ns3 in the projectInfo.xml file and replace them with bpmn before adding them to the target process file. Similarly, I had to replace all ns7 references with bpmnext. Also, you must figure out in which bpmn files you should add those former project data objects, given the way they are referenced across processes.

4. Business Indicators

One of the most powerful features available in Oracle BPM and Oracle OIC/PCS is the concept of Process Analytics. Process Analytics enables you to obtain performance and workload metrics for the processes in your project. You can use these metrics to make decisions about your process. Process analysts can monitor standard pre-defined metrics and process-specific user-defined metrics. Process developers can define process-specific metrics using Business Indicators. In Oracle BPM, Business Indicators can be bound to project data objects. Once bound, the BPMN service engine publishes the business indicator values to the process analytics stores when it runs the BPMN processes. However, as mentioned above, there is no such concept as Project Data Objects in OIC/PCS. Please see the pictures below to visualize the differences between them:

Oracle OIC/PCS Indicators

Oracle BPM Business Indicators

As you may have noticed, there is a small difference between BPM and OIC/PCS regarding business indicators, but as said earlier, my conversion script does the dirty job for you by converting the Project Data Objects only supported by Oracle BPM into process data objects, which the business indicators are then bound to in OIC/PCS.

5. Initiator Tasks

The initiator task is one among the many human task interactive patterns in Oracle BPM. It is used to trigger a BPM process flow from the defined human task user interface. When you use an initiator task to initiate a BPM process, the process always starts with a none start event. The none start event does not trigger the process; the human task initiator initiates it. The role associated with the swim lane defines the process participant, and that participant/assignee is the one the initiator task gets assigned to. If your Oracle BPM processes have initiator tasks, these will be converted to approval tasks in Oracle OIC/PCS. However, to keep the original behavior of an initiator task as used in Oracle BPM in either PCS or OIC (that is, the capability of triggering a BPM process flow from the defined human task user interface, so that a BPM process instance can be created through the BPM Workspace or any other UI), it should be changed either to a Form Start activity, with the form assigned to that activity, or to a Submit task. I preferred to use the Form Start activity since it simplifies the process by removing the human task and assigning the web form straight to the Form Start activity. Although I haven't added this as a feature to my migration script yet, you can simply follow the steps I have shown in the quick video below, performing them right after running a BPM migration and importing the migrated artifacts from Oracle BPM (.exp files) into Oracle OIC/PCS, similarly to what will be covered in the next section, "Getting Started with your Migration to the Cloud".
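Pulling sections 1 and 2 together, here is a minimal, hypothetical sketch (again in Python rather than the author's bash framework) of the kind of pre-import cleanup involved: stripping the unsupported task-definition elements and renaming scriptTask elements to serviceTask. The folder layout, element names, and blanket text replacement are assumptions for illustration; the real framework additionally wires each service task to the PCS REST connection, which is omitted here.

```python
# Hypothetical pre-import cleanup sketch; not the author's migration framework.
# 1) Remove task-definition elements PCS rejects (e.g. hideCreator).
# 2) Rename scriptTask elements to serviceTask so the project validates.
import pathlib
import re

UNSUPPORTED_TASK_ELEMENTS = ["hideCreator", "excludeSaturdayAndSunday"]

def clean_task_definition(path: pathlib.Path) -> None:
    text = path.read_text(encoding="utf-8")
    for element in UNSUPPORTED_TASK_ELEMENTS:
        # Drop both expanded and self-closing forms, with or without a namespace prefix.
        text = re.sub(rf"<(?:\w+:)?{element}\b.*?</(?:\w+:)?{element}>", "", text, flags=re.DOTALL)
        text = re.sub(rf"<(?:\w+:)?{element}\b[^>]*/>", "", text)
    path.write_text(text, encoding="utf-8")

def script_tasks_to_service_tasks(path: pathlib.Path) -> None:
    text = path.read_text(encoding="utf-8")
    # Rename the BPMN element only; the service binding still has to be
    # configured (or disabled) afterwards, as described above.
    text = text.replace("scriptTask", "serviceTask")
    path.write_text(text, encoding="utf-8")

if __name__ == "__main__":
    export_root = pathlib.Path("BpmProject")          # assumed export folder name
    for task_file in export_root.rglob("*.task"):
        clean_task_definition(task_file)
    for bpmn_file in export_root.rglob("*.bpmn"):
        script_tasks_to_service_tasks(bpmn_file)
```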
Getting Started with your Migration to the Cloud

After covering all the topics so far, which were essential for a deeper understanding of this migration procedure, the long-awaited moment has finally come: you can get your hands dirty and see the migration in action. Let's get down to work!

Oracle BPM -> Oracle Autonomous Integration Cloud (OIC)

Please watch the video below to find out how to migrate a process from Oracle BPM straight to OIC. TravelApplication is a sample application built on Oracle BPM, leveraging Web Forms (frevvo) as the forms technology. Since that forms technology is not supported in OIC, note that one step shown in the video is to remove the forms folder so that the application can be properly imported into OIC. Then, as mentioned earlier, these forms will need to be created from scratch, but at least all other artifacts such as processes, business types, integrations, indicators, etc. were successfully imported into OIC.

Oracle BPM -> Oracle Process Cloud Service (PCS)

This is a similar video to the one above, but this time you can see an end-to-end migration from Oracle BPM to PCS, highlighting the following steps:

Export the process from Oracle BPM
Migrate it with my framework
Import it into PCS
Validate the application
Test & deploy
Run the process from the Process Workspace

Now it's time to see an actual migration in action

Below you will find videos describing a real migration done for a customer, using the migration framework to migrate Oracle BPM processes to PCS, ICS, and Integration Analytics. Note that even though this work was done for a previous version of OIC (PCS & ICS), the majority of the procedures and techniques shared below are valid for OIC as well.

Part 1 — BPM to OIC/PCS Migration

This is the first step towards generating the first BPM project to be imported into PCS. It walks you through the original Oracle BPM application (BPM Composer and Studio) and shows how to create the first migratable project on Oracle OIC/PCS by leveraging the BPM Cloud Migration framework available at this GitHub repository. Although you can find references to PCS in this video, the same technique applies to OIC as well.

Part 2 — OIC (formerly ICS), the Integration Layer

This step demonstrates how to install and set up the OIC/ICS connectivity agent to be used by integrations that require access to the customer's Oracle database tables. It also shows how to build an integration from scratch in OIC/ICS to access customer database tables and then expose them as REST services to be consumed by Oracle OIC/PCS. Although you can find references to ICS in this video, the same technique applies to OIC as well.

Part 3 — PCS & ICS Integration

This video demonstrates how to leverage the services created in OIC/ICS to replace those from the original process created with the Oracle DB adapter within a SOA composite. It also showcases how to link those OIC/ICS services to OIC/PCS service call activities, how to map inbound and outbound data, and shows the first deployable version to be tested and run on PCS. Although you can find references to both ICS & PCS in this video, the same technique applies to OIC as well.

Part 4 — Integration Analytics

This video guides you through creating a Business Insight model with milestones and business metrics (measures and dimensions), assigning them to their respective milestones, and finally exposing those milestone APIs to be consumed by Oracle OIC/PCS.
Part 5 — OIC/PCS Business Insight Integration

This video shows how to enable and link Business Insight within OIC/PCS and how to deploy the final version to be used and tested at run time. Although you can find references to PCS in this video, the same technique applies to OIC as well.

PCS/ICS/Integration Analytics (now OIC) End-to-End Demo After a Successful Migration

This video walks you through all the products described earlier, but from the run-time perspective. It starts by showing a process instance kicked off through a web form, then an approval via a mobile app and integration with Content and Experience Cloud. It also guides you through all the default and custom dashboards created in Integration Analytics, as well as how to monitor integrations and track process integration instances in OIC/ICS. This is a comprehensive and seamlessly integrated demo that highlights how this Oracle PaaS service can bring more value and benefits to customers that have the same or a similar use case. Although you can find references to both ICS & PCS in the video below, the same technique applies to OIC as well.

Stuff to Know

Here is a summary of all available migration paths considering the current BPM offerings (Oracle BPM, PCS, and OIC, the latest one), with their respective caveats depending on the forms technology being used for your process automation.

Oracle BPM -> Oracle PCS
Web Forms: Migration will be smooth, since PCS supports the old web forms technology based on frevvo web forms.
ADF: Although you can migrate processes and all other assets, you will have to rewrite your forms. The recommendation is to do so by leveraging the new Web Forms technology.

Oracle BPM -> OIC
Regardless of the forms technology currently used in your Oracle BPM project, the forms will have to be created from scratch if you need a straight migration to OIC. Oracle is working on a forms migration tool; when available, it will allow you to migrate your old frevvo-based web forms to the latest Web Forms technology. As such, a reasonable approach (only applicable if you are using frevvo web forms) would be to migrate from BPM to PCS to keep the forms as they are, followed by a migration from PCS to OIC leveraging the new web forms migration expected to be available this calendar year.

PCS -> OIC
Migration is transparent if the forms were built with the new forms technology also available in PCS. Otherwise, you will need to rewrite the forms, or wait until the forms migration tool mentioned in the item above is available, which will allow forms built in PCS with the old forms technology (frevvo) to be properly migrated to OIC.

Conclusion

Hopefully this article has provided you with some guidance on how to perform a migration from Oracle BPM to our managed version in the cloud called OIC, and has also indicated some of the options currently available depending on the sort of technology employed in your Oracle BPM implementation. Below you will find some highlights of the key steps for an end-to-end migration.

Disclaimer

Although the procedure described in this blog post is the result of multiple successful engagements migrating customers from Oracle BPM to Oracle OIC/PCS, please be advised that it is not supported by Oracle and must be used at your own discretion.

Please like this article if it has helped you to understand the migration procedures from Oracle BPM to Oracle Autonomous Integration Cloud Service, and please leave a comment with your feedback.
This is very important to help me keep writing new articles about this sort of content.


Integration

Where and How Blockchain can be a better option than the traditional centralized systems? — A straightforward answer for a very common question

By Andre Boaventura, Senior Manager of Product Management

After having written this article explaining a variety of the consensus mechanisms for Blockchain available today, I received hundreds of comments and positive feedback from readers who had been struggling to understand what makes cryptocurrencies and blockchain work securely in a decentralized way, and who, after reading my article, clearly understood the foundation supporting Blockchain implementations (thanks to all my readers, by the way). However, there are others who are still struggling to understand the benefits of a blockchain versus a traditional, centralized-model-based system, and where blockchain can really be a "breakthrough" compared to what can be done and achieved by leveraging a centralized system model. Among the dozens of emails I have received asking the same sort of question, that is, where Blockchain can be better than the traditional systems we have been using for decades, I have decided to share one of my answers to one of these colleagues, since I noticed it could benefit others who may be asking themselves the same thing.

The Question

So, here is a sample question related to the concern mentioned above, which I received from a colleague in the UK.

Hi Andre

I’ve read your comprehensive article and thanks for it being most useful and informative. I confess I may need to read it a second time to get a proper understanding of the intricate details and the “Proof of . . .” mechanisms and their pros and cons, but I think I understand the principle of Consensus Mechanisms. My fundamental question (and the reason for this reply to you) is “what are the real-world applications of a blockchain, and (playing devil’s advocate, if you’ll forgive me) why is it any more than an interesting computer science project?”. I can easily see its application to virtual currencies like Bitcoin (whose critics would decry it as no more than a baseless speculation mechanism), but I’m struggling to imagine real-world examples of why I’d need a blockchain, or why it’s better, in some way or other, than what already exists. As an example — suppose I want to send £100 (or 100 Reais) to you. I contact my bank, tell them your banking details, and issue instructions to send the money: it arrives in your account, in due course, via the international banking system. Whilst I can imagine how the equivalent could be achieved via a blockchain in which you and I are both participants, why would doing so be any “better” than the existing mechanism? I speculate that your reply will include that it removes the need for the two banks acting as middlemen, but is that the only advantage? Or have I imagined a poor example? Thanks in advance for your expert reply.

The Answer

Here was my answer to him (except for the comics that I added while writing this article):

First of all, thanks for spending the time to have a look at my article about the consensus mechanisms found in some of the most popular Blockchain implementations available today. Hopefully it has helped you and provided some insights into key terminology that is becoming more and more popular when talking about distributed ledger technologies.
As I initially mentioned in that article, people generally understand (and associate) the usage of this technology with cryptocurrencies, which is natural indeed, since it was born from an undeniably ingenious invention to serve as the foundation of the most popular and valuable cryptocurrency we know today: bitcoin. However, sometimes it is a bit hard (I agree with you) to get started with Blockchain and imagine something different from the traditional model, based on the very common example of money exchange between bank accounts. Therefore, let me try to provide you with a few other benefits and examples of usage that Blockchain technology can bring, going beyond the original usage envisioned by Satoshi Nakamoto, the creator of Bitcoin.

The first question one should be asking is: why is Blockchain so important that everyone is talking about it? Fundamentally, it cryptographically addresses the problem of shared trust: how can entities that don't trust each other transact? Before answering this question, let us look at the current approaches used by financial services companies (the same ones used in your use case above) to solve this problem:

1) Trusted intermediary, e.g. Visa/MC, SWIFT, DTCC, EuroClear
Issues: cost, latency, single point of failure
Blockchain can remove the need for an intermediary and replace it with cryptographically secure protocols

2) Separate records stored by all the different entities
Issues: reconciliation is costly and error prone, does not scale, delayed settlement
Blockchain's distributed ledger is a single source of truth — no reconciliation needed

In a nutshell, Blockchain as a decentralized, peer-to-peer network with no central or controlling authority means eliminating intermediaries, which results in reduced transaction costs and near real-time transaction execution. This is very different from what has been done by financial institutions, especially when transferring money overseas, which is exactly the use case you described to illustrate your question. Additionally, as a distributed-ledger-based technology where all participants maintain a copy of the ledger, it eliminates the manual effort and delays caused by reconciliation, since data consistency is a key attribute of the distributed ledger. Often, data integration between systems of record (SORs) is driven by offline or batch reconciliation processes characterized by delays and manual exception handling. Blockchain can help by using cryptographically secured consensus protocols that assure validation and agreement by all relevant parties, as well as real-time replication of data to each participant's copy of the ledger.

So, getting back to the first question, Blockchain is important and different from what already exists because it enables distributed and autonomous marketplaces, reduces friction in business transactions and reconciliations, and lets you securely maintain and share decentralized records. These capabilities can be used for a variety of use cases, such as the provenance of products, documents, materials, etc., which, as you can see, are not necessarily related only to the financial perspective but can be applied to a countless number of scenarios, which, by the way, I have put together below for your convenience.
Potential Use Cases for Blockchain

Here is a sample list of potential use cases (of course not limited only to the ones listed below) in which you are likely to achieve real benefits by leveraging the key blockchain capabilities highlighted above, such as a single source of truth, trusted transactions, an immutable ledger store, and near-real-time data sharing.

Supply Chain
Genealogy and traceability of parts, components, ingredients
Maintenance parts tracking in multi-layered distribution
Parts & maintenance tracking for aircraft & other regulated assets
Farm-to-table food provenance
Country of origin traceability
Electronic compliance records
Quality control records
Tamper-proof IoT sensor data, non-repudiation of monitored activities

Public Sector
Government records (titles, birth certificates, licenses, etc.) sharing
Customs (import/export licensing, excise taxes)
Regulatory certifications (food, pharma, etc.)
Procurement/Acquisitions
Citizen services, e.g., benefits, multi-agency programs

Healthcare
Electronic Health Record
Service provider credential management
Clinical trials
Tamper-proof IoT sensor data, non-repudiation of monitored activities
Anti-counterfeit track & trace for drugs
Cold chain track & trace
Integration with IoT devices monitoring health or equipment

Telecom
Roaming & interconnect billing
3rd-party service providers
eSIM

Key Characteristics to Remember

After reviewing the answer, we can summarize some of the key characteristics that make Blockchain unique, different, better, and more innovative than a traditional centralized-model-based system.

Decentralized and Distributed (Ledger Storage & Integrity)
Maintains a distributed ledger of facts and a history of the updates
All participants see consistent data
Distributed among participants
Updates replicated across participants
Authorized participants access data

Irreversible and Immutable (Validated/Non-Repudiable Transactions)
Each new block contains a hash of the previous block, creating the chain
All records are encrypted, and only those authorized with the corresponding keys can view the data
Records cannot be undetectably altered or deleted, only appended
Consensus from a subset of nodes on new blocks/transactions
Existence and validity of a record cannot be denied
When consensus is reached under the network's policies, transactions and their results are grouped into blocks, which are appended to the ledger with cryptographically secured hashes for immutability

Near Real Time (Transactions verified and settled in minutes versus days)
Parties interact directly, with no intermediaries
Changes to the ledger are made by smart contracts (business logic) when triggered by transactions from external applications
Participants execute smart contracts on the validating nodes (peers) and follow consensus protocols to verify results

Blockchain is not a solution to all problems!

We used to say "there is an app for that", but nowadays it seems there is a blockchain for everything. So, what is Blockchain good for? Besides the hype around Bitcoin and the people it made rich, what are the real applications? A lot of what we hear about Blockchain is not what it is doing, but what it can do. In many ways, you can do with centralized systems what Blockchain promises to do, with one core difference: trust. That is the big advantage for Blockchain. However, as stated, blockchain is not a silver bullet for all use cases, so there are a few useful questions that one should ask to determine blockchain applicability.
So, if your enterprise or customers can answer "YES" to the questions below, then it is likely that Blockchain can be a good fit for them. Here are the questions to use as a guideline for selecting Blockchain (or not):

Is my business process predominantly cross-departmental/cross-organizational?
Are there cross-system discrepancies that slow down the business?
Is there less than full trust among transacting parties?
Does it involve intermediaries, possibly charging expensive fees, adding risk or delay?
Does it require periodic offline (batch) reconciliations?
Is there a need to improve traceability or the audit trail?
Do we need real-time visibility of the current state of a multi-party transaction?
Can I improve a multi-party business process by automating certain steps in it?

Although Blockchain technology is well suited to recording certain kinds of information, traditional systems such as database systems are better suited to other kinds. It is crucial for every organization to understand what it wants from these different approaches, and to gauge this against the strengths and vulnerabilities of each kind of solution before selecting one.

Conclusion

There is a lot of speculation around Blockchain, but it has great potential. The technology is getting better and better, and solutions are evolving for many of the problems I have mentioned above. What is imperative is to assess whether a proposed Blockchain project is really a solution to the problems mentioned (i.e., it cannot be accomplished without using Blockchain), or is just trying to take advantage of the hype.


Integration

Oracle Named a Leader in 2018 Gartner Magic Quadrant for Enterprise Integration Platform as a Service for the Second Year in a Row

Oracle announced in a press release today that it has been named a Leader in Gartner’s 2018 “Magic Quadrant for Enterprise Integration Platform as a Service” report for the second consecutive year. Oracle believes that the recognition is testament to the continued momentum and growth of Oracle Cloud Platform in the past year.   As explained by Gartner, the Magic Quadrant positions vendors within a particular quadrant based on their ability to execute and completeness of vision separating into the following four categories: Leaders execute well against their current vision and are well positioned for tomorrow. Visionaries understand where the market is going or have a vision for changing market rules, but do not yet execute well. Niche Players focus successfully on a small segment, or are unfocused and do not out-innovate or outperform others. Challengers execute well today or may dominate a large segment, but do not demonstrate an understanding of market direction.   Gartner views integration platform as a service (iPaaS) as having the “capabilities to enable subscribers (aka "tenants") to implement data, application, API and process integration projects involving any combination of cloud-resident and on-premises endpoints.” The report adds, “This is achieved by developing, deploying, executing, managing and monitoring integration processes/flows that connect multiple endpoints so that they can work together.”   “GE leverages Oracle Integration Cloud to streamline commercial, fulfilment, operations and financial processes of our Digital unit across multiple systems and tools, while providing a seamless experience for our employees and customers,” said Kamil Litman, Vice President of Software Engineering, GE Digital. “Our investment with Oracle has enabled us to significantly reduce time to market for new projects, and we look forward to the autonomous capabilities that Oracle plans to soon introduce.”   Download the full 2018 Gartner “Magic Quadrant for Enterprise Integration Platform as a Service” here.   Oracle recently announced autonomous capabilities across its entire Oracle Cloud Platform portfolio, including application and data integration. Autonomous capabilities include self-defining integrations that help customers rapidly automate business processes across different SaaS and on-premises applications, as well as self-defining data flows with automated data lake and data prep pipeline creation for ingesting data (streaming and batch).   Oracle also recently introduced Oracle Self-Service Integration, enabling business users to improve productivity and streamline daily tasks by connecting cloud applications to automate processes. Thousands of customers use Oracle Cloud Platform, including global enterprises, along with SMBs and ISVs to build, test, and deploy modern applications and leverage the latest emerging technologies such as blockchain, artificial intelligence, machine learning and bots, to deliver enhanced experiences.   A Few Reasons Why Oracle Autonomous Integration Cloud is Exciting    Oracle Autonomous Integration Cloud accelerates the path to digital transformation by eliminating barriers between business applications through a combination of machine learning, embedded best-practice guidance, and prebuilt application integration and process automation.  
Here are a few key features:

Pre-Integrated with Applications – A large library of pre-integration with Oracle and 3rd Party SaaS and on-premises applications through application adapters eliminates the slow and error prone process of configuring and manually updating Web service and other styles of application integration.

Pre-Built Integration Flows – Instead of recreating the most commonly used integration flows, such as between sales applications (CRM) and configure, price, quoting (CPQ) applications, Oracle provides pre-built integration flows between applications spanning CX, ERP, HCM and more to take the guesswork out of integration.

Unified Process, Integration, and Analytics – Oracle Autonomous Integration Cloud merges the solution components of application integration, business process automation, and the associated analytics into a single seamlessly unified business integration solution to shrink the time to complete end-to-end business process lifecycles.

Autonomous – It is self-driving, self-securing, and self-repairing, providing recommendations and best next actions, removing security risks resulting from manual patching, and sensing application integration connectivity issues for corrective action.

Discover OAIC for yourself by taking advantage of this limited time offer to start for free with Oracle Autonomous Integration Cloud.

Check here for Oracle Autonomous Cloud Integration customer stories.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


A Practical Path to AI Podcast Series: Podcast #10 – Self-Service Integration Closes the Productivity Gap for AI Adoption

By Kellsey Ruppel, Principal Product Marketing Director

For the tenth podcast in our "Practical Path to AI" podcast series, I had a conversation with Daryl Eicher, Senior Product Marketing Director at Oracle. This was another installment in the series in which we've been covering how Artificial Intelligence (AI) is reshaping the business landscape and helping you better understand how to get on the path to AI adoption.

IDC is forecasting that spending on AI and machine learning will grow from $8B in 2016 to $47B by 2020. Automation, integration, machine learning, and AI technologies are so pervasive that more than 68 percent of us trust and leverage these powerful technologies without knowing it. What began as consumer-centric selling and investing is entering the enterprise, and transformation leaders need to know how to employ these and related disruptive technologies to grow share of wallet and reach new markets in the attention economy. Self-service integration closes the gap between enterprise applications and the tsunami of productivity apps that threatens to overrun enterprise IT if left unmanaged. In this segment of our podcast series, Daryl and I discussed how self-service integration brings the productivity leap of thousands of innovative apps together with the governance that financial services, healthcare, public sector, manufacturing, and retail firms need to build their digital brands.

Wondering what else Daryl had to say? Be sure to catch the podcast "Self-Service Integration Closes the Productivity Gap for AI Adoption" to learn more. You can also listen to the other podcasts in the "A Practical Path to AI" podcast series here!

Additionally, we invite you to attend the webcast, Introducing Oracle Self-Service Integration, on April 18th at 10:00am PT. Vikas Anand, Oracle Vice President of Product Management, will discuss:

Integration trends such as self-service, blockchain, and artificial intelligence
The solutions available in Oracle Self-Service Integration Cloud Service

Register today!


Integration

Demystifying Blockchain and Consensus Mechanisms - Everything You Wanted to Know But Were Never Told

Article written by Andre Boaventura, Senior Manager of Product Management

You have likely heard many descriptions of what blockchain is, and those descriptions were probably somehow related to money. Of course, this is not by chance, but due to many popular technologies such as Bitcoin, Ethereum, Ripple, and many others currently available in the cryptocurrency marketplace, which have this solution, based on DLT (Distributed Ledger Technology), as their core implementation foundation and as the basis for trading cryptocurrencies and other assets through public and private markets. However, Blockchain technology goes much further than just cryptocurrencies. Today, blockchain is already being adopted as part of many everyday B2B transactions, including those powered by enterprise applications such as ERPs, supply chain, financial services, and healthcare systems, and the list goes on.

The Blockchain is an undeniably ingenious invention – the brainchild of a person or group of people known by the pseudonym Satoshi Nakamoto. But since then, it has evolved into something greater, and the main question every single person is asking is: what is Blockchain?

By definition, Blockchain is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically contains a cryptographic hash of the previous block, a timestamp, and transaction data. By design, a blockchain is inherently resistant to modification of the data. It is "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way". For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks, which requires collusion of the network majority.

Generally speaking, a blockchain network is a system for maintaining distributed ledgers of facts and the history of the ledgers' updates. This approach allows organizations that don't fully trust each other to agree on the updates submitted to a shared ledger by using peer-to-peer protocols rather than a central third party or a manual offline reconciliation process. Blockchain enables real-time transactions and securely shares tamper-proof data across a trusted business network.

There are essentially two types of Blockchain from the access perspective:

Permissionless

Basically, anyone can read the chain, anyone can make legitimate changes, and anyone can write a new block into the chain (as long as they follow the rules). Bitcoin is by far the most popular example of a successful public blockchain network. It is totally decentralized, and it is also described as a 'censor-proof' blockchain. Bitcoin and other cryptocurrencies such as Ethereum currently secure their blockchain by requiring new entries to include a proof of work. However, due to the way a public blockchain works, they require a computationally intensive mining process to add blocks cryptographically, and their consensus models are based on computationally expensive algorithms that require the processing power of many nodes to ensure security. The great advantage of an open, permissionless, or public, blockchain network is that guarding against bad actors is not required and no access control is needed.
This means that applications can be added to the network without the approval or trust of others, using the blockchain as a transport layer. For these reasons, it is also known by its widest description, a public blockchain. But, obviously, this is not the only way to build a blockchain.

Permissioned

Essentially, these are a closed ecosystem where members are invited to join and keep a copy of the ledger (e.g. Hyperledger, R3 Corda). Permissioned blockchains use an access control layer to govern who has access to the network. In contrast to public blockchain networks, validators on private blockchain networks are vetted by the network owner. They do not rely on anonymous nodes to validate transactions, nor do they benefit from the network effect; instead they rely on a consensus protocol, like bitcoin's proof of work (the one we hear about most often), that does two basic things: it ensures that the next block in a blockchain is the one and only version of the truth, and it keeps powerful adversaries from derailing the system and successfully forking the chain.

A consensus protocol comprises three basic steps:

Endorsement: determine whether to accept or reject a transaction
Ordering: sort all transactions within a time period into a sequence
Validation: verify that endorsements satisfy policy and that the read set is valid

Permissioned networks can also go by the name of 'consortium' or 'hybrid' blockchains.

Blockchain Consensus Mechanisms

A blockchain is a decentralized peer-to-peer system with no central authority figure. While this creates a system that is devoid of corruption from a single source, it still creates a major problem. How are any decisions made? How does anything get done? Think of a normal centralized organization. All the decisions are taken by the leader or a board of decision makers. This isn't possible in a blockchain because a blockchain has no "leader". For the blockchain to make decisions, the participants need to come to a consensus using "consensus mechanisms". So, how do these consensus mechanisms work, and why do we need them? What are some of the consensus mechanisms used in cryptocurrencies and in Blockchain implementations such as Hyperledger? All these questions will be answered later on; however, let's understand how consensus works before talking about the available implementations.

In simpler terms, consensus is a dynamic way of reaching agreement in a group. While voting just settles for a majority rule without any thought for the feelings and well-being of the minority, a consensus, on the other hand, makes sure that an agreement is reached which could benefit the entire group as a whole. A method by which consensus decision-making is achieved is called a "consensus mechanism". So now that we have defined what a consensus is, let's look at what the objectives of a consensus mechanism are:

Agreement Seeking: A consensus mechanism should bring about as much agreement from the group as possible.
Collaborative: All the participants should aim to work together to achieve a result that puts the best interest of the group first.
Cooperative: The participants shouldn't put their own interests first and should work as a team more than as individuals.
Egalitarian: A group trying to achieve consensus should be as egalitarian as possible. What this basically means is that each and every vote has equal weight. One person's vote can't be more important than another's.
Inclusive: As many people as possible should be involved in the consensus process.
It shouldn't be like normal voting, where people don't really feel like voting because they believe their vote won't carry any weight in the long run.
Participatory: The consensus mechanism should be such that everyone actively participates in the overall process.

Now that we have defined what consensus mechanisms are and what they should aim for, we need to think about the other questions: which consensus mechanisms should be used for a blockchain network to keep its essential characteristics, such as reliability, security, and availability? We hear plenty of talk about how public blockchains are going to change the world, but to function on a global scale, a shared public ledger like Bitcoin needs a functional, efficient, and secure consensus algorithm. Before Bitcoin, there were loads of iterations of peer-to-peer decentralized currency systems that failed because they were unable to answer the biggest problem when it came to reaching a consensus. This problem is called the "Byzantine Generals Problem (BGP)".

Byzantine Generals Problem (BGP)

Imagine that several divisions of the Byzantine army are camped outside an enemy city, each division commanded by its own general. The generals can communicate with one another only by messenger. After observing the enemy, they must decide upon a common plan of action. However, some of the generals may be traitors, trying to prevent the loyal generals from reaching agreement. The generals must decide when to attack the city, but they need a strong majority of their army to attack at the same time. The generals must have an algorithm to guarantee that (a) all loyal generals decide upon the same plan of action, and (b) a small number of traitors cannot cause the loyal generals to adopt a bad plan. The loyal generals will all do what the algorithm says they should, but the traitors may do anything they wish. The algorithm must guarantee condition (a) regardless of what the traitors do, and the loyal generals should not only reach agreement but should agree upon a reasonable plan.

Looking at the picture above, you can understand the problem and the challenge the Byzantine generals face while attacking a city. They are facing two very distinct problems:

The generals and their armies are very far apart, so centralized authority is impossible, which makes a coordinated attack very tough.
The city has a huge army, and the only way they can win is if they all attack at once.

What these generals need is a consensus mechanism that can make sure their army actually attacks as a unit despite all these setbacks. This has clear parallels to blockchain as well. The chain is a huge network; how can you possibly trust it? If you were sending someone 4 Bitcoin from your wallet, how would you know for sure that someone in the network isn't going to tamper with it and change 4 to 40 Bitcoin? This is where consensus mechanisms come to the rescue. As such, we are now going to go through a list of consensus mechanisms that can solve the Byzantine Generals Problem for some very well-known Blockchain networks such as Bitcoin, Ethereum, Ripple, Peercoin, and Hyperledger.

Proof of Work (PoW)

Bitcoin uses Proof of Work (PoW) to ensure blockchain security and consensus. "Proof of Work", as its name implies, requires that the decentralized participants that validate blocks show that they have invested significant computing power in doing so. In bitcoin, validators (known as "miners") compete to process a block of transactions and add it to the blockchain.
In proof of work, miners compete to add the next block (a set of transactions) to the chain by racing to solve an extremely difficult cryptographic puzzle. They do this by churning through enough random guesses on their computers to come up with an answer within the parameters established by the bitcoin protocol. This process requires an immense amount of energy and computational power; the puzzles have been designed in a way that makes them hard and taxing on the system. Essentially, the puzzle that needs solving is to find a number that, when combined with the data in the block and passed through a hash function, produces a result that is within a certain range. This is much harder than it sounds.

The main character in this game is called a "nonce", which is an abbreviation of "number used once". In the case of bitcoin, the nonce is an integer between 0 and 4,294,967,296. How do miners find this number? By guessing at random. The hash function makes it impossible to predict what the output will be. So, miners guess the mystery number and apply the hash function to the combination of that guessed number and the data in the block. The resulting hash has to start with a pre-established number of zeroes. There's no way of knowing which number will work, because two consecutive integers will give wildly varying results. What's more, there may be several nonces that produce the desired result, or there may be none (in which case the miners keep trying, but with a different block configuration). When a miner solves the puzzle, they present their block to the network for verification. Verifying whether the block belongs to the chain or not is an extremely simple process.

The first to solve the puzzle wins the lottery. As a reward for his or her efforts, the miner receives newly minted bitcoins and a small transaction fee. The difficulty of the calculation (the required number of zeroes at the beginning of the hash string) is adjusted frequently, so that it takes on average about 10 minutes to process a block. Why 10 minutes? That is the amount of time that the bitcoin developers think is necessary for a steady and diminishing flow of new coins until the maximum number of 21 million is reached (expected some time in 2140).

Yet, although a masterpiece in its own right, bitcoin's proof of work isn't quite perfect. Common criticisms include that it requires enormous amounts of computational energy, that it does not scale well (transaction confirmation takes about 10-60 minutes), and that the majority of mining is centralized in areas of the world where electricity is cheap, leading to an inefficient process because of the sheer amount of power and energy it eats up. That said, people and organizations that can afford faster and more powerful ASICs (application-specific integrated circuit chips) usually have a better chance of mining than the others. As a result, bitcoin isn't as decentralized as it wants to be. Theoretically speaking, the big mining pools could simply team up with each other and launch a 51% attack on the bitcoin network. As a result, those who have significant financial resources have come to dominate the bitcoin mining space, and mining today is embodied by the emergence of enterprise-style, datacenter-hosted mining operations. Bitcoin creator Satoshi Nakamoto woke us up to the potential of the blockchain, but that doesn't mean we can't keep searching for faster, less centralized, and more energy-efficient consensus algorithms to carry us into the future.
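To make the mechanics above concrete, here is a small, illustrative Python sketch of the idea. It is a toy, not Bitcoin's actual mining code: real Bitcoin hashes an 80-byte block header with double SHA-256 and compares the result against a compact difficulty target rather than a simple zero-prefix check.

```python
# Toy proof-of-work: find a nonce such that the hash of (block data + nonce)
# starts with a given number of zero characters. Illustrative only.
import hashlib

def mine(block_data: str, difficulty: int = 4, max_nonce: int = 2**32):
    prefix = "0" * difficulty
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest          # puzzle solved
    return None, None                     # no nonce found for this block layout

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    # Verification is cheap: one hash and a prefix check.
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

if __name__ == "__main__":
    data = "prev_hash=00ab...|tx1;tx2;tx3"   # hypothetical block contents
    nonce, digest = mine(data)
    print("nonce:", nonce, "hash:", digest)
    print("valid:", verify(data, nonce))
```

Note how asymmetric the work is: mining loops over many candidate nonces, while verifying a proposed block takes a single hash, which is exactly why the network can cheaply check what miners expensively produce.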
Other examples can be found below, such as PoS (Proof-of-Stake), Proof-of-Activity, and some others available today.

Proof-of-Stake (PoS)

The most common alternative to proof of work is proof of stake. In this type of consensus algorithm, instead of investing in expensive computer equipment in a race to mine blocks, a 'validator' invests in the coins of the system. Note the term validator: no coin creation (mining) exists in proof of stake. Instead, all the coins exist from day one, and validators (also called stakeholders, because they hold a stake in the system) are paid strictly in transaction fees. Systems that don't use proof of work are also often called virtual mining systems because they have no mining activity. The network selects an individual to approve new messages (that is, to confirm the validity of new information submitted to the database) based on their proportional stake in the network. In other words, instead of individuals attempting to calculate a value in order to be chosen to establish a consensus point, the network itself runs a lottery to decide who will announce the results, and participants are exclusively and automatically entered into that lottery in direct proportion to their total stake in the network.

As in the PoW system run by Bitcoin, the PoS system run by organizations such as Peercoin also provides an incentive for participation, which ensures the broadest possible network participation and therefore the most robust network security possible. In the Peercoin system, the chosen party is rewarded with new Peercoin in a process called 'minting' (rather than Bitcoin's 'mining').

As mentioned, proof of stake makes the entire mining process virtual and replaces miners with validators. Here is an outline of how the process works: the validators have to lock up some of their coins as stake. After that, they start validating blocks; when they discover a block that they think can be added to the chain, they validate it by placing a bet on it. If the block gets appended, the validators get a reward proportionate to their bets. In proof of stake, your chance of being picked to create the next block depends on the fraction of coins in the system you own (or set aside for staking). A validator with 300 coins will be three times as likely to be chosen as someone with 100 coins. Once a validator creates a block, that block still needs to be committed to the blockchain. Different proof-of-stake systems vary in how they handle this: in some implementations every node in the system has to sign off on a block until a majority vote is reached, while in other systems a random group of signers is chosen.

As you can see, the PoS protocol is a lot more resource-friendly than PoW. In PoW, you NEED to waste a lot of resources just to follow the protocol; it is basically resource wastage for the sake of resource wastage. Although PoS seems to be the most reasonable replacement for PoW, since it avoids the issues found in PoW (enormous amounts of computational energy, and a level of centralization in which a few large pools together control more than 50% of the Bitcoin network), there is a very common problem that still needs to be solved by PoS before it can be widely adopted by production blockchain implementations.
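As a rough illustration of the "lottery in proportion to stake" idea described above, here is a toy Python sketch. This is not how any real PoS chain such as Peercoin or Ethereum actually picks validators; real systems derive the selection from on-chain randomness and additional rules, and the names and seed below are purely hypothetical.

```python
# Toy stake-weighted validator selection: the probability of being picked is
# proportional to the coins a validator has locked up as stake.
import random

stakes = {"alice": 300, "bob": 100, "carol": 100}

def pick_validator(stakes: dict, seed: int) -> str:
    # Seeding makes the draw reproducible; a real chain would derive this
    # from previous block data rather than a local seed value.
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

if __name__ == "__main__":
    draws = [pick_validator(stakes, seed=height) for height in range(10_000)]
    for name in stakes:
        share = draws.count(name) / len(draws)
        print(f"{name}: selected in {share:.1%} of blocks")
    # alice (300 coins) should be picked roughly 3x as often as bob or carol.
```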
So, reviewing the way PoS works with regard to security, the common questions that arise are the following: what is to discourage a validator from creating two blocks and claiming two sets of transaction fees? And what is to discourage a signer from signing both of those blocks? This has been called the 'nothing-at-stake' problem: a participant with nothing to lose has no reason not to behave badly. In the burgeoning field of 'crypto-economics', blockchain engineers are exploring ways to tackle this and other problems. One answer is to require a validator to lock their currency in a type of virtual vault; if the validator tries to double sign or fork the system, those coins are slashed. Additionally, by rewarding those who are already most deeply involved in the network, this system inherently creates an increasingly centralized system, which is inimical to a truly robust network. Therefore, proponents of PoS systems have put forward a number of modifications to help ensure the base of their networks remains as broad (and therefore as secure) as possible.

Peercoin was the first coin to implement proof of stake. Ethereum currently relies on proof of work but is planning a move to proof of stake in early 2018, addressing the 'nothing-at-stake' problem with a new approach called the Casper protocol.

Also, there is a variation of this method called delegated proof-of-stake (DPoS). This system works along the same lines as the PoS system, except that individuals choose an overarching entity to represent their portion of stake in the system. So imagine that each individual decides whether entity 1, 2, or 3 (these could be, for example, computer servers, and are called 'delegate nodes' within a DPoS system) will 'represent' his or her individual stake in the system. This allows individuals with smaller stakes to team up to magnify their representation, thereby creating a mechanism to help balance out the power of large stakeholders. This comes, however, at the cost of greater network centralization. Bitshares is one company that employs a DPoS system.

Proof-of-Activity (PoA)

Proof of activity was created as an alternative incentive structure for bitcoin. It is a hybrid approach that combines both proof of work and proof of stake. In proof of activity, mining kicks off in a traditional proof-of-work fashion, with miners racing to solve a cryptographic puzzle. Depending on the implementation, the blocks mined do not contain any transactions (they are more like templates), so the winning block will only contain a header and the miner's reward address. At this point, the system switches to proof of stake. Based on information in the header, a random group of validators is chosen to sign the new block. The more coins in the system a validator owns, the more likely he or she is to be chosen. The template becomes a full-fledged block as soon as all of the validators sign it. If some of the selected validators are not available to complete the block, then the next winning block is selected, a new group of validators is chosen, and so on, until a block receives the correct number of signatures. Fees are split between the miner and the validators who signed off on the block. Criticisms of proof of activity are the same as for both proof of work (too much energy is required to mine blocks) and proof of stake (there is nothing to deter a validator from double signing).
Decred is the only coin right now using a variation of proof of activity.

Practical Byzantine Fault Tolerance Algorithm (PBFT)

The Practical Byzantine Fault Tolerance algorithm (PBFT) was designed as a solution to the problem presented in the form of an allegory described earlier, under the Byzantine Generals Problem (BGP) section. To clarify the allegory for our purposes: the 'generals' in the story are the parties participating in the distributed network running the blockchain (database) in question. The messengers they send back and forth are the means of communication across the network on which the blockchain runs. The collective goal of the 'loyal generals' is to decide whether or not to accept a piece of information submitted to the blockchain (database) as valid. A valid piece of information would be, in our allegory, a correct opportunity to decide in favor of attack. Loyal generals, for their part, are faithful blockchain participants, who are interested in ensuring the integrity of the blockchain (database) and therefore in ensuring that only correct information is accepted. The treacherous generals, on the other hand, would be any party seeking to falsify information on the blockchain (the database). Their potential motives are myriad — it could be an individual seeking to spend a Bitcoin that she does not actually own, or another person who wants to get out of contractual obligations outlined in a smart contract he has already signed and submitted.

Various computer scientists have outlined a number of potential solutions to the Byzantine Generals Problem from the allegory. The Practical Byzantine Fault Tolerance algorithm (PBFT), which is used to establish consensus in blockchain systems, is only one of those potential solutions. Three examples of blockchains that rely on PBFT for consensus are Hyperledger, Stellar, and Ripple. Very roughly, and without explaining the whole algorithm (which would take a multi-page research paper), what PBFT does is as follows: each 'general' maintains an internal state (ongoing specific information or status). When a 'general' receives a message, they use the message in conjunction with their internal state to run a computation or operation. This computation in turn tells that individual 'general' what to think about the message in question. Then, after reaching his individual decision about the new message, that 'general' shares that decision with all the other 'generals' in the system. A consensus decision is determined based on the total decisions submitted by all generals. Among other considerations, this method of establishing consensus requires less effort than the other methods described earlier. Also, PBFT was initially devised for low-latency storage systems, something that could be applicable in digital-asset-based platforms that don't require a large amount of throughput but do demand many transactions.

Hyperledger's Approach to Consensus

The Hyperledger project allows developers to create their own digital assets with a distributed ledger powered by nodes built on the principles of PBFT. The system could be used to digitally back a real asset (such as a house), create new coins, or form a fault-tolerant system of consensus. The idea for Hyperledger's use of PBFT goes beyond asset-based systems.
It takes the idea of an algorithm for consensus and uses it to distribute all sorts of technical solutions, not just the low-latency, high-speed file storage solution it was originally built to provide. This might be a good method of testing the power of nodes that do not use incentives to develop their strength. What will happen without such rewards? Systems like Hyperledger aim to find out. If you use Byzantine Fault Tolerance, corruption problems are ideally contained: the other nodes can recognize that a node is misbehaving and stop responding to its messages.

In distributed ledger technology, consensus has recently become synonymous with a specific algorithm, within a single function. However, consensus encompasses more than simply agreeing upon the order of transactions, and this differentiation is highlighted in Hyperledger Fabric through its fundamental role in the entire transaction flow, from proposal and endorsement, to ordering, validation and commitment. In a nutshell, consensus is defined as the full-circle verification of the correctness of a set of transactions comprising a block.

In the Hyperledger implementation, consensus is ultimately achieved when the order and results of a block's transactions have met the explicit policy criteria checks. These checks and balances take place during the lifecycle of a transaction, and include the usage of endorsement policies to dictate which specific members must endorse a certain transaction class, as well as system chaincodes to ensure that these policies are enforced and upheld. Prior to commitment, the peers employ these system chaincodes to make sure that enough endorsements are present and that they were derived from the appropriate entities. Moreover, a versioning check takes place during which the current state of the ledger is agreed or consented upon, before any blocks containing transactions are appended to the ledger. This final check provides protection against double-spend operations and other threats that might compromise data integrity, and allows functions to be executed against non-static variables.

Also, because Hyperledger Fabric is permissioned by nature and requires all participants to be authenticated, this reliance on identity can be used not only to govern levels of access control (e.g. this user can read the ledger, but cannot exchange or transfer assets), but also to allow varying consensus algorithms (e.g. Byzantine or crash fault tolerant) to be implemented in place of the more compute-intensive proof-of-work and proof-of-stake varieties described earlier in this section. As a result, permissioned networks tend to provide higher transaction throughput rates and performance.

In addition to the multitude of endorsement, validity and versioning checks that take place, there are also ongoing identity verifications happening in all directions of the transaction flow. Access control lists are implemented on hierarchical layers of the network (from the ordering service down to channels), and payloads are repeatedly signed, verified and authenticated as a transaction proposal passes through the different architectural components. To summarize, consensus is not merely limited to the agreed-upon order of a batch of transactions; rather, it is an overarching characterization that is achieved as a byproduct of the ongoing verifications that take place during a transaction's journey from proposal to commitment.
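To illustrate the kind of versioning check described above, here is a deliberately simplified Python sketch (not Hyperledger Fabric's actual API or data structures): before commitment, each transaction's read versions are compared against the current ledger state, so a transaction racing to move the same asset a second time is marked invalid rather than committed.

```python
# Illustrative sketch only: a key/value ledger where every key carries a
# version, and a transaction records which versions it read.
ledger = {"asset-42": {"value": "owned-by-alice", "version": 7}}

def validate_and_commit(tx: dict) -> bool:
    # tx = {"reads": {key: version_read}, "writes": {key: new_value}}
    for key, version_read in tx["reads"].items():
        current = ledger.get(key, {"version": 0})
        if current["version"] != version_read:
            return False   # stale read: another tx already changed this key
    for key, new_value in tx["writes"].items():
        prev_version = ledger.get(key, {"version": 0})["version"]
        ledger[key] = {"value": new_value, "version": prev_version + 1}
    return True

transfer_to_bob = {"reads": {"asset-42": 7}, "writes": {"asset-42": "owned-by-bob"}}
transfer_to_carol = {"reads": {"asset-42": 7}, "writes": {"asset-42": "owned-by-carol"}}

print(validate_and_commit(transfer_to_bob))    # True: versions match, commit
print(validate_and_commit(transfer_to_carol))  # False: stale read, rejected as a double spend
```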
Conclusion about consensus mechanisms

While these systems for establishing consensus are currently the most dominant, the field is still wide open to innovation, both through variations of these implementations and through entirely new approaches. Some other examples are proof of burn, proof of capacity, and proof of elapsed time. As blockchain systems continue to gain in popularity, they will also continue to grow in scale and complexity. Which of these consensus-building systems (if any) is best equipped to handle this ongoing expansion remains to be seen. Currently, companies choose the system for their product that best meets their (or their customers') needs for speed, efficiency, and security.

It is important to note that these systems differ not only in the details of how their respective consensus-building communities are formed, but also, importantly, in how they would handle potential attacks. This is, in fact, one of the clearest distinguishing features between the consensus-building systems: the potential size of an attack on the system that could be easily managed.

If you've made it this far, then congratulations! There is still so much more to explain about the Blockchain and Hyperledger, but at least now you have an idea of the broad outline of the genius of the programming and the concept. For the first time we have a system that allows for convenient digital transfers in a decentralized, trust-free and tamper-proof way. The sky is the limit for Blockchain!

Article written by Andre Boaventura, Senior Manager of Product Management

Integration

Great APIs need a plan!

We recently released API Platform Cloud Service 18.1.5, and with that we are introducing phase 1 of plans! To be fully transparent, we've had plans built into the service from day 1, but this marks a step in making the feature available. What are plans, you may ask? Plans provide measured access to one or more APIs, serving as the foundation for monetization. Plans define limits at the subscriber level that stretch across APIs.

To explain this further, let's use the example of a rate limit. A rate limit controls the number of calls within a certain time period. The API Rate Limit protects a system by limiting the number of calls that may be made to a particular API, no matter who is calling the API. For example, if my back-end system can handle no more than 10,000 requests per second, I may set an API Rate Limit of 10,000 per second, which would apply for all callers. Another limit is the Application Rate Limit, which we can call the "fair share" limit. This stipulates that no one application can get more than a limited number of calls within a certain time period. For example, I may decide that no one application can get more than 1,000 calls per second. If I have 5 applications subscribed, then this means that there can be a total of 5,000 requests per second.

Plans take this forward in a much richer way, in that I can set limits for the consumer. With plans, the API consumer now subscribes to the plan, and APIs are entitled to that plan. When a consumer subscribes to the plan, the consumer gets access to all of the APIs entitled in the plan. We can now set a limit at the plan level. For example, a consumer may be limited to 100 calls per second. This limit applies across all of the APIs. This means that while the API can handle up to 10,000 calls per second, and any application can make up to 1,000 calls per second, the plan that the subscriber happens to be subscribed to sets a limit of 100 calls per second, so the limit for that subscriber is the lowest of the three limits. This limit also stripes across all APIs in the plan, meaning that the calls are counted for the subscriber no matter how many APIs happen to be entitled in the plan. We can also, within the plan, set limits for specific APIs in that plan. This is not to be confused with the API Rate Limit policy; rather, it is a plan limit applied specifically to that API as entitled in the plan.

Plans allow us to create consumer groups where we can control access based on the subscription rather than just the API itself. This provides the foundation for monetization, where you can segment consumers based on the plan they are entitled to. This is just the beginning, as we will be bringing more monetization features, but this already provides great value for enterprises that want to define limits across groups. To learn more about API Plans, visit Managing Plans in our documentation!
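To make the layering of limits in the example above concrete, here is a rough, hypothetical Python sketch (not the API Platform Cloud Service implementation) of how the three limits interact: the API rate limit, the application "fair share" limit, and the plan limit counted across every API entitled in the plan. The names, window logic, and numbers are illustrative only.

```python
import time
from collections import defaultdict, deque

# Numbers taken from the example in the post: 10,000/sec per API,
# 1,000/sec per application, 100/sec per plan subscriber.
LIMITS = {"api": 10000, "application": 1000, "plan": 100}

windows = {
    "api": defaultdict(deque),          # keyed by API name
    "application": defaultdict(deque),  # keyed by application id
    "plan": defaultdict(deque),         # keyed by subscriber id
}

def allow_call(api: str, application: str, subscriber: str) -> bool:
    now = time.time()
    keys = {"api": api, "application": application, "plan": subscriber}
    # A call is allowed only if it is under every limit; the effective
    # limit for the subscriber is therefore the lowest of the three.
    for scope, key in keys.items():
        window = windows[scope][key]
        while window and now - window[0] > 1.0:   # 1-second sliding window
            window.popleft()
        if len(window) >= LIMITS[scope]:
            return False
    for scope, key in keys.items():
        windows[scope][key].append(now)
    return True

# The 101st call in a second from the same subscriber is rejected by the
# plan limit even though the API and application limits are not reached.
results = [allow_call("orders-api", "mobile-app", "subscriber-1") for _ in range(101)]
print(results.count(True), results.count(False))   # 100 1
```

Note that the plan counter is keyed by the subscriber alone, so calls to any API entitled in the plan draw from the same 100-per-second budget, which is exactly the "stripes across all APIs" behavior described above.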


Integration

Introducing Oracle Self-Service Integration Cloud Service

REGISTER HERE

One of the most exciting innovations in integration over the last decade is arriving just in time to address the surge of productivity apps that need to be integrated into the enterprise. On a general scale, businesses use approximately 2,300 SaaS apps that need to be integrated. Line of business (LOB) users such as marketing campaign managers and sales managers are looking to perform quick and simple self-service integration of these apps themselves, without the need for IT involvement. Oracle Self-Service Integration Cloud Service (SSI) provides the right tools for anyone who wants to connect productivity apps such as Slack or Eventbrite into their business.

For example, perhaps you are a Marketing Campaign Manager and want to receive an alert each time a new digital asset is ready for your campaign. Or you are a Customer Support Representative trying to automate the deployment of survey links when an incident is closed. Or you are a Sales Manager who wants to feed your event attendees and survey respondents into your CRM. SSI has the tools to address all these needs and more.

Oracle Self-Service Integration is solving these business challenges by:

Connecting productivity with enterprise apps - Addressing the quick growth of social and productivity apps that need to be integrated with enterprise apps.

Enabling Self-Service Integration - Providing line of business (LOB) users the ability to connect applications themselves, with no coding, to automate repetitive tasks.

Recipe-based Integration - Making it easier to work faster and smarter with modern cloud apps through an easy-to-use interface, a library of cloud application connectors, and ready-to-use recipes.

SSI increases productivity by bringing together collaborative applications such as Slack with traditional enterprise applications; reduces IT workloads by allowing IT to deliver the initial set-up and any required advanced integration and then offload basic integration updates to LOB; and delivers faster integration and integration updates. There is no training required to use SSI. Simply 'activate' ready-to-run recipes, and customer-added events will automatically trigger the flow of integration.

To learn more, attend this webcast on April 18, 2018 at 10am PT/1pm ET to hear from Vikas Anand, Oracle Vice President of Product Management, as he discusses:

Integration trends such as self-service, blockchain, and artificial intelligence

The solutions available in Oracle Self-Service Integration Cloud Service

The journey to a friction-less enterprise

Register here

For a comprehensive overview of Oracle Self-Service Integration Cloud Service, take a look at our SSI ebook: Make Your Cloud Work for You.


Visit COLLABORATE 18 to Hear the Latest on Oracle Cloud

Post by Wincy Ip, Oracle

As Oracle's Cloud solutions continue to expand across SaaS, IaaS, and PaaS, customers are eagerly evaluating how these offerings can help transform how they run their businesses. Whether users are looking to modernize their business and optimize with new cloud investments, integrate and extend an existing hybrid environment with on-premise systems, or build a personalized path to the cloud, COLLABORATE is the annual Oracle user conference where attendees can learn how they can accelerate business innovation and digital transformation with Oracle Cloud.

In this year's program at COLLABORATE, nearly 50% of the 1,200+ sessions will focus on cloud, developer, and emerging technologies to complement Oracle's on-premise solutions. Here's a preview of some of the education available at COLLABORATE.

In the Oracle keynote session on Monday, April 23 at 2:30 p.m., Steve Daheb, Senior Vice President for Oracle Cloud, will illuminate how the Oracle Cloud Platform makes it possible for organizations to develop their own unique path to cloud from wherever they choose (SaaS, PaaS, or IaaS) and share how organizations have designed their unique journeys.

With the introduction of the world's first-ever autonomous database, COLLABORATE attendees will also hear about the exciting developments and get a sneak peek at the Oracle Autonomous Database Cloud, and how Oracle is integrating AI and machine learning into its suite of cloud services to make them fully autonomous and cognitive. These sessions will explore how organizations can benefit from more autonomy in their software, from business users to app developers to DBAs.

Additionally, there are more than 500 sessions available that span Oracle's SaaS, IaaS, and PaaS solutions, where attendees can learn how Oracle's cloud offerings can accelerate business transformation, increase agility, and optimize security with their existing solutions. Some of these sessions include:

Your Journey to Cloud with Choice and Control [Session ID: 109730]
Move Your Oracle Workloads to Oracle Cloud: No Pain, Lots of Gain [Session ID: 109430]
Oracle Cloud Infrastructure - The Best of On-Premises and Cloud in a Single Infrastructure Solution [Session ID: 112020]
Advanced Architectures for Deploying Oracle Applications on Oracle Cloud Infrastructure [Session ID: 107820]
Extend and Enhance ERP and Supply Chain with Oracle Cloud Platform [Session ID: 104320]
Bitcoin Tech: How Blockchain Helps Extend Boundaries for Enterprise Applications and SaaS [Session ID: 104340]
The Next Big Things: AI, Machine Learning, Chatbots, IOT, and Blockchain [Session ID: 110080]

COLLABORATE is the largest annual technology and applications forum for the Oracle user community in North America. Taking place on April 22-26 in Las Vegas, Nevada, and hosted by three Oracle user groups – IOUG, OAUG, and Quest International Users Group – the five-day conference will host more than 5,000 attendees in keynotes, sessions, workshops, networking events, and an exhibitor showcase with 200+ vendors.

See what COLLABORATE 18 has to offer. You can also review the complete agenda and search by keyword, education track, product line, or business goal. Register at attendcollaborate.com by April 18 and save up to 25% from the onsite registration price.
