
Welcome to All Things Data Integration: Announcements, Insights, Best Practices, Tips & Tricks, and Trends.

Recent Posts

Data Integration

New Whitepaper: Oracle GoldenGate - Innovations for Another 20 Years

Over the past 20 years, the GoldenGate data replication platform has evolved from a startup technology targeted at ATM bank networks into a global phenomenon used in every industry by thousands of businesses on every continent. By most measures, GoldenGate has become the most successful data integration product in the history of enterprise software. What started it all was an intense focus on solving the most demanding business continuity challenges: zero downtime for databases and constant availability of important business data. As the technology advanced, it became widely used for high-end analytic data warehouses and decision support scenarios across much of the Global 2000 industrial base. After 20 years at the top, a whole new set of innovations will propel the GoldenGate technology through another two decades of market leadership. These recent innovations include:

Non-Relational Data Support – for SaaS applications, Big Data, and cloud
Kernel Integration with Oracle Database – far better performance than any other vendor
Remote Capture for Non-Oracle Databases – reduced workloads and simpler administration
Simplification, Automation and Self-Service – no need for DBAs for most actions
Microservices Core Foundation – more secure, more modular, and easier to work with
Simplified, Open Framework for Monitoring – more choices for DevOps
Containers, Kubernetes and Docker – faster and easier to deploy GoldenGate
Stream Processing and Stream Analytics – added value with event processing
Autonomous Cloud – let Oracle Cloud do the patching and optimizing for you
Low-Cost (Pay As You Go) Subscriptions – GoldenGate for the cost of a cup of coffee

The remainder of the paper provides more detail on these innovations and explains how they drive business results for the kind of modern digital transformation that IT and business leaders are seeking today. Click here to read the full whitepaper!


Data Integration

GoldenGate for Big Data 12.3.2.1.1 Release Update

Date: 05-Sep-2018

I am pleased to announce the release of Oracle GoldenGate for Big Data 12.3.2.1.1. Major features in this release include the following:

New Target – Google BigQuery
Oracle GoldenGate for Big Data release 12.3.2.1.1 can deliver CDC data to Google BigQuery cloud data storage from all supported GoldenGate data sources.

New Target – Oracle Cloud Infrastructure Object Storage
Oracle GoldenGate for Big Data can now directly upload CDC files in different formats to Oracle Object Storage on both Oracle Cloud Infrastructure (OCI) and Oracle Cloud Infrastructure Classic (OCI-C). The integration with Object Storage is provided by the File Writer Handler.

New Target – Azure Data Lake
You can connect to Microsoft Azure Data Lake to process big data jobs with Oracle GoldenGate for Big Data.

Other improvements:

Extended S3 targets: Oracle GoldenGate for Big Data can now officially write to third-party object storage that is compatible with the S3 API, such as Dell ECS Storage.
Support for Kafka REST Proxy API V2: You can now use either Kafka REST Proxy API V1 or V2, specified in the Big Data properties file (a short sketch of what a V2 produce call looks like appears at the end of this post).
Security: Support for Cassandra SSL capture.
Length Delimited Value Formatter: A row-based formatter that formats database operations from the source trail file into length-delimited value output.
Timestamp with Timezone property: You can consolidate the format of timestamps with this timezone property.
Avro Formatter improvements: You can write the Avro decimal logical type and the Oracle NUMBER type.
Newer certifications: Apache HDFS 2.9, 3.0, 3.1; Hortonworks 3.0; CDH 5.15; Confluent 4.1, 5.0; Kafka 1.1, 2.0; and many more!

More information on Oracle GoldenGate for Big Data:
Learn more about Oracle GoldenGate for Big Data 12c
Download Oracle GoldenGate for Big Data 12.3.2.1.1
Documentation for Oracle GoldenGate for Big Data 12.3.2.1.1
Certification Matrix for Oracle GoldenGate for Big Data 12.3.2

Prior releases:
May 2018 Release: Oracle GoldenGate for Big Data 12.3.2.1 is released
Aug 2017 Release: What Everybody Ought to Know About Oracle GoldenGate Big Data 12.3.1.1 Features
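To give a feel for the Kafka REST Proxy V2 interface that the new handler support targets, here is a minimal, hypothetical Python sketch of a V2 "produce" call. The proxy host, topic name, and record payload are illustrative assumptions, not values taken from GoldenGate; in a real deployment the handler performs these calls itself based on the Big Data properties file.

```python
# Minimal sketch of a Kafka REST Proxy V2 "produce" call (illustrative only).
# Assumes a REST Proxy at rest-proxy.example.com:8082 and a topic named
# "gg_orders" -- both hypothetical values for this example.
import json
import requests

REST_PROXY_URL = "http://rest-proxy.example.com:8082"   # assumed endpoint
TOPIC = "gg_orders"                                      # assumed topic

record = {
    "records": [
        {
            "key": "ORDER-1001",
            # A simplified change record; GoldenGate emits its own format.
            "value": {"op_type": "I", "table": "SALES.ORDERS", "amount": 42.50},
        }
    ]
}

response = requests.post(
    f"{REST_PROXY_URL}/topics/{TOPIC}",
    headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
    data=json.dumps(record),
    timeout=30,
)
response.raise_for_status()
print(response.json())  # offsets of the written records
```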


Oracle Named a Leader in 2018 Gartner Magic Quadrant for Data Integration Tools

Oracle has been named a Leader in Gartner’s 2018 “Magic Quadrant for Data Integration Tools” report based on its ability to execute and completeness of vision. Oracle believes that this recognition is a testament to Oracle’s continued leadership and focus on its data integration solutions.

The Magic Quadrant positions vendors within a particular quadrant based on their ability to execute and completeness of vision. According to Gartner’s research methodologies, “A Magic Quadrant provides a graphical competitive positioning of four types of technology providers, in markets where growth is high and provider differentiation is distinct: Leaders execute well against their current vision and are well positioned for tomorrow. Visionaries understand where the market is going or have a vision for changing market rules, but do not yet execute well. Niche Players focus successfully on a small segment, or are unfocused and do not out-innovate or outperform others. Challengers execute well today or may dominate a large segment, but do not demonstrate an understanding of market direction.”

Gartner shares that “the data integration tools market is composed of tools for rationalizing, reconciling, semantically interpreting and restructuring data between diverse architectural approaches, specifically to support data and analytics leaders in transforming data access and delivery in the enterprise.” The report adds, “This integration takes place in the enterprise and beyond the enterprise — across partners and third-party data sources and use cases — to meet the data consumption requirements of all applications and business processes.” Download the full 2018 Gartner “Magic Quadrant for Data Integration Tools” here.

Oracle recently announced autonomous capabilities across its entire Oracle Cloud Platform portfolio, including application and data integration. Autonomous capabilities include self-defining integrations that help customers rapidly automate business processes across different SaaS and on-premises applications, as well as self-defining data flows with automated data lake and data prep pipeline creation for ingesting streaming and batch data.

A Few Reasons Why Oracle Data Integration Platform Cloud is Exciting

Oracle Data Integration Platform Cloud accelerates business transformation by modernizing technology platforms and helping companies adopt the cloud through a combination of machine learning, an open and unified data platform, prebuilt data and governance solutions, and autonomous features. Here are a few key features:

Unified data migration, transformation, governance and stream analytics – Oracle Data Integration Platform Cloud merges data replication, data transformation, data governance, and real-time streaming analytics into a single unified integration solution to shrink the time to complete end-to-end business data lifecycles.
Autonomous – Oracle Data Integration Platform Cloud is self-driving, self-securing, and self-repairing, providing recommendations and data insights, removing risks through machine-learning-assisted data governance, and keeping the platform up automatically by predicting and correcting for downtime and data drift.
Hybrid Integration – Oracle Data Integration Platform Cloud enables data access across on-premises, Oracle Cloud, and third-party cloud solutions so that businesses have ubiquitous, real-time data access.
Integrated Data Lake and Data Warehouse Solutions – Oracle Data Integration Platform Cloud has solution-based “elevated” tasks that automate data lake and data warehouse creation and population to modernize customer analytics and decision-making platforms.

Discover DIPC for yourself by taking advantage of this limited-time offer to start for free with Oracle Data Integration Platform Cloud. Check here to learn more about Oracle Data Integration Platform Cloud.

Gartner, Magic Quadrant for Data Integration Tools, Mark A. Beyer, Eric Thoo, Ehtisham Zaidi, 19 July 2018. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Data Integration

New Whitepaper: Leverage the Oracle Data Integration Platform Inside Azure and Amazon Cloud

In this whitepaper, find out how you can leverage Oracle Data Integration Platform Cloud to move your on-premises data onto Azure and Amazon Web Services.

Oracle Data Integration Platform Cloud (DIPC) is a highly innovative data integration cloud service. A series of industry-first capabilities have been rolled out since its inception in 2017, including streaming data replication, pushdown processing for data transformations (no engine required), and first-class big data ingestion capabilities that support a wide variety of Apache open source projects such as Hive, HBase, Flume, Cassandra, and Kafka.

One of the most innovative architectural patterns that Oracle Data Integration Platform Cloud supports is the ability to push workloads to compute resources of the customer’s choice while preserving the capability for customers to keep their physical records behind their own firewalls and within their own security zones. While you can move data to both Oracle and non-Oracle environments, this paper focuses on moving data to the Azure and Amazon clouds. There is absolutely no DIPC requirement for customers to put any of their business data into Oracle networks or any cloud resources at all. Oracle DIPC allows customers to keep their data within any of the following:

On-premises data centers – which could include any regional or geographically distributed data centers that customers operate themselves or lease from third-party operators
Amazon cloud data centers – supporting IaaS and PaaS integrations with Amazon services in any AWS regional data center
Azure cloud data centers – supporting IaaS and PaaS integrations with Microsoft Azure data centers across regions
Or any other data center that needs to support their workloads

The remainder of the paper provides specific details about supported use cases for Oracle DIPC to support innovative next-generation data flows within the Amazon and Azure clouds.

Click here to read the full whitepaper.


Data Integration

Data Integration Platform Cloud (DIPC) 18.3.3 New Tasks

Just a few days ago, we wrote about the newest release of Data Integration Platform Cloud (DIPC), 18.3.3. This is all very exciting! Here is a bit more on the two newest Elevated Tasks and the inclusion of Stream Analytics in DIPC.

This release of DIPC helps with data lake automation, enabling an intuitive instantiation and copy of data into a data lake and helping reduce some of the existing data engineer / data scientist friction through a new Data Lake Builder task. You can quickly create a comprehensive, end-to-end repeatable data pipeline to your data lake. And note that nothing is moved to the data lake without being fully governed! When you add data to the data lake, DIPC follows a repeatable pattern to harvest, profile, ingest, shape, copy, and catalog this data. Data can be ingested from a variety of sources, including relational sources, flat files, and more. Harvested metadata is stored in the DIPC Catalog, and the data is transformed and secured within the target data lake for downstream activities. For more information, see Adding Data to Data Lake.

The Replicate Data Task helps address high availability. Replicate into Oracle… or Kafka! And bring that together with Stream Analytics, which makes event processing possible on real-time data streams, including Spatial, Machine Learning, and queries on the data stream or cubes. With Stream Analytics, you can analyze complex event data streams that DIPC consumes using sophisticated correlation patterns, enrichment, and machine learning to provide insights and real-time business decisions. Very simply, the Replicate Data Task delivers changes from your source data to the target. You set up connections to your source and target, and from the moment that you run this task, any new transaction in the source data is captured and delivered to the target. This task doesn't perform an initial copy of the source (for the initial load, see Setting up a Synchronize Data Task), so you'll get all the changes from the point in time that you started your job. This task is especially ideal for streaming data to Kafka targets (a small downstream consumer sketch follows at the end of this post). For more information, see Setting up a Replicate Data Task.

For more tutorials, videos, and other resources on DIPC, please visit the Documentation, as well as the A-Team Chronicles for interesting Data Integration technical know-how.
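Because the Replicate Data Task can stream changes into a Kafka topic, any standard Kafka client can pick those changes up downstream. The short Python sketch below is a hypothetical consumer and is not part of DIPC itself; the broker address, topic name, and JSON payload shape are assumptions for illustration only.

```python
# Hypothetical downstream consumer for a topic fed by a DIPC Replicate Data Task.
# Broker address, topic name, and message format are illustrative assumptions.
# Requires the kafka-python package: pip install kafka-python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dipc_replicated_orders",                      # assumed topic name
    bootstrap_servers=["kafka.example.com:9092"],  # assumed broker
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    change = message.value
    # Each record represents a captured source change; the keys depend on the
    # delivery format configured for the task.
    print(f"offset={message.offset} change={change}")
```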


Data Integration Platform Cloud (DIPC) 18.3.3 is available!

Data Integration Platform Cloud (DIPC) 18.3.3 is now available! Do you know what DIPC is? Check out this short 2-minute video!

DIPC is innovative! With an Elevated Task-driven approach that guides users through their Data Integration journey, DIPC seeks to simplify and revolutionize Data Integration! An Elevated Task is a simple, pre-defined set of steps that assists in creating a specific, useful, and common Data Integration job. These tasks result in simpler solutions, such as data migrations or data warehouse/data lake automation, and projects that are delivered more quickly yet are well designed and effective. Let’s cover some of the brand new and exciting features and tasks in this release!

This release helps with data lake automation, enabling an intuitive instantiation and copy of data into a data lake and helping reduce some of the existing data engineer / data scientist friction through a new Data Lake Builder task! You can quickly create a comprehensive, end-to-end repeatable data pipeline to your data lake. And note that nothing is moved to the data lake without being fully governed!

The Replicate Data Task helps address high availability. Replicate into Oracle… or Kafka! And bring that together with Stream Analytics, which makes event processing possible on real-time data streams, including Spatial, Machine Learning, and queries on the data stream or cubes. With Stream Analytics, you can analyze complex event data streams that DIPC consumes using sophisticated correlation patterns, enrichment, and machine learning to provide insights and real-time business decisions.

Additionally, you have already heard us mention the Synchronize Data Task, the Data Preparation Task, and the ODI Execution Task, and this release features enhancements to many of these. For a quick recap:

Synchronize Data: The Synchronize Data task enables you to copy your selected data from a source to a target, and then keeps both databases in sync. You can also use filter rules to include or exclude specific data entities in your job. Any change in the source schema is captured and replicated in the target, and vice versa. After you create a Synchronize Data task and set it up to synchronize all data or specific data entities, you can run the job. If you set up policies for your job, you'll receive notifications for your specified criteria. For more information, see Creating a Synchronize Data Task.
Data Preparation: The Data Preparation task enables you to harvest data from a source File or Oracle database Connection, and then cleanse, organize, and consolidate that data, saving it to a target Oracle database. For more information, see Setting up a Data Preparation Task.
ODI Execution: Invoke an existing ODI Scenario to perform bulk data transformations. For more information, see Setting up an ODI Execution Task.

This release also provides updated components under the covers, such as Enterprise Data Quality 12.2.1.3.0, Oracle Data Integrator 12.2.1.3.1, GoldenGate for Oracle 11g/12c 12.3.0.1.0, GoldenGate for Big Data 12.3.2.1.0, and GoldenGate for MySQL 12.2.0.1.1. Oracle Stream Analytics is part of DIPC as well, and is available only for user-managed Data Integration Platform Cloud instances.

Want to learn more? Visit the DIPC site and check out some of our archived webcasts HERE!


Data Integration

Oracle Integration Day is Coming to a City near You

Are you able to innovate quickly in the new digital world? Are you looking for ways to integrate systems and data faster using a modern cloud integration platform? Is your Data Integration architecture allowing you to meet your uptime, replication, and analytics/reporting needs? Is your organization able to achieve differentiation and disruption?

Join Oracle product managers and application/data integration experts to hear about best practices for the design and development of application integrations, APIs, and data pipelines with Oracle Autonomous Integration Cloud and Data Integration Platform Cloud. Hear real-world stories about how Oracle customers are able to adopt new digital business models and accelerate innovation through integration of their cloud, SaaS, and on-premises applications and databases, and Big Data systems. Learn about Oracle’s support for emerging trends such as Blockchain, Visual Application Development, Self-Service Integration, and Stream Analytics to deliver competitive advantage.

With interactive sessions, deep-dive demos, and hands-on labs, Oracle Integration Day will help you to:

Understand how Oracle’s Data Integration Platform Cloud (DIPC) can help derive business value from enterprise data, getting data to the right place at the right time reliably and ensuring high availability
Understand Oracle's industry-leading use of Machine Learning/AI in its Autonomous Integration Cloud and how it can significantly increase speed and improve delivery of IT projects
Quickly create integrations using Oracle’s simple but powerful Integration Platform as a Service (iPaaS)
Secure, manage, govern, and grow your APIs using Oracle API Platform Cloud Service
Understand how to leverage and integrate with Oracle’s new Blockchain Cloud Service for building new value chains and partner networks

Integration Day begins on August 8 in Tampa. Register now to reserve your spot! Click the links to learn more about your local Integration Day:

August 8, 2018 – Tampa
August 15, 2018 – Los Angeles
August 22, 2018 – Denver
September 6, 2018 – San Francisco
September 19, 2018 – New York City
September 26, 2018 – Toronto
October 3, 2018 – Boston
December 5, 2018 – Chicago
January 23, 2019 – Atlanta
January 30, 2019 – Dallas
February 6, 2019 – Washington DC
February 20, 2019 – Santa Clara


New Whitepaper: EU GDPR as a Catalyst for Effective Data Governance and Monetizing Data Assets

The European Union (EU) General Data Protection Regulation (GDPR) was adopted on the 27th of April 2016 and came into force on the 25th of May 2018. Although many of the principles of GDPR have been present in country-specific legislation for some time, there are a number of new requirements which impact any organization operating within the EU. As organizations implement changes to processes, organization and technology as part of their GDPR compliance, they should consider how a broader Data Governance strategy can leverage their regulatory investment to offer opportunities to drive business value.

This paper reviews some of the Data Governance challenges associated with GDPR and considers how investment in GDPR Data Governance can be used for broader business benefit. It also reviews the part that Oracle’s data governance technologies can play in helping organizations address GDPR. The following Oracle products are discussed in this paper:

Oracle Enterprise Metadata Manager (OEMM) – metadata harvesting and data lineage
Oracle Enterprise Data Quality (EDQ) – for operational data policies and data cleansing
Oracle Data Integration Platform Cloud – Governance Edition (DIPC-GE) – for data movement, cloud-based data cleansing, and subscription-based data governance

Read the full whitepaper here.


Data Integration

Data Integration Platform Cloud for SaaS Applications

Customers generate enormous amounts of data in SaaS applications, and that data is critical to business decisions such as reducing procurement spend or maximizing workforce utilization. With most customers using multiple SaaS applications, many of these decisions are made in analytical engines outside of SaaS, or need external data brought into SaaS so decisions can be made there. In this blog we examine common data movement and replication needs in the SaaS ecosystem and how Oracle’s Data Integration Platform Cloud (DIPC) enables access to SaaS data and helps with decision making.

Data Integration Challenges for SaaS

As applications moved from on-premises to SaaS, they provided a number of benefits, but a number of pre-existing assumptions and architectures also changed. Let us examine a few changes in the enterprise landscape, which are by no means comprehensive.

First, on-premises applications in most cases provided access to the application at the database level, typically read-only. This has changed, with hardly any SaaS vendor providing database access. Customers now work with REST APIs (or earlier versions of SOAP APIs) to extract and load bulk data. While APIs have many advantages, including removing dependency on the application schema, they are no match for SQL queries and have pre-set data throttling limits defined by the SaaS vendor.

Second, most customers have multiple SaaS applications, which makes it imperative to merge data from different pillars for any meaningful analysis or insight: Sales with Product, Leads with Contacts, Orders with Inventory, and the list goes on. While each SaaS application provides some analytical capability, most customers prefer modern best-of-breed tools and open architectures for analytical processing of their data. This could range from traditional relational databases with Business Intelligence to modern data lakes with Spark engines.

Third, most enterprise customers have either an application or an analytical/reporting platform on-premises, which necessitates data movement between cloud and on-premises; that is, a hybrid cloud deployment.

Fourth, semi-structured and unstructured data sources are increasingly used in decision making. Emails, Twitter feeds, Facebook and Instagram posts, log files, and device data all provide context for transactional data in relational systems.

And finally, decision-making timelines are shrinking, with a need for real-time data analysis more often than not. While most SaaS applications provide batch architectures and REST APIs, they struggle to provide robust streaming capability for real-time analysis. Customers need SaaS applications to be part of both Kappa- and Lambda-style architectures.

Let us take a peek into how Oracle Data Integration Platform Cloud addresses these issues.

Mitigating SaaS Data Integration Challenges with DIPC

Data Integration Platform Cloud (DIPC) is a cloud-based platform for data transformation, integration, replication, and governance. DIPC provides batch and real-time data integration among cloud and on-premises environments and brings together the best-of-breed Oracle data integration products Oracle GoldenGate, Oracle Data Integrator, and Oracle Enterprise Data Quality within one unified cloud platform. You can find more information on DIPC here. For Oracle’s Fusion applications, such as ERP Cloud, HCM Cloud, and Sales Cloud, DIPC supports a number of load and extract methods with out-of-the-box connectors. These include BI Publisher, BI Cloud Connector, and other standard SOAP/REST interfaces.
The choice of interface depends on the specific use case. For example, to extract large datasets for a given subject area (say, Financials > Accounts), BI Cloud Connector (BICC) is ideal with its incremental extract setup in Fusion. BICC provides access to Fusion Cloud data via Public View Objects (PVOs). These PVOs are aggregated into subject areas (Financials, HCM, CRM, etc.), and BICC can be set up to pull full or incremental extracts manually or programmatically. DIPC integrates with BI Cloud Connector to kick off an extract, download the PVO data files in chunks, unzip and decrypt them, extract data from the CSV formats, read metadata from the mdcsv files, and finally load the data to any target such as Database Cloud Service or Autonomous Data Warehouse Cloud Service (a small sketch of this file-handling pattern appears at the end of this section). For smaller datasets, DIPC can call existing or custom-built BI Publisher reports and load data to any target.

For other SaaS applications, DIPC has drivers for Salesforce, Oracle Service Cloud, Oracle Sales Cloud, and Oracle Marketing Cloud. These drivers provide a familiar JDBC-style interface for data manipulation while accessing the SaaS applications over REST/SOAP APIs. In addition, other SaaS applications that provide JDBC-style drivers, such as NetSuite, can become sources and targets for ELT-style processing in DIPC. DIPC also has generic REST and SOAP support, allowing access to any SaaS REST API. You can find the list of sources and targets supported by DIPC here.

DIPC simplifies data integration tasks using Elevated Tasks, and users can expect more wizards and recipes for common SaaS data load and extract tasks in the future. The DIPC Catalog is populated with metadata and sample data harvested from SaaS applications. In the DIPC Catalog, users create Connections to SaaS applications, after which a harvest process is kicked off to populate the Catalog with SaaS Data Entities. From this Catalog, users can create Tasks with Data Entities as sources and targets, and wire together a pipeline data flow including joins, filters, and standard transformation actions. Elevated Tasks can also be built to feed SaaS data to a data lake or data warehouse such as Oracle Autonomous Data Warehouse Cloud (ADWCS). In addition, a full-featured Oracle Data Integrator is embedded inside for existing ODI customers to build out Extract, Load, and Transform scenarios for SaaS data integration. Customers can also bring their existing ODI scenarios to DIPC using ODITask. An ODITask is an ODI scenario exported from ODI and imported into DIPC for execution, and it can be wired to SaaS sources and targets.

Figure above shows the DIPC Catalog populated with ERP Cloud View Objects.

Figure above shows details for the Work Order View Object in the DIPC Catalog.

For hybrid cloud architectures, DIPC provides a remote agent that includes connectors to a wide number of sources and targets. Customers who wish to move or replicate data from on-premises sources can deploy the agent and have data pushed to DIPC in the cloud for further processing, or vice versa for data being moved to on-premises applications. The remote agent can also be deployed on non-Oracle clouds for integration with databases running on third-party clouds.

For real-time and streaming use cases from SaaS applications, DIPC includes Oracle GoldenGate, the gold standard in data replication. When permissible, SaaS applications can deploy GoldenGate to stream data to external databases, data lakes, and Kafka clusters.
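Returning to the BICC extract flow described above: DIPC handles the download, decryption, and load steps itself, but the general file-handling pattern can be pictured with a small, hypothetical Python sketch. The archive name, staging table, and the use of sqlite3 as a stand-in target are assumptions for illustration, and decryption of the downloaded files is omitted.

```python
# Hypothetical sketch: post-processing a BICC extract file outside of DIPC.
# DIPC performs these steps internally; this only illustrates the pattern of
# unzip -> parse CSV chunks -> load rows into a relational staging target.
import csv
import io
import sqlite3      # stand-in for any DB-API target (DBCS, ADW, etc.)
import zipfile

EXTRACT_ZIP = "file_fscmtopmodelam_finextractam_20180901.zip"  # assumed name
TARGET_TABLE = "GL_ACCOUNTS_STG"                               # assumed staging table

conn = sqlite3.connect("staging.db")
conn.execute(f"CREATE TABLE IF NOT EXISTS {TARGET_TABLE} (payload TEXT)")

with zipfile.ZipFile(EXTRACT_ZIP) as archive:
    for member in archive.namelist():
        if not member.endswith(".csv"):
            continue  # skip the mdcsv metadata files in this simple sketch
        with archive.open(member) as raw:
            reader = csv.reader(io.TextIOWrapper(raw, encoding="utf-8"))
            for row in reader:
                # Store each PVO row; a real pipeline would map columns explicitly.
                conn.execute(
                    f"INSERT INTO {TARGET_TABLE} (payload) VALUES (?)",
                    (",".join(row),),
                )
conn.commit()
conn.close()
```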
GoldenGate can either be deployed to read directly from the SaaS production database instance and mine the database redo log files, or it can run on a standby/backup copy of the SaaS database and use the cascading redo log transmission mechanism. This mechanism leads to minimal latency and delivers Change Data Capture of specific SaaS transaction tables to an external database or data warehouse, providing real-time transaction data for business decisions.

Using these comprehensive features in DIPC, we are seeing customers sync end-of-day and end-of-month batches of Salesforce Account information into E-Business Suite. Fusion Applications customers are able to extract from multiple OTBI subject areas and merge/blend Sales, Financials, and Sales/Service objects to create custom data marts. And in retail, we have customers using GoldenGate’s change data capture to sync store data to Retail SaaS apps at corporate in real time.

In summary, DIPC provides a comprehensive set of features for SaaS customers to integrate data into data warehouses, data lakes, databases, and other SaaS applications, in both real time and batch. You can learn more about DIPC here.


Oracle GoldenGate Veridata 12.2+ Bundle Patch: New Enhancements

Last week, we released the GoldenGate Veridata bundle patch (12.2.1.2.180615). The release contains two significant improvements along with a few bug fixes.

GoldenGate Veridata now certifies High Availability (HA) for the Veridata Server and Veridata Agents, leveraging the High Availability support provided by WebLogic Server (WLS). We have officially certified it and documented the detailed steps for you to harness it. You can find the details in the Oracle By Example created by Anuradha Chepuri.

When the primary Veridata server fails, the other Veridata server (the backup server) serves the requests from connected Veridata agents. Any in-flight requests need to be re-initiated by users. Both Veridata servers are connected to a shared repository so that all metadata is available and kept up to date on both servers. Veridata Agent HA support has also been tested: when the primary Veridata agent fails, the backup Veridata agent takes over. New requests are diverted to the backup agent, and existing requests need to be re-initiated by users. The VIP address needs to be added to the Veridata Agent configuration file so that seamless failover can happen.

The other major feature allows Active Directory (AD) users to access the GoldenGate Veridata product. We have created new roles in Veridata for Active Directory. The following AD Veridata roles have been added:

ExtAdministrator
ExtPowerUser
ExtDetailReportViewer
ExtReportViewer
ExtRepairOperator

You need to import these roles by creating them in your WebLogic Server. The screen below shows how to create the ExtAdministrator role; all other roles can be created similarly. Once all the required roles are imported in the WebLogic Server, you may assign them to your AD users or groups of users. Here, I am editing the ExtAdministrator role: I want to add a group, so I am adding the existing AD group called "DIQAdmin" to it. The AD users who are part of DIQAdmin can then access the Veridata product.

You can see my blog on the earlier GoldenGate Veridata release here. Let me know if you have any questions.


Data Integration

Walkthrough: Oracle Autonomous Data Integration Platform Cloud Provisioning

We recently launched the Oracle Autonomous Data Integration Platform Cloud (ADIPC) Service, a brand new autonomous, cloud-based platform for all your data integration needs that helps you migrate and extract value from data by bringing together complete Data Integration, Data Quality, and Data Governance capabilities. You can get more information about it in Introducing Oracle Autonomous Data Integration Platform Cloud (ADIPC). In this article, I will focus on the provisioning process and walk you through how to provision an Autonomous Data Integration Platform Cloud (ADIPC) instance in the Oracle Cloud. In a previous blog, my colleague Julien Testut walked you through how to provision the user-managed Data Integration Platform Cloud service. From here onward, I will refer to "Autonomous Data Integration Platform Cloud" as ADIPC, "Autonomous Data Integration Platform Cloud Instance" as ADIPC Instance, and "Data Integration Platform Cloud" as DIPC User Managed.

First, you will need to access your Oracle Cloud Dashboard. You can do so by following the link you received after subscribing to the Oracle Cloud, or you can go to cloud.oracle.com and Sign In from there. The service does not have any prerequisites; you can directly create an ADIPC Instance. An in-built Database Cloud Service (DBCS) instance is provided for storing the ADIPC repository content for all editions. The DBCS instance is not accessible to the user; it is used internally by ADIPC and is self-managed.

Go to the Dashboard, click Create Instance, click All Services, and scroll down to find Data Integration Platform under Integration. Click Create next to it. This will get you to the ADIPC Service Console page.

You can create a service by clicking either QuickStarts or Create Instance. The QuickStarts option provides ready-to-use templates for the different editions. Click QuickStarts at the top-right corner. The page displays the Governance Edition template; upcoming releases will add new templates. The instance name is automatically generated for you, and you may change the name if required. Click Create and the ADIPC Instance will be created for you.

If you want to select various input parameters while provisioning, click Create Instance on the Service Console page to navigate to the provisioning screens. In the Details screen, under Instance Details, enter the Service Name, Description, Notification Email, Tags, and On Failure Retain Resources. In the Configuration section, you may select the Region and Availability Domain where you want to deploy your ADIPC Instance. In the Service section, select the Data Throughput (Data Volume), which has four choices. ADIPC has a new data-volume-based metering model, where you choose the option based on the data volume in your Data Integration environment; your instance will have compute resources sized for the selected data volume. If you want to utilize your on-premises licenses, you may choose the Bring Your Own License option. In this example, I have selected the Governance Edition, which includes Data Governance capabilities in addition to everything included with the ADIPC Standard and Enterprise Editions. When done, click Next to review the configuration summary. Finally, click Confirm to start the ADIPC Instance creation.

You can see the status of the new instance being provisioned in the Oracle Cloud Stack Dashboard. You can also check the Stack Create and Delete History at the bottom of the page, which has more detailed information. You can go to the Dashboard by clicking the Action Menu in the top-left corner and then clicking Dashboard.

Next, let's customize your dashboard to show ADIPC in the Oracle Cloud Dashboard. From the Dashboard, click the minus (-) button in the top-right corner, then click Customize Dashboard. Scroll down in the list and click Autonomous Data Integration Platform Cloud under the Integration section. Autonomous Data Integration will then start appearing on the Dashboard. Click the Autonomous Data Integration Platform Cloud (ADIPC) Action Menu to see more details about your subscription, click Open Service Console to view your ADIPC instances, and click View Account Usage Details to find out how much data you have already consumed.

You can also see the instance status through the ADIPC Service Console. You can click the instance name to get more details about it. When the provisioning process is over, the ADIPC Instance will show as ‘Ready’.

Congratulations! We now have a new Autonomous Data Integration Platform Cloud instance to work with. You can get more information on the product page: Data Integration Platform. In future articles, we will cover more ADIPC autonomous capabilities. In the meantime, please write your comments if you have any questions.


Data Integration

Introducing Oracle Autonomous Data Integration Platform Cloud (ADIPC)

Data has always been the critical asset that sets organizations apart from their competition. As more and more businesses realize this truth, Data Integration is cementing its importance in strategic projects and business transformation conversations. The importance of data integration, the ability to provide access to and unearth value from vast quantities of data, has moved beyond the realm of the IT department and is being recognized as key to business success. Good business decisions can only come from good (properly accessed, prepared, and manipulated) data!

We are pleased to introduce Oracle Autonomous Data Integration Platform Cloud, the next-generation, cloud-based data integration platform from Oracle that can listen to users, learn from user interaction, and take corrective and value-adding actions based on usage patterns. Built-in Artificial Intelligence and Machine Learning features elevate the platform into a responsive, self-aware data platform. Autonomous Data Integration Platform Cloud (ADIPC) is:

Self-Driving – Continuously extract value hidden within data sets through data correlations and other predictive operations.
Self-Securing – Identify and classify data that is stored and processed in the data platform to provide a layer of Machine Learning enabled data governance.
Self-Repairing – Reduce manual labor, eliminate costs, and minimize risks by proactively detecting and automating platform upkeep.

With ADIPC, you and your business will experience increased flexibility to modernize your business platform, universal data access for better decision making, and best-in-class SaaS connectivity to fully exploit your cloud applications. Take your business to the next level by addressing mission-critical challenges through the various solutions ADIPC provides, such as heterogeneous and global data high availability, database migration with simple point and click, self-service data warehouse automation, truly low-latency streaming and integration, and data governance for the business or IT. More detailed uses include self-service data preparation to help minimize manual work in the creation of data pipelines, self-optimizing ETL and machine learning to anticipate changing data, and Solution Tasks to speed up development. Oracle Autonomous Data Integration Platform Cloud delivers the industry’s most comprehensive data integration solution in the cloud, providing all the capabilities you expect as well as new ways to capitalize and innovate on your organization’s data.

ADIPC allows you to:

Access Data – Ensure that all data, from any data source, on-premises or in the cloud, is made available. Access and switch between multiple big data sources without having to learn scripting and programming languages.
Enrich Data – Make sure that the data you use is complete and enriched. Data is made fit for form, to suit the need of what the data is to be used for.
Stream and Migrate Data – Move data across various hybrid environments, cloud and on-premises, both in real time and in bulk. Make decisions with data in flight for time-sensitive actions.
Transform Data – Unlock value from data by combining various data sets, allowing users to discover value by combining data.
Govern Data – Ensure that data is fully identified and easy to use by both business and IT users, dispelling data blind spots within data lakes. Have complete control over your data lifecycle to ensure that the data is being accessed by the right people.

With Oracle Autonomous Data Integration Platform Cloud, you can easily and quickly:

Access and manipulate hundreds of data sources
Eliminate downtime
Perform cloud onboarding
Extract, load, and transform (ELT) data entities
Replicate selected data sources
Get real-time, actionable business insight on streaming data
Trust in data by maintaining and governing data quality with prebuilt processes
Simplify IT and provide self-service for the business
Be more agile and react faster than your competition

Oracle Autonomous Data Integration Platform Cloud is now available to provide you with a comprehensive, cloud-based solution for all of your data integration and data governance needs. Get Started with Oracle Cloud for Free: Click here to experience Oracle Autonomous Data Integration Cloud free!


Looking for Cutting-Edge Data Integration & Governance Solutions: 2018 Cloud Platform Excellence Awards

It is nomination time!!! This year's Oracle Excellence Awards: Oracle Cloud Platform Innovation will honor customers and partners who are on the cutting edge, creatively and innovatively using various products across Oracle Cloud Platform to deliver unique business value. Do you think your organization is unique and innovative and is using Oracle Data Integration and Governance? Are you using Data Integration Platform Cloud, GoldenGate Cloud Service, Oracle Data Integrator Cloud Service, Oracle Stream Analytics, etc.? And are you addressing mission-critical challenges? Is your solution around heterogeneous and global data high availability, database migrations to cloud, data warehouse and data lake automation, low-latency streaming and integration, or data governance for the business or IT, for example? Tell us how Oracle Data Integration is impacting your business! We would love to hear from you! Please submit today in the Data Integration and Governance category. The deadline for nominations is July 20, 2018. Win a free pass to Oracle OpenWorld 2018!!

Here are a few more details on the nomination criteria:

Solution shows innovative and/or visionary use of these products
There is a measurable level of impact such as ROI or other business benefit (or projected ROI)
Solution should have a specific use case identified
Nominations for solutions which are not yet running in production will also be considered
Nominations will be accepted from Oracle employees, Oracle Partners, third parties, or the nominee company

We hope to honor you! Click here to submit your nomination today! And just a reminder: the deadline to submit a nomination is 5pm Pacific Time on July 20, 2018.


Data Integration

Oracle GoldenGate for Big Data 12.3.2.1 is released

What’s new in Oracle GoldenGate for Big Data 12.3.2.1?

New Source – Cassandra
Starting with Oracle GoldenGate for Big Data release 12.3.2.1, GoldenGate can read from NoSQL data stores. With this release, you can capture changes from Cassandra, a columnar NoSQL data store. It can also capture from the beginning, also known as an initial capture.

New Target – Kafka REST Proxy
Oracle GoldenGate for Big Data can now natively write Logical Change Records (LCR) to a Kafka topic in real time using the REST Proxy interface. It supports DDL changes and operations such as insert, update, delete, and primary key update. It supports templates and formatters, provides encoding formats such as Avro and JSON, and supports HTTPS/SSL security.

New Target – Oracle NoSQL
Oracle GoldenGate for Big Data can now officially write to Oracle NoSQL data stores. It can handle Oracle NoSQL data types, mapping between tables and columns, replication of DDL changes, and primary key updates. It supports both Basic and Kerberos methods of authentication.

New Target – Flat Files
Oracle GoldenGate for Big Data has a new flat file writer. It is designed to write to a local file system and then load completed files to another location such as HDFS. This means that analytical tools will not try to access half-processed real-time files, and post-processing capabilities such as transform and merge can be performed by calling a native function.

New Target – Amazon Web Services S3 Storage
Oracle GoldenGate for Big Data can write to a local file system and then load completed files to another location such as AWS S3. The S3 handler can write to pre-created AWS S3 buckets or create new buckets, using the AWS OAUTH authentication method (a small illustrative sketch of the upload step appears at the end of this post).

New Data Formats – ORC & Parquet
Oracle GoldenGate for Big Data can write newer data formats such as ORC and Parquet using the new flat file handler.

Newer certifications: MapR, Hortonworks 2.7, CDH 5.14, Confluent 4.0, MongoDB 3.6, DataStax Cassandra 5.1, Elasticsearch 6.2, Kafka 1.0, and many more!

More information on Oracle GoldenGate for Big Data:
Learn more about Oracle GoldenGate for Big Data 12c
Download Oracle GoldenGate for Big Data 12.3.2.1
Documentation for Oracle GoldenGate for Big Data 12.3.2.1
Certification Matrix for Oracle GoldenGate for Big Data 12.3.2.1
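The flat-file-then-object-store pattern described above (write a completed file locally, then move it to S3) can be pictured with a small, hypothetical Python sketch using boto3. The handler performs this upload itself inside GoldenGate; the bucket, key, and file names below are illustrative assumptions only.

```python
# Hypothetical illustration of the "completed local file -> S3 bucket" step that
# the S3 handler automates. Bucket, prefix, and file names are assumptions.
# Requires: pip install boto3 (and AWS credentials configured in the environment)
import boto3

s3 = boto3.client("s3")

LOCAL_FILE = "/u01/gg/dirout/orders_20180601_001.json"  # assumed completed file
BUCKET = "gg-bigdata-landing"                            # assumed pre-created bucket
KEY = "orders/orders_20180601_001.json"                  # assumed object key

# Upload the completed file; analytical tools then read only finished objects.
s3.upload_file(LOCAL_FILE, BUCKET, KEY)
print(f"Uploaded {LOCAL_FILE} to s3://{BUCKET}/{KEY}")
```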


Data Integration

Easily Integrate Planning & Budgeting Cloud Service (PBCS) with Oracle Data Integrator (ODI)

Author: Thejas B Shetty (Oracle SSI)

Safe Harbor Statement: The following article represents the views of the author only and Oracle does not endorse it. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

Introduction:

In the on-premises version of Hyperion Planning, it is possible to use the pre-packaged ODI Knowledge Modules (KMs) for Essbase to integrate data with the plan types. These knowledge modules use the Essbase Java APIs to connect and communicate directly with the underlying Essbase cubes (plan types), with additional options to execute calculation scripts, MaxL commands, and so on.

In the cloud version of Hyperion Planning, namely Planning and Budgeting Cloud Service (PBCS) / Enterprise Planning and Budgeting Cloud Service (EPBCS), customers do not have direct access to the underlying Essbase cube and hence cannot communicate or integrate directly with it. Oracle has no plans to provide native access to the underlying Essbase cubes of PBCS/EPBCS applications and recommends using alternative methods such as EPM Automate or the REST APIs to communicate with PBCS application components and perform various integration operations. These integration operations are pre-defined as Jobs within the Simplified Interface, which are then invoked from either EPM Automate or direct REST API calls.

Many customers who have extensively used ODI in the past to integrate with Hyperion Planning applications would still like to use ODI with PBCS in the same way, without investing significant time and effort in building new interfaces using FDMEE/Data Management. It would also help customers migrating their on-premises Planning applications to PBCS/EPBCS to move their historical data from the source Essbase cubes to the new PBCS/EPBCS application. Any transformation or re-mapping of data can then be managed in the ODI layer instead of manipulating the data in either the source Planning application or the new PBCS/EPBCS application.

However, the lack of direct access to the underlying Essbase cube of a PBCS/EPBCS application makes it impossible to use the existing Essbase Knowledge Modules, and requires the ODI developer either to utilize an additional command-line utility called EPM Automate within ODI OS commands/procedures, or to learn a new technology, namely the REST API, and leverage it to perform various jobs within PBCS/EPBCS. None of this can be done in a single ODI Mapping; it requires sequencing multiple components in an ODI package.

I have created ODI KMs for PBCS/EPBCS utilizing the PBCS REST APIs in ODI 12c. The KM options are similar to the on-premises Essbase KM options, and the KM code, written in Jython, invokes the PBCS REST APIs to perform equivalent jobs. There are additional KM options specific to PBCS that make the integration process simpler yet very robust. This way, any data in the source, regardless of technology or platform, can be integrated with PBCS/EPBCS using the data stores on an ODI Mapping, treating the PBCS/EPBCS data store as if it were an on-premises Essbase data store. A small illustrative sketch of the kind of REST call such a KM makes under the hood appears below.
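To illustrate the general pattern only (this is not the KM code from this post), here is a minimal Python sketch of launching an IMPORT_DATA job through the EPM Cloud REST interface after a data file has been uploaded. The pod URL, application name, job name, file name, and endpoint path are assumptions and should be verified against the current EPM Cloud REST API documentation.

```python
# Illustrative sketch only -- NOT the actual KM code described in this post.
# Shows the "run an IMPORT_DATA job against an uploaded file" pattern over the
# EPM Cloud (PBCS) REST interface. Host, application, job, and file names, as
# well as the endpoint path, are assumptions to verify against the docs.
import requests

BASE_URL = "https://myplanning-mydomain.pbcs.us2.oraclecloud.com"  # assumed pod URL
USER, PASSWORD = "mydomain.jane.doe", "********"                   # domain.username format
APP, JOB_NAME = "PLANAPP", "LoadActuals"                           # assumed app/job names
DATA_FILE = "actuals_12345.txt"                                    # assumed uploaded file

session = requests.Session()
session.auth = (USER, PASSWORD)

# Launch the IMPORT_DATA job that was defined in the PBCS UI, pointing it at the
# previously uploaded data file (the upload step is omitted in this sketch).
jobs_url = f"{BASE_URL}/HyperionPlanning/rest/v3/applications/{APP}/jobs"  # assumed path
payload = {
    "jobType": "IMPORT_DATA",
    "jobName": JOB_NAME,
    "parameters": {"importFileName": DATA_FILE},
}
resp = session.post(jobs_url, json=payload, timeout=60)
resp.raise_for_status()
job = resp.json()
print("Job submitted, status:", job.get("status"))

# A real KM would now poll the job status URL returned in the response until
# the job completes, and raise an error on failure.
```

In the KMs described in this post, the equivalent logic lives inside the Jython KM code, so the ODI developer only works with the KM options described in the sections that follow.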
For an ODI developer, this KM code decouples the underlying technology (REST APIs) from the standard knowledge and skills needed to load data into PBCS, letting the developer concentrate only on the core ODI components and PBCS functionality. All other processes, such as file generation, uploading the data file to the cloud, and clearing data, are done within the KM code by invoking REST APIs in the background. The required end-point URLs for the REST API calls are also hardcoded within the KM, so the ODI developer does not need knowledge of them. These KMs are presently not officially supported by Oracle and hence no SRs can be raised for them. However, by sharing them with a wider community, I encourage people to use them, modify them, and contribute their valuable input and feedback. You can find these KMs in the ODI Exchange within ODI Studio as well as on this website: link.

Pre-requisites to use the Knowledge Modules:

There are no pre-requisite .jar files required to use the PBCS Knowledge Modules. All the Java libraries used within the KM code should be present in the ODI agent/studio by default after installation of ODI. However, a new ODI Technology named EPM Cloud Service needs to be added to the ODI repository before the KMs can be used. The ODI technology can be downloaded from here.

Additional setup before using the KMs:

Make sure that the machine where the ODI Agent (or ODI Studio) is installed has the required network connection to the PBCS application URL, and that there are no firewall restrictions to the PBCS application URL.

Setting up the Topology for PBCS/EPBCS applications:

Create a new Data Server under the EPM Cloud Service technology and choose a name for it. Update the topology for the Data Server by entering the PBCS cloud URL in the ServiceURL (Data Server) field. Please note that this Service URL should end with oraclecloud.com and not be suffixed with anything else, including special characters and/or spaces. (Oracle employees using the PBCS/EPBCS VM on their laptops can use the Service URL http://192.168.56.101:9000; modify the IP address as per your VM configuration.) Update the username and password used for connecting to the PBCS/EPBCS application. The username should be of the format domain.username for all PBCS/EPBCS GSE/Test/Production pods. (Oracle employees using the PBCS/EPBCS VM on their laptop can enter only the username without the domain, e.g. epm_default_cloud_admin.) Create a physical schema under this Data Server to represent the ApplicationType and ApplicationName. PBCS should be used as the Application Type for both PBCS and EPBCS applications. The KM has not been tested with FCCS/TRCS and hence may or may not work with FCCS/TRCS applications.

Using the Reverse Engineering KM (RKM):

Use the RKM EPM Cloud Service-PBCS Knowledge Module to reverse engineer the PBCS/EPBCS application datastores into your ODI Models, pointing to an EPM Cloud Service type of logical schema. A separate datastore is fetched for each of the BSO/ASO plan types within the PBCS/EPBCS application:

<AppName>_<CubeName>_Data: Used for loading/extracting data into/from PBCS/EPBCS. The datastore has each of the dimensions, excluding attribute dimensions, as ODI column attributes. The numeric column attribute for the data amount is added at the end.

Using IKM SQL to EPM Cloud Service-PBCS (DATA):
Use any Datastore that contains source data on the left hand side of an ODI Mapping, and connect it with a PBCS/EPBCS Datastore on right hand side. If the source datastore is a RDBMS technology (Oracle/MS SQL Server / Generic SQL), it does not require any Staging area. To load data from other technologies including text files, use an appropriate LKM (eg: LKM File to SQL) to extract data into a SQL Staging area (eg: SUNOPSIS_MEMORY_ENGINE, Oracle Staging Area etc). The source data store need not be in the same format as that of PBCS/EPBCS and can have different column names / column count. The IKM will apply the mappings defined, before loading data into PBCS/EPBCS. Map the target expressions from the source datastore. The below image shows an example of how a text file can be integrated with PBCS/EPBCS using an Oracle Database as staging area. Click on the PBCS/EPBCS datastore in Target_Group on the Physical tab and choose the IKM as IKM SQL to EPM Cloud Service-PBCS(DATA). The IKM Options are categorized into 3 groups: General Pre-Load Operations Post-Load Operations General: ·DATA_LOAD_JOB_NAME The name of the job of type IMPORT_DATA defined in the PBCS UI. This job should have Source Type as Essbase & have the same Cube name as that of the Plan type of the target datastore. The Source File name can be any arbitrary name ending with a .txt extension. The actual file name passed to perform the data import, is passed dynamically from ODI using the ODI session number as suffix & overrides any static filename already defined in the job. ·LOG_ENABLED Flag to indicate if logging should be performed during the load process. If set to Yes, logging would be done to the file specified by the LOG_FILE_NAME option. ·LOG_FILENAME The fully qualified name of the file into which logging is to be done. Do not use backslashes (\) in the filename. Pre-Load Operations: ·PRE_LOAD_CLEAR_CUBE: Boolean Flag to indicate whether to run a CLEAR_CUBE job on the Cloud before data is loaded. This requires a job of type CLEAR_CUBE to be setup onetime in the PBCS UI. The name of the CLEAR_CUBE job must be referenced in the CLEAR_CUBE_JOB_NAME option. The definition of the Clear Cube operation has to be defined in the PBCS UI using the various options as shown below & cannot be controlled from the ODI Option. ·CLEAR_CUBE_JOB_NAME Name of the CLEAR_CUBE job defined in the PBCS UI. ·PRE_LOAD_RUN_RULE: Determines whether to run a business rule before data load. This is useful if you want to selectively clear an intersection of the cube, by passing run time parameters. The CLEAR_CUBE job doesn’t let you choose the selective slice of the cube that needs to be cleared. ·PRE_LOAD_RULE_NAME Name of the pre-load business rule as defined/deployed from the Calc Manager. ·PRE_LOAD_RULE_PARAMS Run time parameters for pre-load business rule. Format: varname:value In case of multiple parameters, separate each parameter by comma delimiters. eg: var1:value1,var2:value2 Post-Load Operations: ·POST_LOAD_RUN_RULE: Determines whether to run a business rule after data load. It is useful if one intends to aggregate or copy data after data is loaded by passing run time parameters. ·POST_LOAD_RULE_NAME Name of the post-load business rule as defined/deployed from the Calc Manager. ·POST_LOAD_RULE_PARAMS Run time parameters for post-load business rule. Format: varname:value In case of multiple params, separate each param by comma delimiters. 
e.g. var1:value1,var2:value2.

Sample Process Logging:

Conclusion: This concludes the initial setup and will get you up and running with the ODI 12c Knowledge Module for PBCS/EPBCS to load data. In the next blog, I will cover additional Knowledge Modules with features to extract data. For any troubleshooting, queries, or suggestions, do not hesitate to contact me: thejas.b.shetty@oracle.com

Author: Thejas B Shetty (Oracle SSI) Safe Harbor Statement: The following article represents the views of the author only & Oracle does not endorse it. It is intended for information purposes only, and...

Data Integration

New Releases for Oracle Stream Analytics: Data Sheet Now Available

More than ever before, companies across most industries are challenged with handling large volumes of complex data in real time. The quantity and speed of both raw infrastructure and business events are growing exponentially in IT environments. Mobile data, in particular, has surged due to the explosion of mobile devices and high-speed connectivity. High-velocity data brings high value, so companies are expected to process all their data quickly and flexibly, creating a need for the right tools to get the job done. In order to address this need, we have released a new version 18.1 of Oracle Stream Analytics (OSA). The product is now available in three ways:
In the cloud as part of Oracle Data Integration Platform Cloud
On premises as Oracle Stream Analytics
On premises as part of Oracle GoldenGate for Big Data
To try OSA for yourself, you can download it here. The OSA product allows users to process and analyze large-scale real-time information by using sophisticated correlation patterns, enrichment, and machine learning. It offers real-time actionable business insight on streaming data and automates action to drive today's agile businesses. The Oracle Stream Analytics platform targets a broad variety of industries and functions. A few examples include:
Supply Chain and Logistics: OSA provides the ability to track shipments in real time, alerts for any possible delivery delays, and helps to control inventory based on demand and shipping predictions.
Financial Services: OSA performs real-time risk analysis, monitoring, and reporting of financial securities trading, and calculates foreign exchange prices.
Transportation: OSA can create passenger alerts and detect the location of baggage, mitigating some common difficulties associated with weather delays, ground crew operations, airport security issues, and more.
One of the most compelling capabilities of OSA is how it democratizes the ability to analyze streams with its Interactive Designer user interface. It allows users to explore real-time data through live charts, maps, visualizations, and graphically built streaming pipelines without any hand coding. Data can be viewed and manipulated in a spreadsheet-like tabular view, allowing users to add, remove, rename, or filter columns to obtain the desired result. Perhaps most importantly, users can get immediate feedback on how patterns applied to the live data create actionable results. A few other notable capabilities include the ability to analyze and correlate geospatial information in streams and graphically define and introspect location data and rules, the predictive analytics available based on a wide range of machine learning models, and the reusable Business Solution Patterns from which users can select a familiar solution analysis. Other Data Integration products complement OSA to process information in real time, including Oracle GoldenGate and Oracle Data Integration Platform Cloud. In particular, Oracle Stream Analytics is integrated with the GoldenGate change data capture platform to process live transaction feeds from transactional sources such as OLTP databases, detect patterns in real time, and prepare and enrich data for analytical stores. To get a deeper look at the features and functionalities available, check out this new data sheet. You can learn even more about Oracle Stream Analytics at this product's Help Center page.

More than ever before, companies across most industries are challenged with handling large volumes of complex data in real-time. The quantity and speed of both raw infrastructure and business events...

Data Integration

Data Integration Platform Cloud (DIPC) 18.2.3 is Available!

Data Integration Platform Cloud (DIPC) 18.2.3 is now available! Here are some of the highlights: DIPC boasts an expanded, intuitive, enhanced user experience, which includes the Data Preparation and Data Synchronization Tasks and the ODI Execution Task, providing better management, execution, and monitoring. There is also a continued focus on overall data integration productivity and ease of use on one cloud platform, all within one single pane of glass. The Synchronize Data Task promises new features to enable schema-to-schema data synchronization in 3 clicks! This allows for more control during an initial load and/or during ongoing synchronizations, and better monitoring is also possible. The new Data Preparation Task provides out-of-the-box, end-to-end data wrangling for better data. The task allows for simple ingest and harvesting of metadata for easy data wrangling, including integrated data profiling. The new ODI Execution Task provides the ability to easily execute and monitor Oracle Data Integrator scenarios! This task supports a hybrid development mode where one can develop or design on-premises, then import the work into DIPC and integrate it with DIPC Tasks; it is all executed and monitored in the DIPC Console. Additionally, there are also enhancements to the Remote Agent, enabling on-premises to on-premises use cases such as custom Oracle to Oracle replication. And the Data Catalog provides new data entities and new connections as well, furthering its underpinnings for governance at every enterprise! Want to learn more? Visit the DIPC site and check out this short 2-minute video on DIPC!

  Data Integration Platform Cloud (DIPC) 18.2.3 is now available!   Here are some of the highlights: DIPC boasts an expanded Intuitive Enhanced User Experience which includes: Data Preparation and Data...

Oracle Named a Leader in 2018 Gartner Magic Quadrant for Enterprise Integration Platform as a Service for the Second Year in a Row

Oracle announced in a press release today that it has been named a Leader in Gartner’s 2018 “Magic Quadrant for Enterprise Integration Platform as a Service” report for the second consecutive year. Oracle believes that the recognition is testament to the continued momentum and growth of Oracle Cloud Platform in the past year.   As explained by Gartner, the Magic Quadrant positions vendors within a particular quadrant based on their ability to execute and completeness of vision separating into the following four categories: Leaders execute well against their current vision and are well positioned for tomorrow. Visionaries understand where the market is going or have a vision for changing market rules, but do not yet execute well. Niche Players focus successfully on a small segment, or are unfocused and do not out-innovate or outperform others. Challengers execute well today or may dominate a large segment, but do not demonstrate an understanding of market direction.   Gartner views integration platform as a service (iPaaS) as having the “capabilities to enable subscribers (aka "tenants") to implement data, application, API and process integration projects involving any combination of cloud-resident and on-premises endpoints.” The report adds, “This is achieved by developing, deploying, executing, managing and monitoring integration processes/flows that connect multiple endpoints so that they can work together.”   “GE leverages Oracle Integration Cloud to streamline commercial, fulfilment, operations and financial processes of our Digital unit across multiple systems and tools, while providing a seamless experience for our employees and customers,” said Kamil Litman, Vice President of Software Engineering, GE Digital. “Our investment with Oracle has enabled us to significantly reduce time to market for new projects, and we look forward to the autonomous capabilities that Oracle plans to soon introduce.”   Download the full 2018 Gartner “Magic Quadrant for Enterprise Integration Platform as a Service” here.   Oracle recently announced autonomous capabilities across its entire Oracle Cloud Platform portfolio, including application and data integration. Autonomous capabilities include self-defining integrations that help customers rapidly automate business processes across different SaaS and on-premises applications, as well as self-defining data flows with automated data lake and data prep pipeline creation for ingesting data (streaming and batch).   Oracle also recently introduced Oracle Self-Service Integration, enabling business users to improve productivity and streamline daily tasks by connecting cloud applications to automate processes. Thousands of customers use Oracle Cloud Platform, including global enterprises, along with SMBs and ISVs to build, test, and deploy modern applications and leverage the latest emerging technologies such as blockchain, artificial intelligence, machine learning and bots, to deliver enhanced experiences.   A Few Reasons Why Oracle Autonomous Integration Cloud is Exciting    Oracle Autonomous Integration Cloud accelerates the path to digital transformation by eliminating barriers between business applications through a combination of machine learning, embedded best-practice guidance, and prebuilt application integration and process automation.  
Here are a few key features: Pre-Integrated with Applications – A large library of pre-integration with Oracle and 3rd Party SaaS and on-premises applications through application adapters eliminates the slow and error prone process of configuring and manually updating Web service and other styles of application integration.  Pre-Built Integration Flows – Instead of recreating the most commonly used integration flows, such as between sales applications (CRM) and configure, price, quoting (CPQ) applications, Oracle provides pre-built integration flows between applications spanning CX, ERP, HCM and more to take the guesswork out of integration.  Unified Process, Integration, and Analytics – Oracle Autonomous Integration Cloud merges the solution components of application integration, business process automation, and the associated analytics into a single seamlessly unified business integration solution to shrink the time to complete end-to-end business process lifecycles.   Autonomous – It is self-driving, self-securing, and self-repairing, providing recommendations and best next actions, removing security risks resulting from manual patching, and sensing application integration connectivity issues for corrective action.   Discover OAIC for yourself by taking advantage of this limited time offer to start for free with Oracle Autonomous Integration Cloud.   Check here for Oracle Autonomous Cloud Integration customer stories.   Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.    

Oracle announced in a press release today that it has been named a Leader in Gartner’s 2018 “Magic Quadrant for Enterprise Integration Platform as a Service” report for the second consecutive year....

Data Integration

How to Increase Productivity with Self-Service Integration

By Kellsey Ruppel, Principal Product Marketing Director, Oracle
One of the most exciting innovations in integration over the last decade is arriving just in time to address the surge of productivity apps that need to be integrated into enterprises, including small- to medium-size businesses (SMBs). On a general scale, there are approximately 2,300 Software-as-a-Service (SaaS) apps in use by SMBs that need to be integrated. Line of business (LOB) users such as marketing campaign managers and sales managers are looking to perform quick and simple self-service integration of these apps themselves without the need for IT involvement - a huge benefit for SMBs who likely might not have a large IT department to lean on. Oracle Self-Service Integration Cloud Service (SSI) provides the right tools for anyone who wants to connect productivity apps such as Slack or Eventbrite into their SMB. For example, perhaps you are a Marketing Campaign Manager and want to receive an alert each time a new digital asset is ready for your campaign. Or you are a Customer Support Representative trying to automate the deployment of survey links when an incident is closed. Or maybe you are a Sales Manager who wants to feed your event attendees and survey respondents into your CRM. SSI has the tools to address all these needs and more for your SMB. For a comprehensive overview of Oracle Self-Service Integration Cloud Service, take a look at our ebook: Make Your Cloud Work for You. Oracle Self-Service Integration is solving these business challenges by:
Connecting productivity with enterprise apps - Addressing the quick growth of social and productivity apps that need to be integrated with enterprise apps.
Enabling self-service integration - Providing line of business (LOB) users the ability to connect applications themselves, with no coding, to automate repetitive tasks.
Recipe-based integration - Making it easier to work faster and smarter with modern cloud apps through an easy-to-use interface, a library of cloud application connectors, and ready-to-use recipes.
To learn more, we invite you to attend the webcast, Introducing Oracle Self-Service Integration, on April 18th at 10:00am PT. Vikas Anand, Oracle Vice President of Product Management, will discuss:
Integration trends such as self-service, blockchain, and artificial intelligence
The solutions available in Oracle Self-Service Integration Cloud Service
Register today!

By Kellsey Ruppel, Principal Product Marketing Director, Oracle One of the most exciting innovations in integration over the last decade is arriving just in time to address the surge of productivity...

Data Integration

How to Use Oracle Data Integrator Cloud Service (ODI-CS) to Manipulate Data from Oracle Cloud Infrastructure Object Storage

Guest Author: Ioana Stama - Oracle, Sales Consultant

Introduction
This article presents an overview of how to use Oracle Data Integrator Cloud Service (ODI-CS) to manipulate data from Oracle Cloud Infrastructure Object Storage. The scenario presented here loads the data from an object stored in Oracle Cloud Infrastructure into a table in Database Cloud Service (DBCS) and then moves the object to another storage container. We are going to showcase how ODI-CS can connect to Object Storage Classic with both RESTful Services and cURL commands.

About Object Storage by Oracle Cloud Infrastructure
Oracle Object Storage is an internet-scale, high-performance, durable storage platform. Developers and IT administrators can use this storage service to store an unlimited amount of data at a very low cost. With Oracle Object Storage, you can safely and securely use the web-based console to store or retrieve data directly from the internet or from within the cloud platform, at any time. Oracle Object Storage is agnostic to the data content type, which enables a wide variety of use cases. You can send your backup and archive data offsite, store data for Big Data analytics to generate business insights, or simply build a scale-out web application. The elasticity of the service enables you to start small and scale your application as needed, and you always pay only for what you use. Oracle Object Storage provides a native REST API, along with OpenStack Swift API compatibility and an HDFS plug-in. Oracle Object Storage also currently offers a Java SDK, as well as Console and Python CLI access for management.

Let's see how the containers and the target table look at the beginning. The source container: The target container: The target table:

First, let's prepare the topology for the REST services, the File technology, and the database. We are going to start with the REST services and create one connection for the source Object Storage container and one for the destination container.

Preparing the RESTful Services Topology
Go to the Topology tab in ODI Studio, right-click on RESTful Services, and select New Data Server. Give it a name, e.g. TestRest. In the REST Service endpoint URL field, enter the endpoint URL of the cloud container. This URL can be built according to the Oracle documentation or taken from the cloud dashboard, where it is already available. In order to connect, you have to use the cloud account user as in the picture below. After we save this connection, we are going to create two new physical schemas: one for the source Object Storage container and another for the destination Object Storage container. We create them under the same data server because both containers exist on the same Oracle Cloud Infrastructure. Right-click on the newly created data server and select New Physical Schema. The first one is for the TEXTFILE object in the ODICS container, the source one. The resource URL is also available in the Cloud Dashboard. Now we are going to go to the Operations tab. Here we define some operations in order to manipulate the object; as you can see, there is a list of methods from which we can pick. We defined operations for deleting, getting, and putting the object in the container. Let's test the service by pressing the Test Restful Service button. A pop-up window will open; here, by selecting the desired operation, the effective URL is built.
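For reference, the endpoint and resource URLs in this example follow the Swift-style Object Storage Classic pattern sketched below; the identity domain, container, and object names are placeholders, the same pattern appears later in the cURL commands, and the exact split between the Data Server endpoint and the Physical Schema resource path may differ in your setup.
REST Service endpoint URL (Data Server level): https://identity_domain.storage.oraclecloud.com/v1/Storage-identity_domain
Resource URL (Physical Schema level), e.g. for the TEXTFILE object in the source ODICS container: https://identity_domain.storage.oraclecloud.com/v1/Storage-identity_domain/ODICS/TEXTFILE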
In the test pop-up we can also add and modify the parameters. The Save Request Content button opens another pop-up that lets you select the location where you want to save the content retrieved from the object. We are going to do the same for the other container: a new physical schema will be created with another resource URL, and in its Operations tab we only defined operations for the GET and PUT methods. The operations for both objects are defined for the purpose of this demonstration.

Preparing the File Topology
In the Topology tab in ODI Studio, right-click on the File technology and select New Data Server. The host here is the name of the host where we are saving the file. Give it a name, e.g. JCS File. Please leave the JDBC connection in its default state and press the Save button. Right-click on the File data server just created and create a schema. Here we specify the path to the folder where the files we are going to use will be stored; we also have to specify the path of the folder where ODI is going to create its log and error files.

Preparing the Database Topology
In the Topology tab in ODI Studio, right-click on the Oracle technology and select New Data Server. Give the connection a name and specify the database user that you want to connect with; in this case storage is our database user. Please go to the JDBC tab and select the Oracle JDBC driver, and modify the URL according to your details. The next step is to reverse engineer the database table and the file; afterwards we create a mapping to load data from the file to the table.

Go to the Designer tab and then to the Models tab. Here, click on the arrow and select Create New Model. We choose the parameters according to the technology used. For the first model we select Oracle as the technology and DB_Storage_logical as the logical schema; after we do that, press Save. The next step is to reverse engineer the tables. Click on Selective Reverse-Engineering, select the FILE_STORAGE table, and press the Reverse Engineer button. Now we have to reverse engineer the file. Because the content is the same, we already have a file created. We create a new model and select File as the technology and the logical schema created in the Topology tab. After that, press the Save button. Next, right-click on the new model and select New Datastore. Give it a name, e.g. TEXT_DATA, and in the Resource Name field press the magnifying glass. Go to the path where we saved the file (the one mentioned in the physical schema). The next step is to go to the Files tab. Here we have to specify the type of file and the delimiters:
File format: Delimited
Header (if needed)
Record separator: e.g. Unix
Field separator: e.g. Tab
Press Save and go to the Attributes tab. Here you have to click on Reverse Engineer.

Preparing the mapping and the packages
Create a new project folder and go to Mappings. Right-click and select New Mapping. Give it a name, e.g. File_to_Oracle. Here, in the canvas, drag and drop the reverse-engineered file and the table, then connect them with the file as the source and the table as the target. Then press Save. The next step is to create the packages. We are going to have two packages: one where we call the RESTful services and one where we call the cURL commands.

RESTful services
Right-click on Packages and select New Package. Give it a name, e.g. Flow.
Here we are going to use the OdiInvokeRESTfulService component from the tool panel. We use it three times: once to get the data and save it in a file, once to put the file in the second Object Storage container, and once to delete the file from the source container. The flow is simple:
OdiInvokeRESTfulService to get the object's data and save it to a file.
The mapping that loads the data into the table.
OdiInvokeRESTfulService to put the object in the other container.
OdiInvokeRESTfulService to delete the object from the source container.
The OdiInvokeRESTfulService tool has different parameters in the General tab. Here we have to select the operation that we want to use. Also, in the Response File parameter we have to specify the location and the file where we want to save the content of the object. In the Command tab we can see the command that will be executed when we run the flow. The same applies to the other OdiInvokeRESTfulService steps. Let's run the workflow. We can see that the execution was successful, and we can see the data in the table. We can also check the containers and see that the object has been moved. The target container: The source container:

cURL commands
We are going to create a new package as we did with the previous one, but from the toolbox we select the OdiOSCommand tool. In the Command tab we write the cURL commands that you can find below (identity_domain and account_password are placeholders):
GET: curl -u cloud.admin:account_password https://identity_domain.storage.oraclecloud.com/v1/Storage-identity_domain/ODICS/TEXTFILE_CURL --output /u01/app/oracle/tools/home/oracle/files_storage/test_data.txt
PUT: curl -X PUT -F 'data=@/u01/app/oracle/tools/home/oracle/files_storage/test_data.txt' -u cloud.admin:account_password https://identity_domain.storage.oraclecloud.com/v1/Storage-identity_domain/ODICS_ARCHIVE/TEXTFILE_CURL
DELETE: curl -X DELETE -u cloud.admin:account_password https://identity_domain.storage.oraclecloud.com/v1/Storage-identity_domain/ODICS/TEXTFILE_CURL
The steps are:
OdiOSCommand to use cURL to get the content of the object.
The mapping to load data into the table.
OdiOSCommand to use cURL to put the new object in the second container.
OdiOSCommand to use cURL to delete the object from the source container.

Conclusion
Oracle Data Integrator Cloud Service (ODI-CS) is able to manipulate objects in Oracle Cloud Infrastructure Classic. You can leverage ODI-CS capabilities with RESTful Services, as well as commands written in any language, in order to integrate all your data.

Guest Author:  Ioana Stama - Oracle, Sales Consultant   Introduction This article presents an overview on how to use Oracle Data Integrator Cloud Service (ODI-CS) in order to manipulate data from Oracle...

Data Integration

Oracle GoldenGate Adapters for Base24 version 12c is released

Oracle GoldenGate Product Management is pleased to announce the release of Oracle GoldenGate Adapters for Base24 version 12c. Oracle GoldenGate Adapters for Base24 version 12c is available for both HPE Integrity NonStop Itanium and HPE Integrity NonStop x86 chipset architectures. GoldenGate with BASE24 offers comprehensive data movement and management solutions by enabling real-time data capture and delivery between processing systems. Oracle GoldenGate Adapters for Base24 12c contains three modules for Active-Active data synchronization operations. They are as follows:
D24: Enables bi-directional, real-time synchronization of customer and transaction data continuously, also called an Active-Active configuration. In the event of an outage on one system, D24 processes the full transaction load on the remaining machine, ensuring continuous availability and no data loss.
N24: Coordinates the notifications associated with full refresh processing and eliminates the operational labor and coordination of loading newly refreshed files into production between two BASE24 sites.
T24: Helps move structured (tokenized/segmented) data from BASE24 to heterogeneous targets and formats (file formats, databases, or big data).
Additional information: Download Oracle GoldenGate Adapters for Base24 from My Oracle Support. Note: look for Patch # 27024312 for HP NonStop (Guardian) Itanium and HP NonStop (Guardian) on x86. See the Documentation for Oracle GoldenGate Adapters for Base24 and the Certification Matrix for Oracle GoldenGate Adapters for Base24 12c for any platform support clarification. For more information on Base24 or ACI products, you may contact ACI Support.

Oracle GoldenGate Product Management is pleased to announce the release of Oracle GoldenGate Adapters for Base24 version 12c. Oracle GoldenGate Adapters for Base24 version 12c is available for both...

Data Integration

Synchronize Data between Source and Target in 2 Clicks!

The recently launched Data Integration Platform Cloud (DIPC) provides capabilities for various data integration requirements, covering data transformation, integration, replication, and governance. DIPC introduces the concept of Elevated Tasks to hide the complexity of the underlying data integration processes used to achieve an end-to-end use case. Synchronize Data is the first such elevated task; it allows you to synchronize a source schema with a target schema with no effort. Synchronize Data allows you to keep data in your target database schema in sync with a production database, so that the target database can be used for real-time business intelligence and reporting without affecting production system performance. Let us quickly understand the challenges in implementing such a data synchronization solution. The entire process can be achieved in two steps: first perform an initial load of the existing source data to the target, and then configure a replication process to replicate all the ongoing transactions to the target. Let's look at the steps required if you were implementing this using Oracle Data Integrator (ODI) and Oracle GoldenGate (OGG):
1. Create ODI mappings for each of the tables in the source schema.
2. Create and run an ODI procedure to retrieve the System Change Number (SCN) from the database. The data up to this SCN will be loaded by the initial load in ODI, and all transactions after this SCN will be replicated by OGG.
3. Run the OGG extract process to capture transactions from the source database.
4. Run the ODI mappings created in step 1 to perform the initial load up to the SCN.
5. Run the OGG pump to push trail files.
6. Run the replicat process to start applying transactions from the SCN checkpoint.
You will notice that there are a number of intricate steps involved here that must be performed in the right order, across different products, and that require a handshake between ODI and OGG. To implement all of this you would need a deep understanding of both ODI and OGG, and it may take several days, if not weeks, to achieve the process end to end. Additionally, monitoring the progress of each of the steps and getting consolidated statistics is another challenge. With the new Synchronize Data task in DIPC, this entire operation is now done with a few clicks, without worrying about the complexity of the underlying steps. All you need to do is create a Synchronize Data task, which entails specifying the source and target schema, and run it. DIPC takes care of creating the appropriate ODI scenarios, retrieving the SCN in ODI, passing the SCN from ODI to OGG, and initializing and running the relevant OGG processes – OGG extract, OGG pump, and OGG replicat. DIPC also provides a central monitoring capability so that you can view the ongoing progress of each of the steps and their statistics. Let us go through the steps for synchronizing data to see how easily you can do it in DIPC. First go to the DIPC home page and click on Create Synchronize Data Task. On the task creation screen enter the source and target information and click "Save and Run". DIPC will save the task and kick off the execution.
As part of the execution, DIPC will perform the following operations:
1. Create the ODI scenario to create the tables in the target schema and perform the initial load.
2. Retrieve the System Change Number (SCN) from the database.
3. Run the OGG extract process to capture transactions from the source database.
4. Run the ODI scenario to perform the initial load up to the SCN.
5. Run the OGG pump to push trail files.
6. Pass the SCN retrieved by ODI to the OGG replicat process.
7. Start the replicat process to apply transactions from the SCN.
Congratulations! You have created and executed the Synchronize Data Task. You can see the corresponding job status on the Jobs page. Click on the job to see the status and statistics of the different steps; it provides individual process-level and consolidated statistics on inserts, updates, duration, and lag. You can also view details of the underlying process that is executed for each step. As shown above, DIPC has drastically simplified the data synchronization use case between two databases. Now anybody can implement such an end-to-end scenario without needing the deep expertise previously required or the juggling of multiple underlying products. Stay tuned for upcoming blogs on other exciting features introduced in DIPC. Meanwhile, check the product blogs to get more information: Data Integration Platform Cloud and Getting a Data Integration Platform Cloud (DIPC) Trial Instance.
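As an aside, for readers curious about what DIPC automates on the GoldenGate side of that handshake, the lines below are a rough, hedged sketch only: the group names, trail path, and CSN value are purely illustrative, they are not the commands DIPC actually issues, and exact GGSCI syntax varies by GoldenGate version, database, and capture mode.

# Source side: create a capture (extract) group and a local trail, then start it.
ggsci <<'EOF'
ADD EXTRACT ext1, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
START EXTRACT ext1
EOF
# Target side: start the replicat so it applies only transactions committed after
# the SCN captured before the initial load (4351749 is a placeholder value).
ggsci <<'EOF'
ADD REPLICAT rep1, EXTTRAIL ./dirdat/aa
START REPLICAT rep1, ATCSN 4351749
EOF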

The recently launched Data Integration Platform Cloud (DIPC) provides capabilities for various data integration requirements covering data transformation, integration, replication and governance. DIPC...

Data Integration

Connecting the Dots Between Data and Artificial Intelligence

Podcast: Connecting the Dots between Data and AI

Artificial Intelligence
Among the many definitions of Artificial Intelligence (AI), there is one trait that is never compromised: the "intelligence" should always grow and never be static. In other words, the decision-making ability of any AI platform should keep learning and become more sophisticated. As the AI platform encounters more data, it keeps refining its decision algorithm. This process, usually called training the AI, is one of the trickier and more exciting parts of putting together an AI solution. The more data the AI model encounters, the more the AI platform is trained, which in turn makes its decisions more relevant and closer to the real world. The best solutions that incorporate AI absorb all the data they are exposed to, sifting through it to pick and choose what is relevant and adds value to maximizing the probability of fulfilling their reason to exist, be it surfacing the next best TV show to watch or sending an emergency signal to a maintenance control room so a machine part can receive preventive maintenance.

Data From All Over to Train the AI
Data is crucial to train the AI models. Data integration provides the necessary technologies to access the data that is required to successfully maintain and grow Artificial Intelligence solutions.

Data Access
Even at first glance, the volume and variety of data are mind-boggling. Just getting a handle on the different types and sources of data becomes a challenge of scale and complexity. There is data being produced by machines (log data), and there is data being produced by humans and healthcare devices. Then there is data being generated by business systems. There are video files, audio formats, JSON files, and structured, unstructured, and semi-structured data. The list goes on and on. Data integration helps with bringing data from all these sources onto the platform of choice where the AI "brain" sits, refining and making decisions.

Data Latency and Deep Learning
Without getting too technical, there are two other important considerations for making AI more powerful. The first is how recent and up-to-date the information the AI uses to make decisions is; real-time data streaming capabilities fulfil this need. The second is the ability to mine, transform, and iteratively move and sift through large data sets; this comes from classic data migration and transformation capabilities. Together, these capabilities stream, feed, and extract insights from data for the AI models. For more information on how to ensure your AI-powered platforms and devices have the best access to data, read more about Data Integration Platform Cloud here.

Podcast: Connecting the Dots between Data and AI Artificial Intelligence Among the many definitions of Artificial Intelligence (AI) there is one trait that is never compromised. That the...

Data Integration

Data Integration Platform Cloud (DIPC) Home Page Navigation

Guest Author: Jayant Mahto - Senior Manager, Data Integration Product Management - Oracle

Home Page Navigation
The recent release of Data Integration Platform Cloud (DIPC) provides easy access to data integration tasks by hiding the complexity behind an easy-to-use interface. The home page layout has been created with this ease of use in mind. All the functions for data transformation, integration, replication, and governance are accessible with easy navigation within a few clicks.

Home Page Layout
Let us look at the different regions of the home page:
The top blue bar is for quickly accessing data integration actions.
Notifications and user information are available in the top right area.
The left bar is used for quickly jumping to different areas for further details.
The bottom left panel shows high-level information for a quick health check.
The bottom right panel gives further details for the item selected in the left panel.
Now let us look at these areas in more detail.

Quick Jump to Home Page
Clicking on the Home icon at the top left takes the user to the home page layout at any time. The top blue bar is used for quickly accessing the data integration features.

Summary and Details Panel
The Summary panel gives you high-level information for Agents, Connections, and Tasks. This is very useful for getting a quick health check of the data integration environment. By clicking on the summary information, users can get further details on the right side; in the following example the connection list is shown on the right side. By clicking on the details, the user can jump to the corresponding page. In this example the user clicks on one of the connections and jumps to the Connections page. Note the left arrow next to the connection name: clicking on it takes the user back to the previous page. Similar back navigation is available on other pages as well.

Notifications
This area provides alerts showing the number of notifications. Clicking on a notification takes the user to the notifications page, and from there to the detailed Jobs page for further investigation. Notifications are based on the Job Policies rules; refer to the section below for policy creation.

Policy Creation for Notification
In order to receive notifications, the user needs to create policies with notification rules. Policies can be created from the top blue bar or the left navigation bar on the Home page. The following information is needed for a policy:
Name: The name of the policy appears in the notification, indicating which policy is responsible for it.
Description: A brief description of the policy.
Severity: High/Medium/Low; used for sorting and filtering notifications.
Enable: A policy is in effect only if it is enabled. Sometimes a policy is created but not enabled until needed.
Policy Metrics: Multiple conditions, with the option to require all or any of them to be satisfied before notifications are generated. In the following example, notifications will be generated if a job fails or runs for more than 30 minutes.
Stay tuned for upcoming blogs on other exciting features introduced in DIPC. Meanwhile, check the product blogs to get more information: Data Integration Platform Cloud and Getting a Data Integration Platform Cloud (DIPC) Trial Instance.

Guest Author:  Jayant Mahto - Senior Manager, Data Integration Product Management - Oracle   Home Page Navigation The recent release of Data Integration Platform Cloud (DIPC) provides an easy access to...

Data Integration

Using Oracle Data Integrator (ODI) for Big Data to Load CSV Files Directly into HIVE Parquet Format

Guest Author: Ionut Bruma - Oracle, Senior Sales Consultant

Introduction
This article presents an overview of how to use Oracle Data Integrator (ODI) for Big Data with Hive Parquet storage. The scenario shows how we can ingest CSV files into Hive and store them directly in Parquet format using the standard connectors and Knowledge Modules (KMs) offered by Oracle Data Integrator for Big Data. For those new to this extension of ODI, Oracle Data Integrator for Big Data brings advanced data integration capabilities to customers who are looking to implement a seamless and responsive Big Data Management platform. For the practical scenario we used the Big Data Lite VM, which can be downloaded from OTN at the following link: http://www.oracle.com/technetwork/database/bigdata-appliance/oracle-bigdatalite-2104726.html

Overview of Hive
The Apache Hive software projects a structure over the large datasets residing in distributed storage, facilitating querying and managing this data using a SQL-like language called HiveQL. At the same time, this language also allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express the logic in HiveQL. Let me walk you through how to set up ODI for this use case.

Preparing the File Topology
Create a File Data Server: use the Topology tab of ODI Studio, right-click the File technology, and click New to create a new Data Server. In the Definition panel, enter a Name, e.g. CSV Source. In the JDBC panel, leave the default driver details:
JDBC driver: com.sunopsis.jdbc.driver.file.FileDriver
JDBC driver URL: jdbc:snps:dbfile
Press the Test Connection button. A pop-up should show that the connection has been successful, as below. Go back to the Topology and right-click the newly created Data Server. Choose New Physical Schema. The new configuration window will appear. In the Definition tab enter the connection details:
Directory (Schema): <path to the folder containing the CSV files>, e.g. /home/oracle/movie/moviework/odi/Flat_Files
Directory (Work Schema): repeat the same string here.
Press Save. An information screen will pop up; press OK. Then expand the Logical Architecture tab and find the File technology. Right-click on it and click New Logical Schema to associate it with the newly created Physical Schema. In the Definition tab select the Context (Global in our case, but it can be Production, Development, and so on) and associate the context with the Physical Schema previously created. Press Save.

Creating the Model reflecting the CSV structure
Go to the Designer window and expand Models, right-click on the folder, and click New Model. In the Definition tab fill in the Name, Technology, and Logical Schema. For the Technology choose File, and then choose the Logical Schema you have just created for this technology. Click the Reverse Engineer button to retrieve all the files. You should see them under the model created. In order to preview the data inside the CSV file, right-click on the datastore and choose View Data.

Similar to the creation of the File Data Server in the Topology tab, create a Hive Data Server. Fill in the Name, Hive for example. Under the Connection, specify the user (the authentication will be internal to the Hive engine; we will specify password=default in the driver specifications). Fill in the metastore URI and go to the JDBC tab. In the JDBC tab fill in the correct driver and URL for the Hive technology, as shown in the following example.
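The original screenshot is not reproduced here; as a rough guide only, the values below reflect the Big Data Lite VM defaults assumed in this walkthrough (HiveServer2 on localhost, port 10000, default database, user oracle). Treat the driver class, port, database, and credentials as assumptions to adjust for your own environment.
JDBC driver: org.apache.hive.jdbc.HiveDriver
JDBC driver URL: jdbc:hive2://localhost:10000/default
Optionally, the same connection can be sanity-checked outside ODI from the command line:
beeline -u "jdbc:hive2://localhost:10000/default" -n oracle -p default -e "SHOW TABLES;"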
We have everything emulated inside the Big Data Lite environment, so we will use localhost and the password default (for default authentication). Click Test Connection; the connection test should be successful. Now, from the new Hive Data Server (which is actually the connection to the server) we will create a new schema, or in ODI language a new Physical Schema. Right-click on the data server and choose New Physical Schema. Fill in the details of the schema (the schema should already exist) that you want to include in the Topology. You can use the same schema for both the Work and Staging schema. Next, expand the Logical Architecture window and locate the Hive technology. Right-click on it and create a new Logical Schema that will be linked to the Hive schema. Give it a name and associate the Logical Schema with the Physical Schema as shown below (a dropdown menu is available under the Physical Schemas area).

Creating the Model reflecting the Hive structure
If using reverse engineering, then a table should already exist in Hive. Here is a table creation DDL example:
CREATE TABLE Test (ID int, Name String, Price String) STORED AS PARQUET;
Log into Hive and run this code. From the OS command line, run the beeline command as shown below and set the database where you want to deploy. Return to ODI Studio, go to the Designer window and expand Models, right-click on the folder, and click New Model. In the Definition tab fill in the Name, Technology, and Logical Schema. For the Technology choose Hive, then choose the logical schema you have just created for this technology. Next, go to the Reverse Engineer tab and choose the Customized radio button. Choose the proper knowledge module, RKM Hive, and click Reverse Engineer. Optionally, you can choose a mask for the tables you want to reverse engineer; otherwise the entire schema will be reversed. The reverse engineering process can be monitored inside the Operator Navigator.

If you need to forward engineer your Hive datastore, you will need to create it and fill it out manually. Right-click the Hive data model and click New Datastore. Fill in the Name with any name. For the Datastore Type pick Table. Navigate to the Storage attributes and fill in Table Type: Managed, Storage Type: Native, Row Format: Built-In, Storage Format: PARQUET, as shown in the picture below. Navigate to the Attributes tab and fill in the attributes of the table. Next, create the mapping that will automatically convert the CSV file into a Hive Parquet-stored structure. Click the project you need to create the mapping in, expand the folder, and right-click on Mappings. Click on New Mapping, give it a name, and start building the mapping. Our mapping is a one-to-one mapping because we only load the data into Hive, but you can also perform any transformation ODI is capable of before loading the data into the Hive Parquet table.

Navigate to the Physical tab and make the following adjustments. Select the access point (MCC_AP in my case) and configure the Loading Knowledge Module as shown in the following picture. It is important that LKM File to Hive LOAD DATA is used (and NOT LKM File to Hive LOAD DATA Direct). In addition, the FILE_IS_LOCAL (outside the cluster) option must be set to True, otherwise it will look for the file in HDFS instead of taking the CSV file from outside the cluster. Configure the integration part (IKM settings): select the target table (this should be the Hive Parquet one), open the Integration Knowledge Module tab, and choose the IKM Hive Append.GLOBAL KM.
Next, if the table does not exist (i.e. you used the forward engineering approach described above), set the CREATE_TARG_TABLE option to TRUE; otherwise keep it FALSE. If you want to truncate the table on each run, set the TRUNCATE option to TRUE. Run the mapping and review the results in the Operator Navigator. Examine the Hive table and look inside the Parquet file to see the data. Data_lake.db is the name of my database and mccmnc is the name of my table; you should replace them with your own names.

Conclusion
ODI offers out-of-the-box integration with Big Data technologies such as Hive, Hadoop, Spark, Pig, etc. When building a data lake or a data warehouse, many files come as flat files in different formats like CSV, TXT, or JSON and have to be injected into HDFS/Hive in formats like Parquet. ODI is able to build a reusable flow in order to automatically transfer the CSV files, as they come from the sources, directly into the target Hive tables. This post presented a step-by-step guide on how to set up the right topology reflecting both the CSV source and the Hive Parquet table target. We then showed two ways of building the target model (reverse engineering an existing table or forward engineering a structure created in ODI) and, furthermore, how to choose the proper Knowledge Modules for the loading and integration parts.

Guest Author:  Ionut Bruma - Oracle, Senior Sales Consultant   Introduction This article presents an overview of how to use Oracle Data Integrator (ODI) for Big Data with Hive parquet storage. The...

Data Integration

How to get IDCS OAuth details?

Some of you might not be aware of the process to get the OAuth keys for configuring Data Integration Platform Cloud (DIPC) with on-premises agents. Keeping this in mind, I have provided step-by-step details, including screenshots, for getting the OAuth keys. You need the following four parameters for connecting an on-premises agent to the DIPC server with OAuth authentication:
idcsServerUrl
agentIdcsScope
agentClientId
agentClientSecret
You automatically get the idcsServerUrl and the IDCS scope when a DIPC instance is provisioned. However, you will not have the Client ID and Client Secret parameters needed to configure the agent. To get the Client ID and Client Secret, you need to perform the following three steps:

1. Log in to the Oracle Identity Cloud Service (IDCS) Console
a) Use your browser to go to 'http://cloud.oracle.com'
b) Click on 'Sign-in'
c) Select 'Cloud Account with Identity Cloud Service' and click "My Services"
d) Log in to IDCS

2. Create a Trusted Application
a. Click on the 'Applications' tab within the IDCS menu
b. Click '+ Add' and select 'Trusted Application'
c. Provide a name for the Trusted Application and click 'Next'. Note: the other parameters do not necessarily need to be filled in.
d. Select 'Configure this application as a client now' and then check Grant Permissions for 'Resource Owner', 'Client Credentials', 'JWT Assertion', and 'Refresh Token'.
e. Click 'Add Scope' in the section below and select the DIPC application URL that you have been provisioned. Note: this URL will be used as the agentIdcsScope parameter.
f. The trusted application will be listed in the scope.
g. Click 'Next', then select 'Skip for later' and click 'Next' again in the 'Resources' sub-section.
h. Click 'Finish' in the 'Authorization' sub-section.
i. This will list the trusted application with its 'Client ID' and 'Client Secret'. Note down the Client ID and Client Secret values for future reference; they will be used later to authenticate your DIPC remote agent.

3. Activate the newly created 'Trusted Application'
a. Select the Trusted Application from the Applications menu.
b. Click on the 'Activate' button.
c. Click on the 'Activate Application' confirmation button in the pop-up box.
d. You will get a confirmation message once the application is activated. Note: in case you need to regenerate the Client ID and client token, open the Trusted Application and click the 'Generate Access Token' button.
e. The new application will be listed as a 'Trusted Application' in the 'Applications' section.
This completes the creation of the IDCS OAuth application, and the credentials can be used in other Oracle Cloud applications.

Reference and Additional Information
DIPC Product Documentation Homepage
Setting up DIPC Agents to understand what each configuration parameter means or to understand how to manually edit the configuration file to change the port number or heartbeat interval
Hybrid Data Integration in 3 Steps using Oracle DIPC Blog
DIPC certification matrix to know the supported versions of Operating Systems and Source/Target Applications.
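Once you have the Client ID and Client Secret, you can optionally sanity-check them before configuring the agent by requesting a token directly from the IDCS OAuth2 token endpoint. The following is a hedged sketch only; the IDCS URL, credentials, and scope are placeholders (the scope format mirrors the agentIdcsScope example in the next post):

curl -s -X POST "https://idcs-xxxxxxxx.identity.oraclecloud.com/oauth2/v1/token" \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -u "your_client_id:your_client_secret" \
     -d "grant_type=client_credentials&scope=https://your-dipc-host:443external"

A JSON response containing an access_token field confirms that the Client ID/Secret and scope are valid.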

Some of you might not be aware of the process to get the OAuth keys for configuring Data Integration Platform Cloud (DIPC) with On-Premises agents. Keeping this in mind, I have given stepwise details...

Data Integration

Hybrid Data Integration in Three Simple Steps using Oracle Data Integration Platform Cloud (DIPC)

Background
We are currently living in a real-time world, and data movement is expected to happen in real time across the various enterprise systems. Enterprises face serious challenges with data integration between on-premises systems and cloud systems. Integration is never an easy problem to solve, considering the complexity of handling the knowledge embedded in various complex applications and of managing data risks for the organization across these hybrid systems. Oracle Data Integration Platform Cloud (DIPC) simplifies data integration with the straightforward concept of an on-premises agent that can integrate with any type of data source or target. An on-premises DIPC agent can broadly be used in the following two data integration scenarios:

a. On-Premises to Cloud Data Integration and Vice Versa
In this integration, a DIPC agent is installed in the on-premises data center and communicates with the DIPC host on the Oracle Cloud. In this scenario, the data is transferred from the DIPC agent to the DIPC host and then to the Oracle Database Cloud Service.

b. On-Premises to On-Premises Data Integration
In this integration, a DIPC agent is installed in the on-premises data center and communicates with the DIPC host on the Oracle Cloud only for manageability aspects. In this scenario, the data is transferred from the source Oracle database to the target Oracle database through the DIPC agent; no customer data is transferred out of the on-premises environment to the Oracle Cloud.

3 Simple Steps for Hybrid Data Integration
In this section, you will learn how to a) download and install, b) configure, and c) run a DIPC agent on your on-premises system to integrate across on-premises or hybrid systems.

Step 1: Download and Install the DIPC Agent
i. Log on to Data Integration Platform Cloud.
ii. From the left navigation pane, click Agents.
iii. From the Download Installer menu, select the available agent package based on the operating system.
iv. In the Download Agent Installer window, click OK.
v. Once the agent is downloaded, unzip it into the directory where you expect the agent to run. Note: this location is referred to as [AGENT_UNZIP_LOC] in the sections below.
vi. This completes the DIPC agent download and installation step.

Step 2: Configuring the On-Premises Agent
i. You can find the agent configuration script in the following directory:
Command> cd [AGENT_UNZIP_LOC]/dicloud
You will find the scripts dicloudConfigureAgent.bat (on Windows) and dicloudConfigureAgent.sh (on Unix).
ii. You may configure the agent in one of two ways:
a) Basic auth type: run the following:
Command> ./dicloudConfigureAgent.sh [agent_name] -dipchost=[dipc.example.host.com] -dipcport=[port_no] -user=[diuser] -password=[dipassword] -authType=BASIC
e.g.: dicloudConfigureAgent.bat dipcagent001 -recreate -dipchost=135.169.165.107 -dipcport=9073 -user=weblogic -password=password123 -authType=BASIC
b) OAuth type: run the following:
Command> ./dicloudConfigureAgent.sh [agentInstanceDirectory] -recreate -debug -dipchost=[dipc.example.host.com] -dipcport=[port] -user=[diuser] -password=[dipassword] -authType=[BASIC/OAUTH2] -idcsServerUrl=[idcs_server_url] -agentIdcsScope=[agent_IDCS_Client_Scope] -agentClientId=[Agent_IDCS_clientID] -agentClientSecret=[Agent_IDCS_clientSecret]
e.g.: ./dicloudConfigureAgent.sh -dipchost=dipcsfungerc4-dipcsv2meteri01.uscom-central-1.c9dev1.oc9qadev.com -dipcport=443 -idcsServerUrl=https://idcs-b8bdf957678a4d91b80801964c406828.identity.c9dev1.oc9qadev.com -agentIdcsScope=https://235C1F73C5B54068A7C02A202D6B2B42.uscom-central-1.c9dev1.oc9qadev.com:443external -user=firstname.lastname@ORACLE.COM -password=Password123# -agentClientId=fb035f53b22a450982ec22551cccfdd6 -agentClientSecret=62eda7dc-1991-47ca-8205-fde062c62ab8
Note a: You need to get the OAuth details from Oracle Identity Cloud Service (IDCS) prior to configuring the DIPC agent in OAuth mode. Refer to the blog https://blogs.oracle.com/dataintegration/how-to-get-idcs-oauth-details for step-by-step details.
Note b: If you need to configure a remote agent with Autonomous DIPC, then you need to create a Trusted Application using the Oracle IDCS Admin Console and assign "OCMS App" in the allowed scope.
iii. This completes the DIPC agent configuration.

Step 3: Starting the Agent
i. Go to the DIPC agent instance directory:
Command> cd [AGENT_UNZIP_LOC]/dicloud/agent/[agent_name]/bin/
ii. To start the DIPC agent:
Command> ./startAgentInstance.sh
iii. If you need to stop the DIPC agent for any maintenance, run:
Command> ./stopAgentInstance.sh
Note: If you are using Autonomous DIPC, then you need to import the certificate from the browser into the cacerts keystore under the DIPC agent's JAVA_HOME directory by running the keytool import command. For example: $JAVA_HOME/bin/keytool -import -alias ocptest -keystore /u01/JDK/jdk1.8.0_171/jre/lib/security/cacerts -file /scratch/ocp_test.crt

Reference and Additional Information
DIPC certification matrix to know the supported versions of Operating Systems and Source/Target Applications.
DIPC Documentation Homepage
Setting up DIPC Agents to understand what each configuration parameter means or to understand how to manually edit the configuration file to change the port number or heartbeat interval

Background We are currently living in a real-time world and data movement is expected to happen real-time across the various enterprise systems. Enterprises are having serious challenges in...

GoldenGate Solutions and News

Oracle GoldenGate Studio 12.2.1.3+ new enhancements

In this series of blogs around the Oracle GoldenGate Foundation Suite (OGFS) products, I last covered the features launched for GoldenGate Veridata. I am happy to announce that we released the latest Oracle GoldenGate Studio Bundle Patch (12.2.1.3.180125) last week. The GoldenGate Studio Bundle Patch contains the certification of the Oracle GoldenGate 12.3 Classic architecture and of the new CDC capture for SQL Server. You can now design and deploy your GoldenGate artifacts to the GoldenGate 12.3 Classic architecture: GoldenGate Studio will discover an OGG 12.3 Classic instance automatically, and you can deploy your existing designs to OGG 12.3 Classic instances. GoldenGate Studio can help you upgrade from older OGG versions, such as 12.1 and 12.2, to OGG 12.3 Classic. GoldenGate Studio can also help you build OGG 12.3 Classic instances using the reverse engineering feature, which allows you to model your existing replication environment in GoldenGate Studio. For more information about the reverse engineering feature, please read my blog. The Oracle GoldenGate for SQL Server team recently released the new CDC capture for SQL Server. Oracle GoldenGate Studio supports this new CDC capture, allowing you to design and deploy GoldenGate for SQL Server artifacts, including the new CDC capture, in Studio. Lastly, this bundle patch contains support for altering extract and replicat processes using the GoldenGate ALTER commands. Let me know if you have any questions.


Data Integration

How to Succeed with Data Ingestion: 5 Best Practices

Data is the starting point of every smart business decision. Virtually every improvement a business makes these days – anything from sales strategies to IT processes to web content and so on – is backed by data that tells a story. With the emergence of big data, businesses have access to huge volumes of structured and unstructured data. The challenge is learning how to use this data to its full potential. For data to be used to its full potential, it must always be available in any format, at any time, in any place you need it, and it must be governed to ensure trust and quality. Data must be ingested properly and then go through an ETL (Extract, Transform, and Load) pipeline in order to be trustworthy; if any mistakes are made in the initial stages, they pose a major threat to the quality and the integrity of the output at the end of the process.

Data ingestion is the first step in the data pipeline. It is the process of moving data from its original location into a place where it can be safely stored, analyzed, and managed – one example is Hadoop. As companies adjust to big data and the Internet of Things (IoT), they must learn to grapple with increasingly large amounts of data and varied sources, which make data ingestion a more complex process. It is important not only to be prepared for the present state of data ingestion, but to look ahead, as things change quickly. Here are a few best practices to consider as you reflect on your data ingestion strategy:

1. Determine whether you need batch ingestion, real-time streaming, or both. Data can be ingested from a batch data source, a streaming data source, or a hybrid of the two. Some businesses have to remain on premises due to regulatory or business requirements, while others are able to take advantage of the cloud. You can learn more about how to ingest data with Oracle Data Integrator and Oracle GoldenGate here.

2. Before you begin the ingest process, determine where your data will go and set up the proper hubs. Data lakes, data marts, and data warehouses are all separate entities, which people can sometimes confuse for one another. Before you begin to ingest your data into one or more of these spaces, consider the business problem you are trying to address. Based on your purpose, decide where the data will best live. To clear up any confusion: data warehouses ingest data via an ETL process and are given a pre-assigned schema to handle analytical processing. Data marts are similar to warehouses, but they handle more restricted, specific groupings of data – data marts are sometimes used to process smaller subsets of data that complement what is stored in a warehouse. Finally, data lakes are optimal for massive storage and analysis of both structured and unstructured data. Data lakes offer high availability, low costs, and the increased flexibility ideal for managing big data.

3. Be patient and manage your time accordingly. According to a study cited in Forbes, data scientists spend about 80% of their time preparing and managing data before analysis. This is time that would otherwise be invested in analyzing data; however, it is critical that when data is ingested, it is clean and prepared for analysis. IDC predicts that, by 2020, spending on data preparation tools will grow 2.5 times faster than spending on traditional IT tools for similar purposes – these tools will be built with the intention of simplifying and speeding up the data ingestion process.

4. Get on board with machine learning. Not too long ago, data ingestion could be performed manually. One person could define a global schema and simply assign a programmer to each data source; the programmers could then map and cleanse the data into the global schema. Now, the volume of data and the number of sources are too large for programmers to follow the same plan. Artificial Intelligence (AI) is quickly changing the way in which we process large amounts of information. It is helping us work more efficiently and is identifying patterns, definitions, and groupings that are often lost to human error and oversight.

5. Data governance is key. Data quality and security are extremely important for ensuring the accuracy and reliability of the insights and analysis the data reveals. It is one thing to initially cleanse data, but it takes a continued effort to maintain it. A data steward is responsible for maintaining the quality of each data source after the initial ingestion point: the steward determines the schema and cleansing rules, decides which data should be ingested into each data source, and manages the cleansing of dirty data. Of course, data governance is not limited to clean data – it also includes security and regulatory compliance. Keep in mind that data is cleansed and governed after ingestion to preserve its original context and to allow for further discovery opportunities. It is, however, important to plan the governance process ahead of time, so it is ready to manage data after ingestion.

Conclusion
Data ingestion is the first step in the data pipeline and it is hugely important. Businesses rely on good data to help them make smarter decisions, so it is important to take good care of your data from step 1. Start by identifying your business needs and the resources you will need to meet them. As you go through the data ingestion process, be prepared for setbacks and always plan ahead. For more information, check out Oracle Data Integration Platform Cloud (DIPC) for a comprehensive data integration platform. DIPC provides all the functional components necessary to fulfill raw ingestion in batch or real time, as well as the secondary phase of processing and operationalizing transformations and data quality routines.


Big Data Success with Oracle Data Integration

Everyday Data Integration
It is not every day that the average citizen wakes up thinking “Let me integrate some data today,” but that is precisely what they end up doing in many ways. From waking up to catching a favorite morning program on cable TV, to grabbing that cup of coffee from a major coffee retailer around the block, to binge-watching a favorite series on one of the leading subscription entertainment services, we come in daily contact with the business end of data integration. Of course, there is a nice human interface, delivered through our mobile phones and web browsers, that contextualizes and enhances our experiences on top of the data that we interact with. Insightful businesses and business leaders want to provide the best services, and they also look to delight and enhance customer experiences by providing personalized content and offerings. For this, deep insights and a complete understanding of customers are required. Data provides the raw information for these insights. Data also serves as the key ingredient which, when combined with other related data, bubbles up the key recommendations that go into many everyday and strategic business decisions, allowing companies to differentiate themselves with customized, excellent capabilities. Businesses often overlook the amount of planning and foundational work that goes into constructing the right data integration foundation for many transformation practices. In this series of blogs, I hope to bring out the role of data integration through various stories where Oracle Data Integration plays a critical role.

LinkedIn
LinkedIn is a great example of how Oracle Data Integration works behind the scenes to keep user experiences seamless. From ensuring that user profile data is accessible across the globe to making sure that user updates flow through instantaneously at the click of a refresh button, Oracle Data Integration underpins a well thought out architecture for one of the largest networking platforms. Here is a quick overview of how LinkedIn uses Oracle Data Integration, specifically Oracle GoldenGate for Big Data, to pass on the benefits of real-time updates and synchronization to its user community.

A Deeper Dive On How LinkedIn Achieved Real Time Glory

Architecture
The primary goal was to ensure that data is efficiently moved across a number of online data stores (Oracle and MySQL) to downstream data stores, eliminating bulky data dumps and multiple hops and standardizing the data formats. Oracle GoldenGate, the real-time replication engine that captures data changes and streams them to any required destination instantaneously, is at the core of this implementation. Oracle GoldenGate comes with off-the-shelf integration with many sources and targets, making it easier to implement and integrate with the existing systems that were already in use. This allows greater compatibility with Oracle and MySQL databases while also eliminating the multiple hops that the data used to go through to reach its final data store. Another critical criterion for selecting Oracle GoldenGate was the low impact on the source database when capturing data to be streamed. Oracle GoldenGate for Big Data has a very light footprint and minimal impact on business-critical source systems, an important consideration when fiddling around with carefully tuned applications and databases.

Big Data Considerations – Kafka\HDFS
Big Data platforms provide a cost-effective alternative to specialized hardware for both storage and computing in many scenarios. While data warehouses and high-speed analytics still benefit from optimized hardware, big data has gained tremendous viability for the deep data storage used for machine learning and artificial intelligence requirements. Oracle Data Integration has always recognized this need for heterogeneity, a factor that played into the decision to use Oracle GoldenGate for Big Data for this project. Oracle GoldenGate for Big Data has handlers (a handler being a prebuilt bit of technology to integrate with specific systems) for many big data standards and technologies, among others Kafka and the Hadoop Distributed File System (HDFS); a representative handler configuration sketch follows at the end of this post. Both came in useful in the pilot stages when determining which Big Data technology to adopt. Take a look at this datasheet for a more exhaustive capability list for Oracle GoldenGate for Big Data. But in short, this is how it can look:

The Oracle Data Integration Platform
LinkedIn is one of many customers who are innovating in their core businesses with cutting-edge technology from Oracle. Oracle Data Integration, in turn, is pushing the boundaries of data integration to enable customers to solve data challenges and turn them into opportunities for excellence. Oracle Data Integration’s products and services are evolving into a comprehensive and unified cloud service that brings together all the capabilities required for a data integration solution. The new platform, Oracle Data Integration Platform Cloud (DIPC), combines rich capabilities, a wide breadth of features, a persona-based user experience, and simple pricing and packaging to make our customers’ data integration journey easier. Learn more about Oracle Data Integration Platform Cloud here. Learn more about Oracle Data Integration here. Learn more about Oracle GoldenGate for Big Data here.
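To make the handler concept mentioned above a little more concrete, the GoldenGate for Big Data Kafka handler is driven by a properties file on the delivery side. The fragment below is only a representative sketch assuming a 12.3.x-style configuration; property names and defaults can vary by release, and the file name and topic template shown here are generic placeholders rather than anything used at LinkedIn.

# Illustrative Replicat properties for the GoldenGate for Big Data Kafka handler
gg.handlerlist=kafkahandler
gg.handler.kafkahandler.type=kafka
# Standard Kafka producer settings (bootstrap.servers, acks, ...) live in this file
gg.handler.kafkahandler.KafkaProducerConfigFile=custom_kafka_producer.properties
# Route each source table's changes to a topic named after the table
gg.handler.kafkahandler.topicMappingTemplate=${tableName}
# Emit change records as JSON; avro_op is another common choice
gg.handler.kafkahandler.format=json
gg.handler.kafkahandler.mode=op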


Oracle Data Integrator (ODI) to Extract Data from Fusion Application (Oracle ERP Cloud)

Guest Author: Srinidhi Koushik, Senior Sales Consultant at Oracle

Oracle Data Integrator (ODI) to extract data from Fusion Application (Oracle ERP Cloud)

In continuation of the previous blog, in which we discussed loading data into ERP Cloud, we will now discuss how to extract data from ERP Cloud using Oracle Data Integrator (ODI). ODI is available on-premises as well as in the Oracle Cloud with Data Integration Platform Cloud and ODI Cloud Service.

Overview of the integration process
In this blog, we will discuss the steps involved in downloading data from ERP Cloud. The steps are listed below:

a. The data extraction is executed via a web service call, which in turn invokes the BI Publisher report associated with that web service call.

b. The response of the web service call is a Base64-encoded string inside the "reportBytes" tag (a trimmed sketch of the response shape follows at the end of this post). It can be extracted with the following command:
Command: sed '/<ns2:reportBytes>/!d;s//\n/;s/[^\n]*\n//;:a;$!{/<\/ns2:reportBytes>/!N;//!ba};y/\n/ /;s/<\/ns2:reportBytes>/\n/;P;D' /home/oracle/Elogs/Report2_Resp.xml | tr ' ' '\n'

c. A Java program is used to decode the Base64 string into the required file format.

Java construct of the program:

import java.io.BufferedReader;
import java.io.FileOutputStream;
import java.io.FileReader;

import org.apache.commons.codec.binary.Base64;

public class DecryptBaseFile {

    public DecryptBaseFile() {
        super();
    }

    public static void main(String args[]) {
        BufferedReader br = null;
        FileReader fr = null;

        try {
            // Read the Base64 string extracted from the web service response
            fr = new FileReader("/home/oracle/Elogs/ReportBytes.txt");
            br = new BufferedReader(fr);

            StringBuffer fileData = new StringBuffer();
            String sCurrentLine;

            while ((sCurrentLine = br.readLine()) != null) {
                fileData.append(sCurrentLine);
            }
            System.out.println("Reading data done from file /home/oracle/Elogs/ReportBytes.txt");

            // Run the api to perform the decoding
            byte[] rbytes = Base64.decodeBase64(fileData.toString().getBytes());

            // Write the decoded bytes out as the target file
            FileOutputStream os = new FileOutputStream("/home/oracle/TgtFiles/Customer.csv");
            os.write(rbytes);
            os.close();
            System.out.println("File Customer.csv created successfully at /home/oracle/TgtFiles");
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("Error:Kindly contact admin");
            System.out.println(e.getMessage());
        }
    }
}

d. The created file is then copied to the source directory.

e. The file is then loaded into the necessary tables.

The 'OdiInvokeWebService' construct is provided below.
Please note: the OWSM rule that needs to be selected is oracle/wss_username_token_over_ssl_client_policy. For this to work, you need ODI Enterprise configured with an agent deployed in WebLogic Server.
The Topology associated with the report generation can be found below.
Inputs sought by the web service:

Conclusion: The features available in ODI make it relatively simple to generate a file, compress/zip it, call specific Java or Linux scripts and functions, and invoke the necessary web services on ERP Cloud to download the data in a specific format such as CSV, HTML, TXT, or Excel.
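For context on step (b), the sed command assumes a SOAP response in which the Base64 payload sits inside a <ns2:reportBytes> element. The snippet below is a heavily trimmed, purely illustrative sketch: the surrounding report-response elements and the ns2 namespace declaration are omitted, and the Base64 string is elided.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <!-- parent elements of the BI Publisher report response omitted -->
    <ns2:reportBytes>UEsDBBQAAAAIA... (Base64-encoded report content) ...AAA=</ns2:reportBytes>
  </soapenv:Body>
</soapenv:Envelope>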


Data Integration

Oracle Data Integrator (ODI) to Load into Fusion Applications (Oracle ERP Cloud)

Guest Author: Srinidhi Koushik, Senior Sales Consultant at Oracle

Oracle Data Integrator (ODI) is the best tool to extract from and load into Oracle ERP Cloud, given its dexterity with complex transformations as well as its support for heterogeneous source and target formats. ODI is available on-premises as well as in the Oracle Cloud with Data Integration Platform Cloud and ODI Cloud Service (ODI-CS). In this article we will walk you through how to upload data into Oracle ERP Cloud.

Overview of the integration process
In this series of blogs we discuss the two facets of integration: extraction and loading. In this specific article we will concentrate on loading data into ERP Cloud using the following process:

a) Create a data file based on the standard template (the list of mandatory columns) required by the table within Oracle ERP Cloud. The list of attributes or mandatory columns needs to be derived and a template created. The list of columns can be seen at the link below:
https://docs.oracle.com/en/cloud/saas/financials/r13-update17d/oefbf/Customer-Import-30693307-fbdi-9.html
The file name, in this case containing customer data, has to follow the naming convention given in the documentation above, e.g. HzImpPartiesT.csv.

b) Zip the content of the file. The file created by the ELT/ETL process needs to be zipped to meet the upload requirement as well as to compress the data. OdiZip can be used; alternatively, a script can be written to zip the file.

c) Convert the zip file to a Base64 string using a Java program or an alternative tool.
Java program construct: utilEncodeBase.java file content

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.codec.binary.Base64;

public class utilEncodeBase {

    public utilEncodeBase() {
        super();
    }

    public static void main(String[] a) throws Exception {
        // Enter the filename as input
        File br = new File("/home/oracle/movie/moviework/odi/CustomerImport.zip");
        // Convert the file into bytes
        byte[] bytes = loadFile(br);

        // Call the api for Base64 encoding
        byte[] encoded = Base64.encodeBase64(bytes);
        String encStr = new String(encoded);
        // Print the encoded string
        System.out.println(encStr);
    }

    private static byte[] getByteArray(String fileName) {
        File file = new File(fileName);
        FileInputStream is = null;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        int nRead;
        byte[] data = new byte[16384];
        try {
            is = new FileInputStream(file);
            while ((nRead = is.read(data, 0, data.length)) != -1) {
                buffer.write(data, 0, nRead);
            }
            buffer.flush();
        } catch (IOException e) {
            System.out.println("In getByteArray:IO Exception");
            e.printStackTrace();
        }
        return buffer.toByteArray();
    }

    private static byte[] loadFile(File file) throws IOException {
        InputStream is = new FileInputStream(file);

        long length = file.length();
        if (length > Integer.MAX_VALUE) {
            // File is too large
        }
        byte[] bytes = new byte[(int) length];

        int offset = 0;
        int numRead = 0;
        while (offset < bytes.length && (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0) {
            offset += numRead;
        }
        if (offset < bytes.length) {
            throw new IOException("Could not completely read file " + file.getName());
        }
        is.close();
        return bytes;
    }
}

d) Create an XML file to be passed via the web services to load the file. We have a Java program to embed the Base64 string in an XML document and then use this XML as the request file. The program below generates the XML from the input (Base64) string; a sketch of the generated payload follows at the end of this post.
Java program construct: genxml.java file content

import java.io.BufferedReader;
import java.io.FileOutputStream;
import java.io.FileReader;

import org.jdom.Document;
import org.jdom.Element;
import org.jdom.Namespace;
import org.jdom.output.Format;
import org.jdom.output.XMLOutputter;

public class GenerateXML {

    public GenerateXML() {
    }

    public static void main(String[] paramArrayOfString) {
        try {
            // Read the Base64 string produced by utilEncodeBase
            FileReader localFileReader = new FileReader("/home/oracle/Elogs/Conv_Out.txt");
            BufferedReader localBufferedReader = new BufferedReader(localFileReader);

            StringBuffer localStringBuffer = new StringBuffer();
            String str;
            while ((str = localBufferedReader.readLine()) != null) {
                localStringBuffer.append(str);
            }

            // Build the SOAP envelope for the uploadFileToUcm operation
            Document localDocument = new Document();
            localDocument.setRootElement(new Element("Envelope", "soapenv", "http://schemas.xmlsoap.org/soap/envelope/"));
            localDocument.getRootElement().addNamespaceDeclaration(Namespace.getNamespace("typ", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/types/"));
            localDocument.getRootElement().addNamespaceDeclaration(Namespace.getNamespace("erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/"));

            Element localElement1 = new Element("Header", "soapenv", "http://schemas.xmlsoap.org/soap/envelope/");
            localDocument.getRootElement().addContent(localElement1);

            Element localElement2 = new Element("Body", "soapenv", "http://schemas.xmlsoap.org/soap/envelope/");
            Element localElement3 = new Element("uploadFileToUcm", "typ", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/types/");
            Element localElement4 = new Element("document", "typ", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/types/");

            // The Content element carries the Base64-encoded zip file
            Element localElement5 = new Element("Content", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement5.addContent(localStringBuffer.toString());
            localElement4.addContent(localElement5);

            // Remaining document attributes are created empty and can be populated as needed
            Element localElement6 = new Element("FileName", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement4.addContent(localElement6);
            Element localElement7 = new Element("ContentType", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement4.addContent(localElement7);
            Element localElement8 = new Element("DocumentTitle", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement4.addContent(localElement8);
            Element localElement9 = new Element("DocumentAuthor", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement4.addContent(localElement9);
            Element localElement10 = new Element("DocumentSecurityGroup", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement4.addContent(localElement10);
            Element localElement11 = new Element("DocumentAccount", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement4.addContent(localElement11);
            Element localElement12 = new Element("DocumentName", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement4.addContent(localElement12);
            Element localElement13 = new Element("DocumentId", "erp", "http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/");
            localElement4.addContent(localElement13);

            localElement3.addContent(localElement4);
            localElement2.addContent(localElement3);
            localDocument.getRootElement().addContent(localElement2);

            // Write the pretty-printed payload to disk
            XMLOutputter localXMLOutputter = new XMLOutputter(Format.getPrettyFormat());
            localXMLOutputter.output(localDocument, new FileOutputStream("/home/oracle/TgtFiles/uploadfiletoucm_payload.xml"));
        } catch (Exception localException) {
            localException.printStackTrace();
        }
    }
}

e) Upload the file to UCM Cloud using the appropriate web service call, with the Base64 string as the input value for the "Content" tag in the XML request file. In the response file, capture the value of the "result" tag.

f) Extract the value of the result:
grep -oE '<result xmlns="http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/types/">[^<]*</result>' /home/oracle/Elogs/Con_Resp.xml | awk -F"[<>]" '{print $3}'

g) Assign this value to a variable to be used for the subsequent web service call.

h) Invoke ERP Cloud to trigger the import process, using the appropriate web service call.

Here is an example of this process implemented in an ODI Package. The 'OdiInvokeWebService' construct and the topology created for this requirement are provided below.
Please note: the OWSM rule that needs to be selected is oracle/wss_username_token_over_ssl_client_policy. For this to work, you need ODI Enterprise configured with an agent deployed in WebLogic Server.
Inputs sought by the web service:
Screenshot from ERP Cloud of loaded customers and statistics:
Customers loaded in ERP Cloud from ODI:

Conclusion: The features available in ODI make it relatively easy to generate a file, compress/zip it, call specific Java or Linux scripts and functions, and invoke the necessary web services on ERP Cloud to upload the necessary files into a specific ERP Cloud directory.
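To make step (d) concrete, the payload that the GenerateXML program writes to uploadfiletoucm_payload.xml has roughly the following shape. The Base64 string is elided here, and the empty elements are the ones the program creates without values, to be populated as your upload requires:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:typ="http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/types/"
                  xmlns:erp="http://xmlns.oracle.com/apps/financials/commonModules/shared/model/erpIntegrationService/">
  <soapenv:Header/>
  <soapenv:Body>
    <typ:uploadFileToUcm>
      <typ:document>
        <erp:Content>...Base64 string read from Conv_Out.txt...</erp:Content>
        <erp:FileName/>
        <erp:ContentType/>
        <erp:DocumentTitle/>
        <erp:DocumentAuthor/>
        <erp:DocumentSecurityGroup/>
        <erp:DocumentAccount/>
        <erp:DocumentName/>
        <erp:DocumentId/>
      </typ:document>
    </typ:uploadFileToUcm>
  </soapenv:Body>
</soapenv:Envelope>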


Data Integration

DI Connection: Where Machine Learning Meets Data Integration

Much of the data we manage day-to-day lives in silos and, while the data within each silo can be analyzed on its own, this produces limited value for businesses. The deeper value becomes apparent when the disparate data is connected and transformed to produce patterns, insights, and predictions. This undertaking requires a significant amount of time and labor only to be susceptible to human error. As businesses are faced with analyzing big data from heterogeneous sources as quickly as possible, data integration will increasingly look to automation and machine learning for the heavy lifting. Businesses are pulling data from CRM systems, social media, apps, and a swarm of other possible sources and this data is often unstructured. With so much information available and constantly updated in real time, it is impossible to keep up with all the data through human power alone. Therefore, machine learning becomes necessary when handling data across disparate sources. Artificial Intelligence continuously develops new and improved algorithms as it is fed with more data, becoming smarter and more agile in finding connections between and among various data points. Machine learning is on track to become a driving force within data integration systems. Here are a few ways in which machine learning is changing the game: Decision-making: AI-powered predictive analytics find insight in data beyond human capability. One of the greatest strengths an AI possesses is the ability to connect data points from heterogeneous sources and identify previously unrecognized connections and patterns. This creates a new level of transparency when making decisions and empowers businesses and leaders to make more thoughtful, better-informed choices to increase profits, to improve internal efficiency, to create more targeted brand messaging, and so on. Productivity: Machine learning helps automate data-related tasks that are too large or complex for manual processing or simply too tedious. For example, AI can categorize data and identify patterns, screen for data quality, and suggest next best actions. These capabilities save both time and money by reducing human labor and potential errors that would require additional time-consuming work to repair. Changing Workloads: AI cloud-based workloads are quickly growing and offer the benefits of automated scalability and elasticity as well as the speed and ability to integrate with a variety of other tools. Workloads change through the day, month, and year, and businesses need the flexibility and high availability to handle those changes. The underpinnings of machine learning are already visible in Oracle’s on premise Data Integration portfolio and Data Integration Platform Cloud offerings. Currently, this is most obvious when it comes to profiling in the catalog, which allows users to get more information about connections and metadata before actually beginning the integration process. Another project in progress is a tool that can take in unstructured data and normalize it to then create data sets within a data lake. In the future, this will lead to better predicting how we want to use data – whether it is better for querying or processing and how to move it, whether in real time or in batch. Long term, we plan to integrate machine learning and AI on multiple levels of our data integration offerings. 
A couple of areas in particular will include 1) AI-fueled monitoring to predict outages and lags, and 2) building out an intelligent designer, which will eventually make data mappings almost entirely automatic and offer relationship and transformation recommendations. Machine learning will surely become an integral part of every strong, adaptive data integration solution in the near future. Ultimately, the role of artificial intelligence in data integration is to increase automation and to strengthen intelligence via machine learning in order to decrease integration costs, human labor, and human error, while producing valuable business intelligence that leads to more strategic decision-making, better productivity, and more accurate, targeted marketing, among other possible benefits.


GoldenGate Solutions and News

Oracle GoldenGate Veridata 12.2+ new enhancements

In this series of blogs around the Oracle GoldenGate Foundation Suite (OGFS) products, I covered the new features for OGG Monitor in my previous blog. Today, I am pleased to announce that we released the latest OGG Veridata bundle patch (12.2.1.2.171215) at the end of December. In this Veridata bundle patch, we have added support for Hive (big data) and automatic key column mapping.

Let me first write about automatic key column mapping. In the absence of unique identifiers, such as primary keys and unique indexes, which can uniquely identify each row, you need to manually define the column or columns to be used as a unique identifier. If you have hundreds of such tables, it is excessive work to select key columns for each table in the user interface (UI). There was a workaround to use Veridata GoldenGate Parameter Processing (VGPP) to create the compare pairs, but it still required some work on your side. Now, with this new bundle patch, Veridata selects key columns automatically for you based on the inputs provided while creating compare pairs or connections. If you have enabled automatic key column mapping at the connection level, then any group using this connection has the feature enabled. You can override automatic key column mapping at the compare pair generation stage.

The connection level has two new options:

Use Source or Target Columns as Key Columns When Generating Compare Pairs: If the source table has a primary key or unique index defined, but the target does not, then Oracle GoldenGate Veridata uses the same columns (as the source) as unique identifiers for the target. Similarly, if the source does not have a primary key or index but the target does, then the key columns of the target are used for the source. If any of the primary key or index columns is not present on the target side, then the primary key or index is not considered.

Use All Columns as Key Columns When Generating Compare Pairs: Enables automatic mapping for the source and target connections. If this option is disabled at either the source or the target connection, then automatic mapping is also disabled for that group. If you enable this option to map all columns from source and target, the mapping is considered only when both the source and the target table have no primary or unique keys.

If you do not want to specify the automatic key selection options at the connection level, you can do so while configuring the compare pair as well. The manual and pattern compare pair configurations have the following new options:

Use Source Key Columns for Target: Enables automatic mapping for the target table. If you disable this option for the target side, and there is no primary key or unique index on the target side while the source has one, then only the column mapping happens; automatic key mapping does not happen on the target table.

Use Target Key Columns for Source: Enables automatic mapping for the source table. If you disable this option for the source side, and there is no primary key or unique index on the source side while the target has one, then only the column mapping happens; automatic key mapping does not happen on the source table.

Use All Columns as Key Columns When Generating Compare Pairs: Enables automatic mapping for all columns. Ensure that this option is selected for both the source and the target groups. If the option is disabled at either the source or the target level, then automatic mapping is disabled for the corresponding groups.

Once you have provided your inputs and the compare pair is created, you can see what Veridata has selected for each column under "Key Mapping Method", as shown in the figure.

The other important feature released in this bundle patch is support for Big Data Hive. We know that many of you replicate data into big data targets. However, how do you make sure that the replicated data is correct? Veridata has always provided validation support for databases, and now we have added support for our first big data target, Hive. You can compare data between your source relational databases (Oracle, MSSQL, DB2 LUW, Sybase, Informix, etc.) and Big Data Hive (2.1.1+). You can even do Hive-to-Hive comparisons. The Hive comparison does not require any different options to be selected; you configure it just like an Oracle-to-Oracle comparison. You need to create the Hive connection (for your source or target) and create groups that contain compare pairs from the source database and the Hive database. Once all the compare pairs and the options for primary key selection are set, you can run the job. The Hive feature supports delta comparison and raw partition; however, it does not currently support Hive authentication or the Repair feature.

Let me know if you have any questions.


Data Integration

Data Integration 2017 Wrap-Up

We are wrapping up 2017 with a summary of all the latest announcements, product features, and happenings in our Data Integration space. I sat down with one of Oracle’s best Product Management experts in the DI space, Julien Testut, Senior Principal Product Manager, to get his insight into the latest happenings. Q: Can you summarize the major accomplishments achieved in these past few months? Without a doubt, it is the recent launch of the Oracle Data Integration Platform Cloud (DIPC). This is extremely exciting as it unifies everything we have been doing for years in the Oracle Data Integration team into a single unified Cloud offering: data movement, data preparation, data transformation and data governance. We are off to a great start and are already pushing our second update this month with release 17.4.5. It includes a brand new user experience with the DIPC Console, a central data catalog with powerful search capabilities, an innovative way to solve data synchronization problems through a powerful yet easy to use Task as well as Remote Agents that can be installed anywhere and give us access to any data. In addition, DIPC comes with full access to Enterprise Data Quality, Data Integrator, and GoldenGate so it can truly address any data integration requirement. Q: Were there any great customer stories that you would like to share with our readers? I had the pleasure to get on stage with ARRIS at OpenWorld this year and I think their story is a great example of what innovative customers can do with our Oracle Cloud technologies. ARRIS is a world leader in video and broadband technology and they have started to use our Oracle Cloud Applications and Platform including Oracle Data Integrator Cloud Service or Oracle Integration Cloud Service to provide a unified, modern business platform, improve agility and reduce costs. ARRIS was able to go from Development to Production in a matter of weeks with Data Integrator Cloud Service and the whole project went smoothly. I am very excited about our partnership with ARRIS and how we can help them go through their digital transformation now and in the future. Q:  What are some key data integration challenges that companies should be thinking about today? One key challenge is that data is now everywhere and how companies can harness value from that data is critical to their success. The great news is that most companies have realized that data is a core asset but we also know that creating value from data is hard. Not only is there a proliferation of data as I mentioned but it also now comes in different shapes and formats and needs to be processed in more flexible ways than ever before. This will keep data integration professionals busy for years to come! This requires using solutions that can deal with any data format and shape, can help integrate it with newer emerging technologies in batch or real-time as well as help govern it. Another critical challenge is how to enable business users to access the data they need in the shape they want and when they require it. As you know, the business demands faster data access in order to derive more value from it. This requires providing self-service data access to anyone who needs to consume data as well as more agile and flexible ways to process that data. Obviously, this raises questions around data governance as you still need to know who did what and when with the data and be able to report on it when required. 
Finally, and I think it is related to the challenges mentioned above, companies have to start thinking about and implementing more automated data management processes using machine learning techniques. Since there is so much data out there and because data integration is so complex, companies will have to automate as many activities as possible in order to keep up. There are many data processing and data ingestion areas that require doing the same tasks repeatedly. Companies can benefit tremendously from some automation and recommendations in order to make their users even more productive. Obviously, we aim to help companies tackle these challenges with our Data Integration Platform Cloud offering. You can watch this video for an overview tour of DIPC: Q: What else should people know about Oracle’s Data Integration offerings? There would be so much to talk about but let me point out a couple of things. First, Data Integration at Oracle is unique. For example, we were the first vendor to provide enterprise-class data replication capabilities from relational systems into Big Data with GoldenGate for Big Data. Oracle Data Integration is also highly differentiated: we have been providing best-of-breed pushdown data processing capabilities with Oracle Data Integrator since before it became mainstream, at a time when most vendors still relied on separate hardware to handle transformations. Finally, Oracle delivers truly innovative functionality in that space. Data Integration Platform Cloud is a great example of that, as no other vendor can provide the same level of functionality in a single offering, and this is only the beginning of the DIPC journey for us! Q: What can our customers look forward to in the data integration space with Oracle in 2018? We have a very exciting line up in 2018 with several new releases coming up both in the Cloud and on-premises. Customers can expect us to accelerate the pace of innovation throughout our offerings and to continue to deliver the functionality they need to design their data-driven solutions and solve their challenges. You should also see many synergies between the various components of the Oracle Cloud platform. This is going to be critical for customers as they adopt and standardize on our Cloud Platform and want, for example, to reuse Data Integration processes in their Analytics or Application Integration solutions. Stay tuned for more announcements as we move forward! To learn more about Oracle’s Data Integration Platform Cloud, watch the webcast ‘Supercharge Your Data Integration.’ Happy Holidays from all of us at Oracle!


Data Integration Platform Cloud (DIPC) 17.4.5 is available!

Data Integration Platform Cloud (DIPC) was released about a month ago (Unveiling Data Integration Platform Cloud) … and already, we have more to tell you about! Data Integration Platform Cloud (DIPC) 17.4.5 is now available! Here are some of the highlights:

DIPC boasts a new user experience, including a new landing page. You will find shortcuts guiding you towards typical activities such as creating new Connections, new Tasks, or browsing the Data Catalog.

The DIPC Catalog is now available, providing access to a unified metadata repository. You will be able to browse metadata artifacts: Connections, Data Entities, and Tasks. You will also have Advanced Search capabilities with inline results and property search, as well as dedicated pages for each object with additional information, including a data viewer and profiling metrics for Data Entities.

The first elevated task, Synchronize Data, has arrived. Let DIPC take you through how easy it is to synchronize data between a source and a target schema! The process is easy to configure and run. This Task will do an initial load using Oracle Data Integrator (ODI). Then, changed data is captured and delivered using Oracle GoldenGate (OGG). Using OGG and ODI together ensures the best performance and zero data loss!

Monitoring is key, right? DIPC includes powerful monitoring capabilities where every Job execution is displayed along with the corresponding metrics and errors. You are able to filter and search job instances, or drill down into detailed runtime information. You can also set up Policies and receive Notifications.

This release of DIPC also introduces the Remote Agent functionality. The Remote Agents can be installed anywhere and help DIPC move data from on-premises systems to the cloud and vice versa. Are you doing on-premises to on-premises work? DIPC can still help! Remote Agents give you the ability to create custom replication processes that will move data between two on-premises systems without routing any data to the cloud!

Want to learn more? Visit the DIPC site and view the Webcast: Supercharge Your Data Integration.


Data Integration

Data Integration Corner: Catching up with Chai Pydimukkala, Sr. Director, Product Management

With Oracle’s new Data Integration Platform Cloud (DIPC), our customers have access to a truly unified, streamlined way to manage and transform their data to maximize its business value. For the final chapter in the Data Integration Corner blog series, I sat down with Chai Pydimukkala, Senior Director of Product Management, to get a deeper understanding of the value of data and the unique offerings of DIPC. Q: Hi Chai, to start, please tell me a bit about yourself and what you do here at Oracle. A: I joined Oracle almost 4 years ago, but I have been in the data integration and data management space for around 17 years now. At Oracle, I focus on data replication and GoldenGate technologies on the product management side, and I also oversee and help with the overall product strategy for data integration as a whole. Q: How is DIPC changing how companies approach data integration? A: Data integration is not a new area. We have seen a lot of companies come and go, and most of the technologies that have been built in this space have been trying to address one problem: just ETL, just replication, or just data quality, for example. All of these companies were trying to solve data integration problems in a siloed fashion. Oracle has all the individual tools as well, but we recognized the unique opportunity to combine all our data integration features and functionalities. We made sure to keep the individual tools intact for our tens of thousands of existing customers. But then, we were able to distill and make all these same functionalities available to users in a fluid, cohesive way without making them choose among a set of separate tools. With this platform, they get extensive capabilities related to data integration, and it helps customers adopt a streamlined way to manage their data and to meet needs before they are even aware they have them. There is no other company that can provide a comprehensive data integration cloud suite available in the market today. We look at metadata management with a task-driven approach: if you look at enterprises, they start with a business problem. For example, “I want to migrate my business data from on premise to the cloud. How can I do that?” or “I want to create a reporting system in the cloud.” In the operational world, you have to break down the available tools and do a lot of granular-level work to figure out how to get what you need. We are making it easy for our customers: you come into our system, we walk you through a few user-friendly steps so you can say “I want to go from Point A to Point B,” and then you implement the task without all the messy work in between. When this happens, you can go back and audit your data to understand, at a more broken-down level, what happened to your data. That’s really the unique value proposition of DIPC. Q: How are data integration and app integration different? A: Integration is a problem everyone faces. At the core, integration is nothing but exchanging information between various systems. Of course, that information comes in many different formats and from different places, and that is where it becomes difficult. In data integration, we are not really looking at the business context – we don’t differentiate between a customer, an employee, or a service ticket, for example. Everything is done via tables. We are not doing event-based integration. For example, say you added a new employee and their joining your company is marked as an event in your HR system. As soon as the employee is added, you have to make sure their information is uploaded. Let’s say you are doing your payroll through a third party and need to alert that third party. That information exchange is application integration. In a data integration scenario, by contrast, let’s say you are looking to find patterns in your employees’ salaries and want to find the average, the maximum, and the minimum you are paying. You end up pulling all the data from the HR system and creating a report. This is an example of a data integration problem. Data integration happens when you have large amounts of data you want to transform in order to find new insights. Q: Why is data so important? A: Customers have realized that data is the biggest asset companies have. For example, I used to receive a huge booklet from one retailer years ago – it covered all of their products, whether they were relevant to me or not, and the booklets were costly to produce and to distribute. They did this because they did not have enough data about their customers, so their strategy was to cover all the bases as a precaution. Once they realized the negative impacts of this strategy, they built a Customer 360 system using data from social media, directly from customer transactions, and from other sources. This allowed them to understand what each customer was looking for and to then create very specific, targeted advertisements. The booklets became much more concise and relevant to my purchase history and were supplemented with targeted digital ads. Targeted marketing campaigns have exploded in the last few years as companies have come to realize the time and resources they can save and the revenue boost they can gain by treating customers as individuals. None of that would be possible without data. The only way you can grow your company’s revenue is if you know your customers completely and you know where they will invest their money. As the holidays and New Year approach, the “Data Integration Corner” series comes to a close with this blog. It has been a pleasure to pick the brains of several members of Oracle’s Data Integration product management team. I encourage you to go back and catch up on the previous posts in this series with Nishit Rao, Denis Gray, and Sandrine Riley. To learn more about DIPC, take a look at a recent article from DBTA: Oracle Announces Comprehensive Data Integration Platform Cloud and check out the eBook & data sheet.


GoldenGate Solutions and News

Oracle GoldenGate Monitor 12.2 new enhancements

Data integration needs are growing day by day, and your replication needs in particular are increasing as your business evolves. At Oracle, we strive to provide new enhancements for the products around Oracle GoldenGate to make your experience better. I am talking about the Oracle GoldenGate Foundation Suite (OGFS), a broader umbrella that covers Oracle GoldenGate Studio and the Oracle GoldenGate Management Pack (Veridata, GoldenGate Monitor, Director, the REST-based interface, and the OEM Plug-In). In this series of blogs, I will update you about the new features that we have recently added to the OGFS products. Let's start with GoldenGate (OGG) Monitor in this blog.

The OGG Monitor bundle patch (12.2.1.2.171115) was released at the end of November. Mainly, we have provided three enhancements and a few improvements around the product. When you use the Oracle Big Data Adapter, you have to provide a .properties configuration file for each big data target; these .properties files store the metadata related to the big data target. With this release, you can edit and save the OGG for Big Data parameter file and .properties file from the Monitor editor, so all the configurations can be viewed, modified, and saved centrally from Monitor.

The other enhancement was around improving purging of the history table. GoldenGate Monitor stores historical data in the repository database, and you can use Oracle or SQL Server databases for this. In this release, we improved the performance of purging historical data for an Oracle database repository. As your historical information grows, purging and maintaining the massive amount of data was time-consuming. Now, every month OGG Monitor creates a new partition and stores all the data in it, and when you purge the historical data, the partitioning allows Monitor to purge the data faster. If you are an existing user and want to get the benefit of this purging improvement, you can partition your current history table by executing the oracle_partitioning.sql script after an upgrade. If you choose not to execute the script after the upgrade, your repository table is not changed, and you continue running the repositories as before.

Oracle GoldenGate delivered a major release last month, OGG 12.3. In this version, Oracle GoldenGate supports two modes, the Classic and the Microservices architecture. The Microservices architecture is a new-age, cloud-ready, REST services-based architecture that empowers you to design, build, embed, and automate all aspects of your replication environment on premises and in the cloud. You can find more details about OGG 12.3 in the blog written by my colleague Thomas Vengal: Release Announcement for Oracle GoldenGate 12.3. OGG Monitor certifies the OGG 12.3 Classic mode, so you can now monitor OGG 12.3 Classic mode instances from OGG Monitor. To monitor OGG 12.3 heterogeneous platforms using Classic mode, please make sure that you have a more recent patch installed for OGG. In future articles, I will write about OGG Studio, the OGG OEM Plug-In, and Veridata.


Data Integration

Data Integration Corner: Catching up with Sandrine Riley, Principal Product Manager

In her role as Principal Product Manager in a team focused on data integration, Sandrine Riley has a unique, overarching perspective on the field. I recently spent some time with Sandrine learning about her role, how Oracle’s new Data Integration Platform Cloud fits into the current landscape, and how that landscape is evolving. Take a look and make sure to register for our webcast on December 7 for an in-depth look at the Data Integration Platform Cloud. Q: Hi Sandrine, tell me about yourself and your role here at Oracle. A: I am one of the product managers in the data integration area and I have a somewhat unique role where I don't necessarily interface with development on a day-to-day basis like many other product managers do. I work a lot more directly with our Customer Advisory Board and on many of our events that are data integration heavy, so I have the opportunity to really hear from our customers on a first-hand basis and understand their needs and experiences on a more personal level. Q:  What does a typical day as a data integration PM look like for you? A: It really varies because I work across a broad range of solutions including GoldenGate, Enterprise Data Quality, Enterprise Metadata Management, Oracle Data Integrator, and of course, Data Integration Platform Cloud. A lot of it has to do with interacting with customers - evangelical work where we are really trying to make sure our customers understand our new products and we understand their needs.  Q: What is one word that comes to mind for "data integration"? A: The first word that comes to mind is "plumbing." The importance of data integration is not always clear at face value but it's the foundation below entire enterprises. Q: What kind of customer is Data Integration Platform Cloud built for? A: What's really unique about DIPC is the fact that we're combining so many varying personas within an organization. It's the first time you'll see so many functionalities wrapped into one platform. Realistically, this addresses all customers because it is a need all customers have. If the data integration part of your business is not done right, there are many other sacrifices that are made at higher levels that become more obvious – that really ties back to my ‘plumbing’ comment. That's why it's so important to get data integration right, so that you can make valuable, insightful, and appropriate business decisions. In the end, DIPC is applicable across the board to all industries of all sizes, because data matters to everyone. Q: A lot of the data companies use resides outside the company. Why does that matter? A: We don't always have control over what kind of data we get. It may be in a format we don't understand or have not dealt with before. Being able to transform that into data we can process and incorporate it with other data is critical because data is one of our most valuable assets. If you don' have an appropriate data integration layer, it is really difficult to trust and value the insights that you think you are getting out of your data. Q: Where do you see the data integration field going in the next couple of years? A: I think we will continue to see more and more synergy in how we approach data integration. People need to do more with less and they are really interested in a better user experience. Our goal is to capture that and make a great user experience that is efficient and drives business value. 
As data sources become increasingly diverse, we are also seeing a need for simplicity in the ‘plumbing’ of our data integration tools. Streamlined processes and trustworthy data are and will continue to be two critical pillars of data integration. Data and its sources can be unpredictable and ever-changing, which is why the tools we use to make sense of it are so critical. I appreciated Sandrine’s perspective here – after working closely with so many of our customers, she has seen the direct impact tools like GoldenGate, Oracle Data Integrator, and others have had on our customers’ business value and she has also picked up on the even greater impact weaving these tools together has had. Research like this brought us to the Data Integration Platform Cloud, as we move forward in our goal of making data integration simple, reliable, and effective for all our customers.

In her role as Principal Product Manager in a team focused on data integration, Sandrine Riley has a unique, overarching perspective on the field. I recently spent some time with Sandrine learning...

Data Integration

Walkthrough: Oracle Data Integration Platform Cloud Provisioning

We recently launched Oracle Data Integration Platform Cloud (DIPC), a brand new cloud-based platform for data transformation, integration, replication and governance. In case you missed it, you can get more information about it in Unveiling Data Integration Platform Cloud. In this article, I will focus on the provisioning process and walk you through how to provision a Data Integration Platform Cloud instance in the Oracle Cloud. First, you will need to access your Oracle Cloud Dashboard. You can do so by following the link you received after subscribing to the Oracle Cloud or you can go to https://cloud.oracle.com and Sign In from there. We will start with the creation of a new Database Cloud Service (DBCS) instance. This is the only pre-requisite before you can create a Data Integration Platform Cloud (DIPC) instance as DBCS is used as a repository for the DIPC components. From the Dashboard, select Create Instance: Then click on Create next to Database: Follow the instructions located in the DBCS documentation to create your Database Cloud Service instance: Creating a Customized Database Deployment NOTE: Make sure you create a Custom instance with Database Backups enabled (select ‘Both Cloud Storage and Local Storage’ under Backup Destination) when creating it to make sure it will be compatible with DIPC. When your Database Cloud Service (DBCS) instance is provisioned, you can move forward with the provisioning of your Data Integration Platform Cloud (DIPC) instance. Go back to the Dashboard, click on Create Instance, click on All Services and scroll down to find Data Integration Platform under Integration. Click on Create next to it. This will get you to the DIPC Service Console page: Click on Create Service to navigate to the provisioning screens. In the Service screen, enter a Service Name, the size of the cluster and the Service Edition. In this example, I have selected the Governance Edition that includes Data Governance capabilities in addition to everything included with DIPC Standard and Enterprise Editions.   Click Next to access the Details page. There you can: Select the Database Cloud Service (DBCS) instance you have previously created and enter its details under Database Configuration Specify your Storage Classic account and its details under Backup and Recovery Configuration Define the WebLogic Server Configuration for this DIPC instance. WebLogic is used to host components such as the DIPC Console, Enterprise Data Quality (EDQ) Director or Oracle Data Integrator (ODI) Console: When done, click Next to review the configuration summary: Finally click Create to start the DIPC instance creation. Next, we can track the progress of the Data Integration Platform Cloud (DIPC) instance creation in the DIPC Service Console. To go there we first need to show DIPC in the Oracle Cloud Dashboard. From the Dashboard, click on Customize Dashboard: Scroll down in the list and click Show under Data Integration Platform Cloud: Data Integration Platform Cloud (DIPC) will then start appearing on the Dashboard: Click the Data Integration Platform Cloud (DIPC) tile to see more details about your subscription and click Service Console to view your DIPC instances: You will then see your DIPC instance in the process of being created with the ‘Creating service…’ status: You can click on the instance name to get more details about it and the creation process (under In-Progress Operation Messages): When the provisioning process is over, the DIPC instance will show as ‘Ready’: That’s it! 
Congratulations! We now have a new Data Integration Platform Cloud (DIPC) instance to work with. You can get more information about it on the product page: Data Integration Platform. In future articles, we will cover how to start using the various components included with DIPC. Do not miss our webcast Supercharge Your Data Integration on December 7th to learn more about Oracle Data Integration Platform Cloud (DIPC).

We recently launched Oracle Data Integration Platform Cloud (DIPC), a brand new cloud-based platform for data transformation, integration, replication and governance. In case you missed it, you can get...

Data Integration

Data Integration Corner: Catching up with Denis Gray, Sr. Director, Product Management

Denis Gray, Senior Director of Data Integration Technology, has dedicated his career to the data integration space. I recently had the opportunity to sit down with Denis and pick his brain about all things data integration. My conversation with Denis was eye opening. In just the last few years, customers’ data integration needs have experienced an overhaul. In particular, the need to process big data and merge data across disparate environments has meant customers need more powerful, more versatile, and faster tools. I was also reminded that we never truly know what our data is capable of telling us, and we can never fully predict how it can move our businesses forward, until we are presented with a tool that can show us what it’s capable of.

Q: Hi Denis, tell me about your background with data integration and your role at Oracle.
A: I have been in data integration since 2000, when I began working with Hyperion Solutions. After Oracle acquired Hyperion in 2007, I started working with Jeff Pollock and found his team to be a really great fit for me. Over the years, Oracle has added on capabilities thanks to the GoldenGate acquisition, various data quality acquisitions, and others. As we moved into the cloud, we saw an opportunity to bring all these capabilities together, which was the inception of the Data Integration Platform Cloud (DIPC).

Q: What are the stand-out features of DIPC?
A: In a nutshell, having a cloud platform for data integration allowed us to bring together the best-of-breed data integration engines that we had for on-premises use and transform them for the cloud. So we looked at bulk-type transformations of data, real-time data integration, and being able to provide data governance, data lineage, and business impact along with metadata quality and enterprise management. Moving to the cloud allowed us to bring these products together and also to build additional functionality on top of that. Our core goal with DIPC and with all the technologies available before has been to help our customers derive more value from their data. We brought together this set of products to allow the DBA, the ETL developer, and the Data Steward to be more collaborative and to more efficiently meet their data integration needs.

Q: What is the role of big data in data integration?
A: Big data has definitely brought integration tools a long way. The thing is, we can look at big data in relational databases without a platform, but there are so many other elements that come into play, like IoT and different devices, generating massive amounts of data that needs to be intertwined. Some of the data our customers work with lives in relational databases, but plenty of other data lives in big data environments, whether it has been pushed out to Spark or back to Hadoop itself; that’s where we’ve been seeing our customers struggle to connect data over the years. All the new data integration tools available now are helping to bridge the gap between big data in disparate environments. Now we are empowering IT users and ETL developers to use machine learning, to use Spark libraries on their data sets, or to use the massive parallelism within their big data and Hadoop environments to provide their transforms.

Q: How is Oracle addressing the diverse needs of its customers?
A: We know that every Oracle customer out there has different types of relational databases as well as different types of sources and targets, so we made the decision early on that Oracle’s data integration solutions would support all of our customers’ needs.  When we first came out with Oracle Data Integrator (ODI) for example, we made sure it wasn’t just the best for Oracle, but it was the best for any type of relational database out there.  We are not locking any of our customers into a specific set of underlying technologies – we want to continue to be open and are committed to continuing to grow our ability to support as many other services as possible.  On top of that, we want to run not only in the cloud but also in hybrid mode so we can have our agents run on premise, in third-party data centers, and really with any type of source, any type of target, and any type of scale.   Q: Most companies are using one analytics software or another. Don’t those technologies already have elements of data integration built into them? A: Not really. If we were to look at the top analytics technologies, they have some limited capabilities that you may be ok with if you’re looking for high-level information once a month and are willing to wait 20 minutes for your report to run. If you want to end up with data in a format that many users can access that’s already been aggregated, transformed, and in a dimensional model, that’s where data integration tools need to come in.     To learn more about Data Integration, I invite you to join us on December 7 at 9am PST for a data integration deep dive webcast with Denis and Madhu Nair, Director of Marketing, Data Integration. Register now!

Denis Gray, Senior Director of Data Integration Technology, has dedicated his career to the data integration space. I recently had the opportunity to sit down with Denis and pick his brain about...

Data Integration

Webcast: Supercharge Your Data Integration

Join us on December 7 at 9am PST for a webcast with Denis Gray, Senior Director of Product Management, and Madhu Nair, Principal Product Marketing Director, as they dive deep into the Oracle Data Integration Platform Cloud, which we premiered just last month at Oracle OpenWorld. Oracle is taking Data Integration to the next level with the recent launch of its Data Integration Platform Cloud (DIPC). DIPC combines data movement, data replication, and data governance into a single and unified platform, providing a powerful and intelligent foundation for businesses to take advantage of data for better decision-making. Join us on this webcast as we discuss how Oracle Data Integration Platform Cloud:
Provides a unified and comprehensive cloud platform for data integration and governance
Supercharges heterogeneous data integration across hybrid environments
Enhances big data and streaming capabilities with both real-time data streaming and batch data processing
You do not want to miss this webcast. Register now! In the meantime, take a look at how our customers are using data integration in the cloud in this ebook.

Join us on December 7 at 9am PST for a webcast with Denis Gray, Senior Director of Product Management, and Madhu Nair, Principal Product Marketing Director, as they dive deep into the Oracle Data Integ...

Data Integration

Data Integration Corner: Catching up with Nishit Rao, Sr. Director, Product Management

The recent launch of Oracle’s Data Integration Platform Cloud has sparked many valuable conversations about data integration and its role in today’s integration landscape. I recently had a chance to sit down with Nishit Rao, Senior Director, Product Management, to discuss why data integration matters and how it has changed over time. Take a look:

Q: Hi Nishit, to begin, tell me about yourself and your role here at Oracle.
A: I’m in the data integration team and, within this team, my main focus is providing leadership for everything related to SaaS. So far, the data integration team has focused on databases and data warehouses. One of the new things we are focusing on as we move into the Cloud is the value of SaaS applications in data integration. This is because SaaS is such an important part of Oracle and so important to our customers, even if they don’t have Oracle SaaS.

Q: Recently, Oracle announced the availability of a brand new data integration platform that has been well-received. Can you give me an overview of what this platform does?
A: Data Integration Platform Cloud brings together all the data services that a customer needs in the context of real-time data replication and batch data integration. On top of this, you can layer in enterprise data quality and governance. In essence, it’s everything customers typically do with data in one place: replicating, cleansing, merging, and integrating data.

Q: What is data integration?
A: Data integration pulls together data from different sources and cleanses it, merges it, and replicates it across any number of targets. For example, that could mean moving data from an Oracle database to another Oracle database, or from an Oracle database to a SQL Server database, or moving data from an Oracle database to a data warehouse.

Q: Why is data integration important?
A: In the past, data integration was important because 80% or 90% of the world’s transactional data resided in databases. Customers needed access to that data. Now, it is even more important because there is data that is not just inside traditional databases but also in unstructured areas like file systems, log files, digital channels, and data lakes, among other sources. Managing this data, which is outside the relational databases, is becoming tougher for customers. Data now resides not just on premise but also in the cloud, so customers need a single set of tools to manage traditional data, unstructured data, big data, data that lives on premise, and data that lives in the cloud.

Q: How is data integration different today than it was two years ago?
A: The first big change that has happened is the arrival of data outside the database. It could be coming from web-based properties – things like Pinterest or Instagram, for example. All of that is unstructured data and customers need to know about it. Second, customers are now dealing with data both on premise and in the cloud and need to figure out how to integrate data from more sources. Third, in the past, data was handled “after the fact.” For example, you purchased something and people wanted to see that data but they may want to see it a day, a week, or even a month later. There’s a massive compression in time for everything we do today – everything happens instantly. Everything is changing because everything that used to happen in batch now must also happen in real-time.

My conversation with Nishit served as an introduction to the complex world of data integration.
This is an incredibly multi-faceted and quickly evolving field. Customers’ data integration needs have experienced an overhaul in the last few years and Oracle is taking a bold approach in meeting those new needs while also anticipating the future. This conversation left me with a curiosity to dive deeper on data integration topics that I will explore in further interviews in this series. You can learn more about Oracle Data Integration here. For a deep dive into data integration, join us on December 7 at 9am PST for a webcast with Denis Gray, Senior Director of Data Integration Technology, and Madhu Nair, Director of Marketing, Data Integration. Register now!

The recent launch of Oracle’s Data Integration Platform Cloud has sparked many valuable conversations about data integration and its role in today’s integration landscape. I recently had a chance to...

Unveiling Data Integration Platform Cloud

We are pleased to announce Oracle Data Integration Platform Cloud (DIPC), a cloud-based platform for data transformation, integration, replication and governance. DIPC provides seamless batch and real-time data integration among cloud and on-premises data environments, maintaining data consistency with fault tolerance and resiliency. DIPC brings together the best-of-breed Oracle Data Integration products: Oracle GoldenGate, Oracle Data Integrator and Oracle Enterprise Data Quality, within one unified cloud platform. DIPC enables customers to achieve their goals of maximizing value from their data, enforcing the mantra: “Data is the new Capital”. DIPC's unified platform transforms, replicates, governs and monitors data at any scale, all while being faster, easier and smarter than any other data platform. Unlike other data integration tools, DIPC includes data governance as a key pillar of its core foundation. As such, data stewards can ensure that their enterprise data can be trusted by using DIPC to profile and audit data, enforcing data integrity. They can easily create data quality dashboards, as well as set up policies to enforce business rules. Oracle Data Integration Platform Cloud provides a comprehensive cloud-based solution for all data integration and data governance needs! Learn more HERE! Can you tell we are excited? Data Integration Platform Cloud was announced during Oracle OpenWorld; read the press article: Oracle Cloud Platform Innovates to Power Big Data at Scale. Take a look at a recent article from DBTA as well: Oracle Announces Comprehensive Data Integration Platform Cloud. This is truly a unified Data Integration Platform built for Cloud! Check out the eBook & data sheet. Oracle Data Integration Platform Cloud (DIPC) simplifies and accelerates the delivery of your data integration projects by seamlessly working with on-premises and cloud data sources and supporting data of any shape or format. DIPC is easy to configure and use through simplified provisioning, management and administration features and functionality. DIPC is a unified platform, built on Oracle GoldenGate, Oracle Data Integrator and Oracle Enterprise Data Quality. This comprehensive approach in the Cloud provides breakthrough efficiency for IT as well as LOB users. Native integration with Oracle Platform as a Service (PaaS) offerings such as Oracle Database Cloud Service, Oracle Database Exadata Cloud Service, Oracle Big Data Cloud Service and Oracle Java Cloud Service provides an expanded ecosystem. With Oracle Data Integration Platform Cloud, you can easily and quickly:
• Access and manipulate hundreds of data sources, anywhere, in any format
• Eliminate downtime in the cloud as well as on-premises, ensuring data availability
• Perform cloud onboarding
• Extract, load, and transform (ELT) data at scale and speed in the cloud as well as on-premises
• Replicate selected data sources
• Trust your data by maintaining and governing data quality with prebuilt processes
• Simplify IT and provide self-service for the business
• Be more agile and react faster than your competition
Want to take a closer look? Click to view the video on integration, transformation, streaming, analytics and data quality management from a single platform: Oracle Data Integration Platform Cloud. There are many solutions we help address – which is most important to you?

We are pleased to announce Oracle Data Integration Platform Cloud (DIPC), a cloud-based platform for data transformation, integration, replication and governance.  DIPC provides seamless batch and...

What is new in Oracle GoldenGate Studio 12.2.1.3.0?

Just recently we released the new Oracle GoldenGate Studio 12.2.1.3.0 version. I am going to write about the new features provided in this release: User-Interface Scalability, Reverse Engineering, and Heterogeneity.

User-Interface Scalability helps you design highly complex replication environments with the intuitive design tools provided in GoldenGate Studio. Complex deployment diagrams with hundreds of nodes are now easy to navigate, view, edit, and build using the new scalable user interface. Earlier, all the nodes and databases were shown on a single deployment diagram, which was difficult to follow and edit if you had hundreds of such nodes. In your replication environment you have many replication paths that drive your data from point A to point B, and you mostly work on a single replication path at any point in time rather than on all of them at once. Your focus is the current replication path that you are interested in viewing, editing, or navigating. Hence, with this new design you can select an individual replication path from the "Combo Box" shown in the figure below and work on it. All your replication paths are shown in the combo box; in a future release you will also be able to search the combo box items. You will always see a uni-directional replication path on the screen, depending upon what you have selected in the combo box. However, if you have designed a bi-directional topology and you select the concerned replication path in the combo box, you will see the bi-directional replication path diagram on the screen.

Reverse Engineering is a new feature that allows you to recreate your existing replication environment in GoldenGate Studio. You can use this feature if you have a current GoldenGate environment and want to quickly extend your design and deployment to other GoldenGate instances. You can upload your existing extract, replicat, mgr, and GLOBALS parameter files as shown in the figure below. GoldenGate Studio will then populate all the parameters in the properties inspector. From there, you can extend your design to other OGG and database instances as per your business needs.

Solution Templates and Heterogeneity includes new solution templates for consolidation and distribution, and heterogeneous database support for IBM DB2 LUW.

You can refer to the GoldenGate Studio 12.2.1.3 Data Sheet for brief information about Studio, the new features, and benefits. Please refer to the GoldenGate OTN Page and the GoldenGate Studio User Guide for more information. Feel free to reach out to me with your queries by posting in this blog.

Just recently we released the new Oracle GoldenGate Studio 12.2.1.3.0 version. I am going to write about new features User-interface Scalability, Reverse Engineering, and Heterogeneity provided in this...

#OOW17 was the #bestoneyet!

Author: Rimi S. Bewtra, Sr. Director, Oracle Cloud Business Group … here are just a few of my highlights! WOW … I am not even sure where to begin … this Oracle Open World was AWESOME!!!!! And if by chance you missed it then catch one or more of the replays: (Larry Ellison, Mark Hurd, Thomas Kurian, Dave Donatelli). This year was all about Innovation – across areas like AI, Chatbots, App and Data Integration, APIs, Robotic Process Automation, IoT and Content and Experience Management … and a few other things like Blockchain, Autonomous Database and Security and Management in the Cloud also got some attention, but I will keep my focus and highlights on the areas I manage and am most familiar with and let my colleagues share their own highlights. So much going on this year; here are just a few of my highlights! No matter where you went, you probably heard the buzz and hopefully caught one or more of our sessions on: #AI, #APIs, #BigData, #Chatbots, Content & #ExperienceMgmt, Integration (App and Data), #IoT, and #RoboticProcessAutomation. Leaping off of our global Chatbots launch, our Intelligent Bots were everywhere! Across all major keynotes the attention was on AI-powered Chatbots as the new engagement channel to enhance customer and employee experiences. There were press releases on Oracle’s AI-powered Intelligent Bots and strategic partnerships with both Chatbox and Slack:
Oracle Introduces AI-Powered Intelligent Bots to Help Enterprises Engage Customers and Employees
Oracle and Chatbox Collaborate to Bring Instant Apps to AI-powered Oracle Intelligent Chatbots
Reuters: Slack locks down Oracle partnership targeting enterprises
CNBC: Slack is partnering with Oracle to offer new in-app business bots
ComputerWorld: Slack and Oracle move to collaborate on business app chatbots
You are going to continue to hear a lot more, so if you have not yet taken 30 minutes to watch our Launch Webcast and visit www.oracle.com/bots – it is well worth the time. More than anything #OOW17 was about innovation, sharing our customer successes and Oracle’s strategy and vision! I was inspired by the kids from Design Tech High School – these students are teaching Silicon Valley a few things. Clearly there is a lot to look forward to and this next generation of entrepreneurs has already started to lead the way. The pace of innovation is not slowing down. But one thing is certain: with Oracle Cloud, we are committed to supporting our customers throughout their Cloud journey, no matter where they are. Our Cloud Platform innovations:
Oracle Expands Open and Integrated Cloud Platform with Innovative Technologies
Oracle Cloud Platform Innovates to Power Big Data at Scale
More than anything, across all areas it was our customers and partners that showcased the power of the Oracle Cloud Platform. Organizations, our customers, big and small and across industries, like Australia eBay, Exelon, Financial Group, CoreLogic, National Pharmacies, Orb Financial, Paysafe, Sinclair, Subaru of America, The Factory, Trek Bicycles, Trunk Club, Turning Point and so many others, shared their stories about how they have embraced the Oracle Cloud to lead their industry and embark on their own digital transformation journeys. Customers shared how they are leveraging Oracle Integration Cloud to connect their SaaS and on-premises applications and orchestrate end-to-end process automation, leveraging 100+ out-of-the-box cloud adapters and tooling for visual application development.
With Oracle Data Integration Platform Cloud, organizations are able to easily integrate new sources of data in any format to help eliminate costly downtime, accelerate data integration for analytics or big data, and improve data governance. And whether you were at one of our largest Developer-focused Oracle Code events, which drew 1500+ attendees, or one of our standing-room-only general and conference sessions, you probably got a glimpse of our most complete API management cloud solution to design, govern, manage, analyze, monetize, and secure APIs in a true hybrid deployment. Oracle’s API Platform Cloud now includes Apiary and provides developers the ability to prototype, test, document and manage their APIs with industry standards such as OpenAPI and API Blueprint. Now I could go on and on, but let’s save something for future blogs -- before I end I must call out our partners. They drove the customer successes by helping our customers embark on their journeys – here are just a few of the ones that I had the pleasure of working with -- Auraplayer, Avio, Cognizant, Fishbowl, Rubicon Red, SunetraTech, Sofbang, TetraTech along with of course Deloitte, Intel, PWC, Tata Consultancy Services, Accenture, Fujitsu, Infosys, and Wipro. Hopefully all of you had a chance to experience #OOW17, the #bestoneyet! Check out the Keynote replays (Larry Ellison, Mark Hurd, Thomas Kurian, Dave Donatelli) and get ready to hear much more from Oracle – this year’s #OOW17 was #notwhatyouexpected from Oracle. With help from our customers and partners we were able to showcase how technology is transforming companies and industries – we were able to Explore Tomorrow, Today! P.S. Mark your calendars, #OOW18 promises to be #evenbetter!

Author: Rimi S. Bewtra, Sr. Director, Oracle Cloud Business Group … here are just a few of my highlights! WOW … I am not even sure where to begin … this Oracle Open World was AWESOME!!!!! And if by...

Bring it All Together: Integrate and Automate … More Important than Ever Before!

Posting for Original Author: Rimi Bewtra, Sr. Director, Oracle Cloud Business Group. Let’s take a moment and think about what you did this morning … Did you check your email, log into your employee portal, review a document, a PowerPoint, or a video? Chances are you were reviewing data that was sitting within an application – much of that data was part of one or more backend systems, social apps, or enterprise apps hosted in the cloud, on-premises somewhere, or on 3rd party systems. Today’s economy is based on information – data is the single biggest asset that companies own, share, enrich, manage, and govern. Yet data is also very hard to deal with – consider this: 72% of big data projects have issues with data integration reliability, and typical data outages last 86 minutes, totaling an average cost of $690,200. Any wonder then that companies spend $400B every year just connecting systems? So whether you are talking about #AI, #Chatbots, #BlockChain or the #Digital Economy, what makes all of this come together is your ability to integrate and automate quickly and efficiently – integrate and automate your data, integrate and automate your apps, and integrate and automate your devices. At #OOW17, here are 5 sessions that will help you SEE, LEARN and EXPERIENCE how Oracle Cloud Platform for Integration is bringing them all together:
Oracle Integration, API, and Process Strategy – Monday, Oct 02, 11:00 a.m. - 11:45 a.m. | Moscone West - Room 3005
Oracle Data Integration Platform Cloud Strategy and Roadmap – Monday, Oct 02, 12:15 p.m. - 1:00 p.m. | Moscone West - Room 3024
Oracle API Platform Cloud Service: Roadmap, Vision, and Demo – Tuesday, Oct 03, 11:30 a.m. - 12:15 p.m. | Moscone West - Room 3005
Oracle GoldenGate Product Update and Strategy – Tuesday, Oct 03, 5:45 p.m. - 6:30 p.m. | Moscone West - Room 3003
Differentiate with SaaS Applications Using Rapid Process Automation – Wednesday, Oct 04, 2:00 p.m. - 2:45 p.m. | Moscone West - Room 3005
Hope I see you at #OOW17 - this year is going to be better than years past ... and I am not just saying that because I happen to head up product marketing for some of Oracle's coolest innovations; but seriously, we have a lot of smart developers and engineers, they have been busy, and we are ready to show off! See you soon.

Posting for Original Author: Rimi Bewtra, Sr. Director, Oracle Cloud Business Group Let’s take a moment and think about what you did this morning … Did you check your email, log into your employee...

OOW17 - A Data Integration Focus

OOW17 is just around the corner. It is always challenging to pick from the dazzling array of sessions spanning product releases, industry updates, hands-on labs and other offerings at the conference. After all, it is a deluge of information. If you are focused on Data Integration, here is a complete list of data integration sessions on offer at this OpenWorld. To help you narrow down the list further, here are 5 key sessions that you might want to catch when at the conference.

Oracle Data Integration Platform Cloud Strategy and Roadmap [CON6646]
Monday, Oct 02, 12:15 p.m. - 1:00 p.m. | Moscone West - Room 3024
Jeff Pollock, Vice President of Product, Oracle; Christian Cachee, VP Data Management & Analytics, Paysafe Group PLC; Connie Yang, Principal MTS, Software Engineer/Architect, eBay Inc
This session covers the range of solutions in Oracle Data Integration Platform Cloud. Attendees get an overview and roadmap, including Oracle Data Integrator, Oracle GoldenGate, Oracle Metadata Management, and Oracle Enterprise Data Quality. The session also covers how each solution plays a role in the important cloud and big data trends.

Oracle Data Integration Platform Cloud Deep Dive [CON6651]
Monday, Oct 02, 5:45 p.m. - 6:30 p.m. | Marriott Marquis (Golden Gate Level) - Golden Gate C1/C2
Ashish Bokil, Vice President Analytics & Co-Founder, BIAS Corporation; Denis Gray, Oracle
The rapid adoption of enterprise cloud-based solutions brings with it a new set of challenges. Data integration is one of the greatest challenges of any enterprise cloud-based solution. As customers ascend more and more of their enterprise applications to the cloud, they realize that a cloud-based enterprise data integration platform is key to their cloud success. Join this deep dive and see a live demo of the power and simplicity of Oracle Data Integration Platform Cloud led by Oracle Product Management. See how Oracle Data Integration Platform Cloud simplifies the end-to-end creation and execution of the historically arduous tasks of instantiating, loading, and synchronizing a cloud database from an on-premises database.

Oracle Data Integrator Product Update and Strategy [CON6654]
Tuesday, Oct 03, 11:30 a.m. - 12:15 p.m. | Marriott Marquis (Yerba Buena Level) - Nob Hill C/D
Julien Testut, Senior Principal Product Manager, Oracle; Gaurav Singh, Data Warehouse and Big Data Solution Architect, Energy Australia; Adrian Mathys, Integration Architect, Helsana Versicherungen AG
This session showcases Oracle Data Integrator, Oracle’s strategic product for batch data integration via ELT on-premises or in the cloud. Oracle’s Product Management provides a product overview and discusses current and future product strategy. Learn about the latest big data capabilities with improved support for big data technologies including Spark and Spark Streaming, as well as enhancements around cloud technologies and deployments, lifecycle management, and a preview of the future product roadmap. In addition, hear from a customer on how Oracle Data Integrator is being used in mission-critical data integration processes.

Oracle GoldenGate Product Update and Strategy [CON6897]
Tuesday, Oct 03, 5:45 p.m. - 6:30 p.m. | Moscone West - Room 3003
Chai Pydimukkala, Senior Director, Oracle; Sai Devabhaktuni, VP Core Data Platform, PayPal; Bobby Curtis, Oracle
Oracle GoldenGate is the market leader for data replication and real-time data integration. Customers have leveraged Oracle GoldenGate to solve use cases ranging from active-active replication and real-time data warehouses to big data integration and migration. In this session, Oracle Product Management discusses the key new features of Oracle GoldenGate including the overall platform, heterogeneity, and big data and cloud-related features. Get a glimpse into the future product strategy and roadmap and learn what’s new in Oracle GoldenGate. This session also includes a real-world case study from a leading customer.

Oracle Data Integration Platform Cloud, Governance [CON6652]
Wednesday, Oct 04, 5:30 p.m. - 6:15 p.m. | Moscone West - Room 3024
Mike Matthews, Oracle; Neha Kaptan, Enterprise Data Governance Leader, Cummins Inc
Understand Oracle’s strategic direction to support chief data officers and others responsible for enterprise data management in their journey to treat data as a capital asset. See how establishing trust in business-critical data can ensure the success of key initiatives such as migrating applications to the cloud, analytics, and master data management. In this session learn how Oracle’s data integration platform is built for effective and scalable data governance, powered by the market-leading data quality tool for business users, Oracle Enterprise Data Quality. See brand new data governance and advanced data stewardship features, such as the data catalog, business glossary and business rules, data quality monitoring, issue management, and data remediation.

OOW17 is just around the corner. It is always challenging to pick from the dazzling array of sessions spanning product releases, industry updates, hands-on labs and other offerings when at the...

Data Integration

Oracle Metadata Management 12.2.1.2.0 is Now Available

Oracle Metadata Management harvests and catalogs metadata from virtually any metadata provider, including relational, Hadoop, ETL, BI, data modeling, and many more, to allow interactive searching and browsing of the metadata as well as providing data lineage, impact analysis, semantic definition and semantic usage analysis for any metadata asset. Oracle Metadata Management is vital in solving a wide variety of critical business and technical challenges around data governance, transparency, and overall data control. Would you want to know how those report numbers are calculated, or how any particular change made to any data element affects downstream tasks and reporting? Oracle Metadata Management answers these pressing questions for customers in a lightweight browser-based interface. Oracle Metadata Management 12.2.1.2.0 includes a number of key updates that revolve around change detection, model comparison and merge, while continuing to enhance existing harvesting bridges as well as adding new ones. For those of you already working with Oracle Metadata Management, here is a quick view into some of the new functionality we are excited about:
The Business Explorer UI becomes even more beneficial and helpful as it incorporates more functionality (such as Glossary workflow). Business users will be pleased!
Custom Attributes are now applicable to all models. This is additionally convenient for cataloging everything by crowdsourcing knowledge available in an enterprise.
A first for Oracle Metadata Management: REST APIs. Check out the online demo available with the tool for some insight.
More harvesting bridges, expanding choice and opportunity within Big Data (refer to the overall new features document linked below, and also to the certification matrix).
Lastly, remote harvesting bridges can now run over the web. Thinking about transitioning to the Cloud? This is handy!
For more information, please view the Oracle Enterprise Metadata Management 12c New Features Overview. Enjoy!

Oracle Metadata Management harvests and catalogs metadata from virtually any metadata provider, including relational, Hadoop, ETL, BI, data modeling, and many more to allows for interactive searching...

Data Integration

Release Announcement for Oracle GoldenGate 12.3

What’s New in Oracle GoldenGate 12.3
Oracle is pleased to deliver the first Microservices-enabled, real-time streaming and replication platform to simplify, manage, and monitor complex deployments in private, public and hybrid environments. Oracle GoldenGate 12c Microservices Architecture empowers customers to design, build, embed, and automate all aspects of their replication environment on premise and in the cloud. Customers can now interact with their data and replication architecture from anywhere! Please see below for more details.

GoldenGate 12.3 Platform Features – All Platforms

For the Oracle Database

Microservices Architecture
A new services-based architecture simplifies large-scale and cloud deployments. This architecture enables remote and secure configuration, administration, and monitoring capabilities using comprehensive RESTful interfaces and an embedded HTML5-based UI. It enables applications to embed, automate, and orchestrate GoldenGate across the enterprise.

Parallel Replicat
A highly scalable apply engine for the Oracle database that can automatically parallelize the apply workload, taking into account dependencies between transactions. Parallel Replicat provides all the benefits of Integrated Replicat while performing the dependency computation and parallelism outside the database. It parallelizes the reading and mapping of trail files and provides the ability to apply large transactions quickly.

Automatic Conflict-Detection-Resolution (CDR) without application changes
Quickly enable active/active replication while ensuring consistent conflict-detection-resolution (CDR) without modifying application or database structure. With automatic CDR you can now configure and manage Oracle GoldenGate to automate conflict detection and resolution when it is configured in Oracle Database 12c Release 2 (12.2) and later.

Procedural Replication to enable simpler application migrations and upgrades
Procedural Replication in Oracle GoldenGate allows you to replicate Oracle-supplied PL/SQL procedures, avoiding the shipping and applying of the high-volume records usually generated by these operations.

Database Sharding
Oracle Sharding is a scalability and availability feature designed for OLTP applications that enables distribution and replication of data across a pool of Oracle databases that share no hardware or software. The pool of databases is presented to the application as a single logical database. Data redundancy is managed by Oracle GoldenGate via active-active replication that is automatically configured through the database engine.

For SQL Server

Introducing a new, CDC-based Capture
Oracle GoldenGate 12.3 introduces a new Change Data Capture based Extract, which offers new functional advantages over our existing transaction-log-based capture method. Benefits include the following:
Capture from SQL Server 2016
Remote Capture
Transparent Data Encryption (TDE) support
Certification to capture from an AlwaysOn Primary and/or readable synchronous Secondary database
With an increase in uptake of our customers running their application-critical databases in an AlwaysOn environment, Oracle GoldenGate 12.3 is the first version to certify capture from either the Primary database or a read-only synchronous Secondary database.

Delivery to SQL Server 2016

For DB2 z/OS

Remote Execution
The new remote execution includes both remote capture and delivery for DB2 z/OS.
Running Oracle GoldenGate off the z/OS server significantly reduces MIPS consumption and allows the support of AES encryption and credential store management.

For DB2 i

Support for IBM i 7.3
Oracle GoldenGate supports the latest DB2 for i platform.

For MySQL

DDL replication between MySQL DBs
With DDL replication between MySQL databases, there is no need to stop Oracle GoldenGate replication when there are DDL changes on the source database.

Join us in upcoming events: Stay up to date by visiting our Data Integration Blog for the latest news and articles. Save the Date! Oracle OpenWorld is in early October, and we look to have a Customer Summit on Thursday afternoon, October 5th, as we have had in the past. Mark your calendars and more to come on that soon.
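As a small illustration of what the new RESTful interfaces enable, here is a minimal Java sketch that queries the status of an Extract over HTTP. The host name, port, /services/v2/extracts path, process name (EXT1), and credentials are placeholder assumptions for a typical Microservices deployment rather than values taken from this announcement; check the GoldenGate REST API reference for the exact endpoints and ports in your release.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ExtractStatusCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder Administration Server URL and Extract name -- adjust to your deployment.
        String url = "https://ogg-admin-host:9011/services/v2/extracts/EXT1";
        // Placeholder credentials; assumes basic authentication and a trusted TLS certificate.
        String auth = Base64.getEncoder()
                .encodeToString("oggadmin:password".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        // The server responds with JSON describing the process configuration and status.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}

The same style of call works for any of the administrative operations exposed by the new architecture, which is what makes it practical to embed and orchestrate GoldenGate from your own applications.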

What’s New in Oracle GoldenGate 12.3 Oracle is pleased to deliver the first Microservices enabled, real-time streaming and replication platform to simplify, manage, and monitor complex deployment...

Data Integration

A Simple Guide to Hyperion Financial Management (HFM) Knowledge Modules for Oracle Data Integrator 12c

Author: Thejas B Shetty (Oracle SSI)

Introduction
Until Hyperion Financial Management (HFM) 11.1.2.3.x, it was possible to use the pre-packaged Oracle Data Integrator (ODI) Knowledge Modules (KMs) to integrate data/metadata with HFM. These knowledge modules used the HFM drivers (HFMDriver.dll) and Visual Basic (VB) APIs to connect and communicate with HFM. The APIs for HFM in the 11.1.2.4 version were completely rewritten in Java. The old VB APIs and HFMDriver.dll are obsolete and hence cannot be used to communicate with HFM 11.1.2.4. Oracle has not released compatible HFM Knowledge Modules for the 11.1.2.4 version using the latest Java API and now recommends alternative methods to integrate with HFM (tools like FDMEE, etc.). Many customers who have extensively used ODI in the past to integrate with HFM would still want to use ODI with HFM 11.1.2.4. Hence, I have recreated the ODI KMs for the 11.1.2.4 version using the HFM Java API in ODI 12c. These KMs are presently not officially supported by Oracle, and hence no SRs can be raised for them. However, by sharing them with a wider community, I encourage people to use them, modify them, and contribute their valuable input/feedback. You can find the KMs in the ODI Exchange within ODI Studio as well as on this website: link. These KMs have almost the same functionality and options as the previous KMs; however, new capabilities have been added to make the integration process simpler yet robust.

Pre-Requisites to use the Knowledge Modules
Presently, the KM code is written in such a way that it will work only if the ODI Agent (or the Local Agent/No Agent of ODI Studio) is physically on the same machine where HFM is installed and configured. It may or may not work if the ODI agent is located on a different physical machine; this has not been tested as of today. The KMs will work if HFM is installed on Exalytics; however, the ODI agent should also be installed on the same Exalytics host. In cases where HFM is clustered (load balanced) and the ODI agent is installed on one of the nodes of the HFM cluster, it is not guaranteed that the KMs will work. This has not been tested as of today.

Adding Drivers to the ODI Agent
Standalone Agent / Standalone Collocated Agent: Set the environment variable ODI_ADDITIONAL_CLASSPATH to locate the additional jars before starting the agent. The 3 jar files shown below are required for the HFM Knowledge Modules to work:
ODI_ADDITIONAL_CLASSPATH=D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_j2se.jar:D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_thrift.jar:D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_hfm_server.jar
Alternatively, do one of the following:
Copy the 3 additional libraries to the DOMAIN_HOME/lib directory and restart the agent. The ODI standalone and standalone collocated agents will automatically add these libraries to the agent’s classpath.
Edit the DOMAIN_HOME/config/fmwconfig/components/ODI/Agent Name/bin/instance.sh/cmd script to add the libraries to the ODI_POST_CLASSPATH variable.
The 3 additional libraries required are listed below. These files are found under the subdirectory of EPM_MIDDLEWARE_HOME; a quick way to sanity-check these paths appears at the end of this post.
“D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_j2se.jar”
“D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_thrift.jar”
“D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_hfm_server.jar”
ODI Studio (Local Agent / No Agent): On Windows operating systems, place the jar files in: \Users\<yourUserName>\AppData\Roaming\odi\oracledi\userlib. Alternatively, instead of copying the 3 jar files into the userlib folder, edit the additional_path.txt file located inside the userlib folder and include the paths of the 3 jar files as shown below. On Linux/Unix operating systems, place the jar files in: $ODI_HOME/.odi/oracledi/userlib. Alternatively, instead of copying the 3 jar files into the userlib folder, edit the additional_path.txt file located inside the userlib folder and include the paths of the 3 jar files as shown above. Close and re-open ODI Studio.

Additional Setup before using the KMs
Copy the reg.properties file: On the server where HFM (and the ODI Agent) is installed, copy the $ORACLE_MIDDLEWARE/user_projects/config/foundation/11.1.2.0/reg.properties file to the $ORACLE_MIDDLEWARE/user_projects/epmsystemX/config/foundation/11.1.2.0 folder. Modify epmsystemX as per your environment in the above path.

Setting up the Topology for HFM Integration
Under Topology -> Technology, double-click Hyperion Financial Management. Update the Naming Rules as shown in the screenshot below to represent EPMOracleInstance. Update the Topology as below for the HFM Data Server by entering the HFM Cluster name in the Cluster (Data Server) field. Update the Shared Services username/password used for connecting to the HFM application. Update the physical schema for the HFM Data Server to represent the Application and EPMOracleInstance of the HFM application/server.

Using the Reverse Engineering KM (RKM)
Use the RKM Hyperion Financial Management PS4 Knowledge Module to reverse engineer the HFM Datastores into the Models. By default, 2 Datastores are fetched from HFM: EnumMemberList is used for extracting members (without properties) from HFM; HFMData is used for loading/extracting data into HFM. I will post an updated RKM later to include more Datastores to integrate multi-period data, metadata (with properties), Journals, etc.

Using LKM Hyperion Financial Management PS4 Members To SQL
Use the EnumMemberList Datastore as a source and connect it with an RDBMS Datastore on the right-hand side. To extract members into a file, first extract into an RDBMS staging table and then use IKM SQL to File to transfer the contents into a text file. Click on the EnumMemberList_AP Access Point on the Physical tab and choose the LKM. On the LKM options, choose:
Dimension Name: Name of the dimension for which members have to be extracted.
Member List Name: Name of the member list. E.g. [Base], [Hierarchy], [Custom List], etc.
Top Member: Specifies the top member. Members who are ancestors of this specified member are not extracted. (Optional)

Using IKM SQL to Hyperion Financial Management PS4 Data
Use any RDBMS Datastore that contains data as a source and connect it with the HFMData Datastore on the right-hand side. To load data from a file, first use LKM File to SQL to transfer the contents of the file into an RDBMS staging table. The source data store need not be in the same format as that of HFM and can have different column names / column count. The IKM will apply the mappings defined before loading into HFM. Click on the HFMData in Target_Group on the Physical tab and choose the IKM.
On the IKM options, choose:
Import Mode: Specifies one of the following load options:
DATALOAD_MERGE - to overwrite data in the application
DATALOAD_REPLACE - to replace data in the application
DATALOAD_ACCUMULATE - to add data to existing data in the application
DATALOAD_REPLACEBYSECURITY - Replace By Security
Accumulate within File: Flag to indicate whether to accumulate values in the load data. If set to Yes, it indicates that multiple values for the same cells in the load data must be added before loading them into the application.
File Contains Share Data: Flag to indicate whether the data contains ownership data. If set to Yes, it indicates that the load data contains ownership data, such as shares owned.
Consolidate After Load: Flag to indicate whether a consolidation should be performed after the data load. If set to Yes, a consolidation will be performed after the data load with the parameters provided by the CONSOLIDATE_PARAMETERS option.
Consolidate Parameters: Specifies the parameters for the consolidate operation as comma-separated values in the order Scenario, Year, Period, Parent.Entity and Type (represented by a letter as given below). Consolidation types and the letters that represent them are:
I = Consolidate
D = Consolidate All with Data
A = Consolidate All
C = Calculate Contribution
F = Force Calculate Contribution
E.g. Actual,1999,2,EastRegion.EastSales,A
Log Enabled: Flag to indicate if logging should be performed during the load process. If set to Yes, logging is done to the file specified by the LOG_FILE_NAME option.
Log File Name: The fully qualified name of the file into which logging is to be done.

Conclusion
That concludes the first part and should get you up and running with the ODI Knowledge Modules for HFM 11.1.2.4. In the next update, I will include more Knowledge Modules with features to load/extract Metadata/Journals. For any troubleshooting/queries/suggestions, do not hesitate to contact me: thejas.b.shetty@oracle.com
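As promised in the driver setup section above, here is a quick sanity check. This throwaway Java snippet simply verifies that the three EPM jars exist at the path used in the examples (the D:\ location is the example path from this post; adjust it to your own EPM_MIDDLEWARE_HOME). It is only an illustration, not part of the Knowledge Modules themselves.

import java.nio.file.Files;
import java.nio.file.Path;

public class CheckHfmJars {
    public static void main(String[] args) {
        // Example jlib location from this post; change it to match your EPM install.
        String jlib = "D:\\Oracle\\Middleware\\EPMSystem11R1\\common\\jlib\\11.1.2.0\\";
        String[] jars = {"epm_j2se.jar", "epm_thrift.jar", "epm_hfm_server.jar"};
        for (String jar : jars) {
            Path path = Path.of(jlib + jar);
            // Print one line per jar so a missing file is obvious before you start the agent.
            System.out.println(path + (Files.exists(path) ? "  -> found" : "  -> MISSING"));
        }
    }
}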

Author: Thejas B Shetty (Oracle SSI) Introduction Until Hyperion Financial Management (HFM) 11.1.2.3.x version, it was possible to use the pre-packaged Oracle Data Integrator (ODI) Knowledge Modules...

Data Integration

Looking for Cutting-Edge Data Integration & Governance Solutions: 2017 Cloud Platform Excellence Awards

It is nomination time!!!  This year's Oracle Excellence Awards: Oracle Cloud Platform Innovation will honor customers and partners who are on the cutting-edge, creatively and innovatively using various products across Oracle Cloud Platform to deliver unique business value.  Do you think your organization is unique and innovative and is Modernizing through Oracle Data Integration & Governance? We'd love to hear from you!  Please submit today in the Modernizing through Oracle Data Integration & Governance category. The deadline for the nomination is July 10, 2017.  Win a free pass to Oracle OpenWorld 2017!! Let’s reminisce a little…  For details on the 2016 Data Integration and Governance Winners:  Cummins, Inc., Helsana Gruppe, and Talas, check out this blog post. For details on the 2015 Big Data, Business Analytics and Data Integration Winners:  Amazon.com, CaixaBank, and Serta Simmons Bedding, check out this blog post. For details on the 2014 Data Integration Winners:  NET Serviços and Griffith University, check out this blog post. For details on the 2013 Data Integration Winners:  Royal Bank of Scotland’s Market and International Banking and The Yalumba Wine Company, check out this blog post. For details on the 2012 Data Integration Winners:  Raymond James Financial and Morrisons Supermarkets, check out this blog post. We hope to honor you! Click here to submit your nomination today! And just a reminder:  the deadline to submit a nomination is 5pm Pacific Time on July 10, 2017.

It is nomination time!!!  This year's Oracle Excellence Awards: Oracle Cloud Platform Innovation will honor customers and partners who are on the cutting-edge, creatively and innovatively using...

Oracle Data Integration Family Solutions & News

Oracle Data Integrator Cloud Service - In the News

Oracle Data Integration recently announced the availability of Oracle Data Integrator Cloud Service (ODICS). In case you missed the earlier announcement, you can check the news out in this entry. Following the announcement, ODICS has received validation from the data integration market. ODICS brings together disparate data for enterprises to power cloud-based data warehousing and analytics. Below is some of the news coverage showing how the market and customers are viewing the latest cloud offering from Oracle:
Oracle Data Integrator Cloud launched for data integration
Oracle helps customers integrate data in the cloud
Oracle expands data integration cloud offerings
Oracle connects the data swamps
Oracle launches cloud service
Oracle Launches Oracle Data Integrator Cloud
Oracle Beefs Up IoT Cloud
Oracle Launches 4 New IoT Cloud Applications; Also Releases Data Integrator Cloud for Enhanced Data Integration
Oracle Launches Data Integrator Cloud Service
Oracle rolls out cloud data integration system for real-time analytics
New Oracle Cloud Data Solution To Boost Real-Time Analytics
Oracle launches new service that can integrate disparate data to drive real-time analytics
Oracle is red, violets are blue, we hope you'll integrate biz analytics in our cloud soon
Oracle looks to the cloud for easier enterprise data integration
Oracle Adds Data Integrator Cloud Service to Cloud Platform Portfolio
Oracle Launches Data Integrator Cloud
Oracle Fleshes Out Cloud Data Strategy
Oracle, Ivanti Announce New Data Integration Cloud Services
Oracle Expands Cloud Platform Capabilities
Cloud Services Can Boost Productivity, But Are They Safer?
Oracle Launches Data Integrator Cloud Service
Oracle Launches Data Integrator Cloud For Easier Enterprise Data Wrangling
BRIEF-Oracle Corp expands Oracle cloud platform's data integration offerings with launch of Oracle data integrator cloud
Oracle launches data integration cloud service

GoldenGate Solutions and News

OGG Custom Adapters: How to include a unique identifier for every record in a custom adapter?

Add redo attributes to be captured by the primary Oracle Extract process so that additional user tokens are written to the trail for each captured record:

a) Add the user tokens. Use @GETENV() in the Extract parameter file to retrieve the desired redo attributes, injecting the combination of the redo seqno and rba of the actual source record into the trail file. The redo seqno and rba pair is unique for each source record.  Note: for a RAC source system, one additional attribute, the redo thread id, can also be added to achieve uniqueness across all RAC instances.

b) Add the tokens in the mapping specification of each captured table, specifically in the TOKENS clause.  Refer to the documentation, section 12.13.1 Defining Tokens: https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_datainteg.htm#GWUAD468.  For example:

TABLE src.table, TOKENS ( redoseq = @GETENV('RECORD', 'FILESEQNO'), redorba = @GETENV('RECORD', 'FILERBA'), redothread = @GETENV('TRANSACTION', 'REDOTHREAD') );

Once the trail for that table contains these extra attributes for each source DML record, the Big Data / Application Adapter component can retrieve them via the op.getToken(userTokenName) API call, and the custom implementation can do whatever it wants with them.
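For illustration only, here is a minimal Java sketch of how a custom adapter implementation might read those tokens back and build a unique record identifier. The SourceOperation interface below is a hypothetical stand-in for whatever operation type your adapter API exposes; only the getToken(name) call and the token names (redoseq, redorba, redothread) come from the steps above, so treat everything else as an assumption to be adapted to your handler.

// Hypothetical sketch: reading the user tokens inside a custom adapter.
// Only getToken(name) and the token names are taken from the post above;
// the surrounding types are illustrative placeholders, not real GoldenGate classes.
public class RedoIdentifierExample {

    // Stand-in for the adapter's operation object.
    public interface SourceOperation {
        String getToken(String userTokenName);
    }

    // Builds a unique identifier for a captured record from the redo tokens.
    public static String uniqueRecordId(SourceOperation op) {
        String redoSeq    = op.getToken("redoseq");    // @GETENV('RECORD', 'FILESEQNO')
        String redoRba    = op.getToken("redorba");    // @GETENV('RECORD', 'FILERBA')
        String redoThread = op.getToken("redothread"); // RAC only; may be null otherwise

        StringBuilder id = new StringBuilder();
        if (redoThread != null) {
            id.append(redoThread).append('-');         // prefix with thread id on RAC
        }
        return id.append(redoSeq).append('-').append(redoRba).toString();
    }
}

The seqno/rba pair (plus the thread id on RAC) is what makes the identifier unique per source record, exactly as described above.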

Oracle Data Integration Family Solutions & News

Introducing Oracle Data Integrator Cloud Service (ODICS)!

We are pleased to announce and welcome Oracle Data Integrator Cloud Service (ODICS)!  Read the press release:  Oracle Launches Cloud Service to Help Organizations Integrate Disparate Data and Drive Real-Time Analytics.

Overview

Oracle Data Integrator Cloud Service (ODICS) delivers high-performance data movement and transformation capabilities with its open and integrated E-LT architecture and extended support for Cloud and Big Data solutions. Oracle Data Integrator Cloud Service provides all of the functionality included in Oracle Data Integrator Enterprise Edition in a single heterogeneous Cloud Service integrated with the Oracle Public Cloud. Providing an easy-to-use user interface combined with a rich extensibility framework, Oracle Data Integrator Cloud Service improves productivity, reduces development costs, and lowers total cost of ownership among data-centric architectures. Oracle Data Integrator Cloud Service is fully integrated with Oracle Platform as a Service (PaaS) offerings such as Oracle Database Cloud Service, Oracle Database Exadata Cloud Service, and Oracle Big Data Cloud Service to put data and value at the center of the enterprise. Oracle Data Integrator Cloud Service is open and standards-based, so it can work with third-party systems as well as Oracle's solutions.

Cloud E-LT Architecture for High Performance

Oracle Data Integrator Cloud Service's E-LT architecture leverages disparate relational database management systems (RDBMS) or Big Data engines to process and transform the data. This approach optimizes performance and scalability and lowers overall solution costs. Instead of relying on a separate, conventional ETL transformation server, Oracle Data Integrator Cloud Service's E-LT architecture generates native code for disparate RDBMS or big data engines (SQL, HiveQL, or bulk loader scripts, for example). The E-LT architecture extracts data from the disparate sources, loads it into a target, and executes transformations using the power of the database or Hadoop. By leveraging existing databases and big data infrastructures, Oracle Data Integrator Cloud Service provides unparalleled efficiency and lower cost of ownership. By reducing network traffic and transforming data in the server containing the target data, the E-LT architecture delivers the highest possible performance for Cloud environments.

Heterogeneous Cloud Support

Oracle Data Integrator Cloud Service provides heterogeneous support for third-party platforms, data sources, data warehousing appliances, and Big Data systems. While Oracle Data Integrator Cloud Service leverages optimizations for Oracle Database and Big Data Cloud Services to perform E-LT data movement, transformation, data quality, and standardization operations, it is also fully optimized for mixed technologies, including sources, targets, and applications.

Knowledge Modules Provide Flexibility and Extensibility

Knowledge Modules are at the core of the Oracle Data Integrator Cloud Service architecture. They make all Oracle Data Integrator processes modular, flexible, and extensible. Knowledge Modules implement the actual data flows and define the templates for generating code across the multiple systems involved in each data integration process. Knowledge Modules are generic, because they allow data flows to be generated regardless of the transformation rules. At the same time, they are highly specific, because the code they generate and the integration strategy they implement are explicitly tuned for a given technology.

Oracle Data Integrator Cloud Service provides a comprehensive library of Knowledge Modules, which can be tailored to implement existing best practices, ranging from leveraging heterogeneous source and/or target systems, to methodologies for highest performance, to adhering to corporate standards, to specific vertical know-how. By helping companies capture and reuse technical expertise and best practices, Oracle Data Integrator Cloud Service's Knowledge Module framework reduces the cost of ownership. It also enables metadata-driven extensibility of product functionality to meet the most demanding data integration challenges. Oracle's Data Integration solutions provide continuous access to timely, trusted, and heterogeneous data across the enterprise to support both analytical and operational data integration. We look forward to hearing how you might use Oracle Data Integrator Cloud Service within your enterprise.

ODI Technical Feature Overviews

Git Versioning Support in Oracle Data Integrator (ODI) 12.2.1.2.6

Oracle Data Integrator (ODI) 12.2.1.2.6 now supports Git as an external Version Control System (VCS). You can now use either Apache Subversion or Git for source controlling ODI objects. Regardless of which technology is used, the user experience for an ODI user is the same for any versioning operation, and the underlying differences between the two systems are transparent to ODI users. This is consistent with the ODI core benefits of shielding users from learning underlying systems and providing a seamless experience across technologies.

In addition, numerous new features have been added to increase productivity and address various use cases. You can now have a consolidated view of, and manage, all out-of-sync versioned objects from a single screen. The ODI Populate Repository option is also enhanced to allow populating a repository from a tag so that you can restore object state from it. You can create versions for all dependent objects as well, along with versioning of the base object. There are options to regenerate scenarios during tag or deployment archive creation to ensure that the scenario corresponds to the current state of the corresponding object in VCS. The automatic merge process is made smarter to perform a three-way merge with change detection, which reduces conflicts during branch merges. In this article we are going to explore the version control related features and capabilities. I will cover the smart merge capabilities in a later article, so stay tuned for that.

Configuring Git

The administrator needs to first enable Git as the version control system and configure the repository connectivity. Selecting Git as the version control system enables all the Git-related configuration menu options. You can switch between the version control systems at any time, so an ODI repository previously configured with Subversion can be switched to Git-based versioning. ODI, however, does not migrate the version history during such a switch, so you need to migrate any such history directly through the tools provided by those systems.

Selecting Git for versioning enables the VCS settings options in the Studio menu. You need to perform all three settings configurations in the order of their appearance. The Edit Connection dialog configures Git connectivity details such as authentication type (protocol), URL, password, and so on. After connection setup, you can create and configure a local Git repository through the Clone Remote Repository operation. Then select the Git branch through the Configure menu option. Once successfully configured, you will notice that the Git indicator in the bottom right corner of ODI Studio turns from grey to green, indicating successful Git configuration. The indicator also displays the configured Git branch name – for example, the master branch in the screenshot below.

Managing Versions

All the Lifecycle Management functionalities that existed in previous ODI releases for creating and managing versions in Subversion are now available for Git as well. Some of these operations are as follows:

Adding one or more objects to VCS
Creating versions for modified objects that are out of sync with VCS
Viewing version history – hierarchical view and tabular view
Comparing the difference between two object versions in VCS, or comparing the VCS version with the repository object
Restoring an object version from VCS
Restoring a deleted object from VCS

Since these functionalities are the same as noted earlier, I am not going to cover their details here.
Instead, I will focus here on the new options provided in the latest release to make these more user friendly. If you are interested in details on the above-mentioned operations, please refer to my previous post Oracle Data Integrator 12.2.1 - Managing Versions in Apache Subversion.

New options while creating versions

There are a couple of useful options available when adding a new object to VCS or creating a new version of an already versioned object. These advanced options are available at the bottom of all the versioning dialogs. By default, they are not selected, which gives you the existing behavior from previous releases:

Include Dependencies
Regenerate and Version Scenarios

Include Dependencies

This option allows you to ensure that all the dependencies of an object are also versioned along with the object. For example, if you are versioning a mapping which uses two data sources that in turn depend upon some technology or logical schema, then using this option you can version all these objects in a single operation along with the mapping itself. If a dependent object is not yet versioned, it is added to Git or Subversion; if a dependent object is out of sync with the VCS, a new version is created. If the dependent object is already versioned and in sync, nothing is done for that object. This option is particularly useful in keeping the versioned objects in VCS consistent, which is key for continuous integration. It removes any chance of missing a version for a dependent object, so that the current copies of all the relevant objects are present in VCS.

Regenerate and Version Scenarios

This option takes care of regenerating the scenario before creating a version for it in VCS. If you also select Include Dependencies, then it regenerates a scenario for any of the selected objects and their dependent objects. This option is useful when you want to ensure that the scenarios present in your VCS correspond to the current copy of the corresponding object in VCS. Such a requirement can be crucial if you have an automated build and testing process which takes the scenarios from VCS and validates them.

Pending Changes

The newly introduced Pending Changes dialog allows you to manage objects that are out of sync from a single place. You can access it from the ODI Studio Team menu. The Pending Changes dialog shows the list of all the versioned objects from the ODI repository that are out of sync. If you are looking for a particular object, you can reach it directly through the search field. This dialog allows you to perform the following operations on the selected objects:

Create a version for the selected object(s); if the selected object has been deleted from the ODI repository, you can push the deletion to VCS.
Restore a deleted object from VCS. This option is applicable only when you select all the deleted objects.
Compare the highlighted object with the VCS version.

New options while creating a Tag

There are a couple of useful options provided during Tag creation allowing you to control this:

Option to add only the versioned objects to the tag. Creating a Full Tag with the versioned objects remains a very useful flexibility that allows you to push all the versioned objects along with dependencies to VCS in a single operation.
Regenerate and version scenarios, so that the scenarios referred to by a Tag always correspond to the relevant objects in the Tag.
New options while Populating Repository from VCS

The enhanced Populate ODI Repository from VCS dialog provides a number of new, flexible options:

Populate from branch or Tag: not only can you populate ODI repository contents from the currently configured branch, you can also restore the objects from a particular Tag created on the current branch.
You now have the flexibility to select either all the objects or a subset of objects to be populated from the selected Tag or branch. Such selective populating is useful if you want to break the process down into smaller chunks or if you are interested only in a selected subset of objects from the branch or Tag. However, this flexibility comes with a pitfall: you may miss importing some of the dependent objects, affecting the consistency of the repository objects. So this selective population process should be used with caution.
Deleting existing work repository objects from the ODI repository before populating objects from VCS. This is useful for ensuring that there are no remnants of the old work repository and that, after populating, the ODI work repository objects are in sync with the VCS contents.

SDK APIs for Continuous Integration

One of the needs for Continuous Integration is to automate the Tag creation and Deployment Archive build process. There are a couple of services available in the ODI SDK APIs that allow you to automate your entire build and testing process; a short illustrative sketch follows at the end of this post. VersionManagementService provides the necessary APIs to configure the VCS system, create Tags, and populate an ODI repository from the VCS contents of a Tag. DeploymentService provides APIs for creating a Deployment Archive and applying it to a target ODI repository.

Conclusion

The enhancements added in Oracle Data Integrator (ODI) 12.2.1.2.6 around Lifecycle Management capabilities provide broader support for the leading Version Control Systems, improve developer productivity, and address the need to automate the Continuous Integration and build process. Yet again ODI differentiates itself by providing Lifecycle Management capabilities not available in competing products.
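For illustration only, here is a minimal Java sketch of the kind of CI automation described above. The service names VersionManagementService and DeploymentService come from the post, but the interfaces, method names, and arguments shown here are hypothetical placeholders rather than the actual ODI SDK signatures; consult the ODI SDK Javadoc for the real API before building anything like this.

// Hypothetical sketch of a CI build step: create a tag, build a deployment
// archive from it, and apply the archive to a target repository. Method names
// and arguments are placeholders, not the real ODI SDK signatures.
public class OdiCiBuildSketch {

    // Stand-ins for the SDK services named in the post above.
    interface VersionManagementService {
        void createTag(String tagName, String comment);
    }

    interface DeploymentService {
        java.io.File createDeploymentArchive(String tagName, java.io.File outputDir);
        void applyDeploymentArchive(java.io.File archive, String targetWorkRepository);
    }

    static void nightlyBuild(VersionManagementService vcs, DeploymentService deploy) {
        String tag = "nightly-" + java.time.LocalDate.now();        // e.g. a dated tag name
        vcs.createTag(tag, "Automated nightly CI tag");             // tag the current VCS state
        java.io.File archive = deploy.createDeploymentArchive(      // build an archive from the tag
                tag, new java.io.File("/tmp/odi-archives"));
        deploy.applyDeploymentArchive(archive, "TEST_WORK_REPO");   // promote to a test repository
    }
}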

ODI Technical Feature Overviews

Using Loading Knowledge Modules for both On-Premise & Cloud Computing - Oracle Data Integrator (ODI) Best Practices

More from the A-Team... Thank you Benjamin! Are you curious about using Loading Knowledge Modules for both on-premise and cloud computing in Oracle Data Integrator (ODI)?  Benjamin has put together some best practices for selecting and using the Oracle Data Integrator (ODI) Loading Knowledge Modules (LKMs) both on-premise and in the cloud.  The Loading Knowledge Modules (LKMs) are one of seven categories of Knowledge Modules (KMs) found in ODI.  Other categories of KMs include: Reverse-Engineering, Check, Integration, Extract, Journalizing, and Service.  This particular article focuses on the selection and use of LKMs.  You can learn more about the other categories of KMs by going to Oracle Data Integrator (ODI) Knowledge Modules (KMs). LKMs are code templates in ODI that can perform data upload operations from on-premise data servers to cloud services, between cloud services, or between on-premise data servers.  ODI supports a variety of technologies such as SQL databases, Big Data, Files, Java Messaging Systems (JMSs), and many others.  Most of these technologies are now available both on-premise and as data cloud services.  For each of these technologies, a variety of LKMs are available.  For instance, ODI offers LKMs for SQL databases such as Oracle, Teradata, MySQL, and MS SQL Server, among others.  For Big Data, ODI offers LKMs for Spark, Hive, Sqoop, Kafka, and Pig, among others. Read Benjamin's post:  Oracle Data Integrator Best Practices: Using Loading Knowledge Modules on both On-Premises and Cloud Computing.

GoldenGate Solutions and News

GoldenGate for Big Data 12.3 is released!

Oracle GoldenGate for Big Data 12.3.0.1.0 was released on 15/Dec/2016.

New Major Features in this release:

Generic JDBC Targets - Amazon Redshift & IBM Netezza
Oracle GoldenGate for Big Data 12.3.0.1 can deliver to generic JDBC targets. The generic JDBC target interface has been tested and certified with JDBC interfaces from Oracle, SQL Server, Amazon Redshift, and Netezza. The JDBC handler supports standard Replicat features such as statement caching, REPERROR, and HANDLECOLLISIONS. The JDBC handler can also do some basic mapping of metadata changes between source and target, similar to other big data handlers.

NoSQL Targets – MongoDB and Cassandra
Oracle GoldenGate for Big Data can now natively write Logical Change Records (LCR) data to MongoDB and Cassandra in real time in their native formats. Operations such as insert, update, delete, and truncate can be handled. Additional features such as bulk write, asynchronous mode, and compressed updates are also possible, based on the respective applications.

Improved Performance
Oracle GoldenGate for Big Data can now benefit from the Coordinated Delivery feature for Replicat, which instantiates multiple Java adapter processes inside a single Replicat. This can improve data delivery performance by automatically scaling with multiple processes.

Latest Certifications
Apache Kafka 0.10.0.1, 0.10.1.0
Hortonworks Data Platform 2.5 (HDFS 2.7.3)
Cloudera Hadoop 5.8 (HDFS 2.6.0)
... and many more

More information on Oracle GoldenGate 12.3.0.1
Learn more about Oracle GoldenGate 12c
Download Oracle GoldenGate for Big Data 12.3.0.1
Documentation for Oracle GoldenGate for Big Data 12.3.0.1
Certification Matrix for Oracle GoldenGate for Big Data 12.3.0.1
Data sheet for Oracle GoldenGate for Big Data

GoldenGate Solutions and News

Oracle GoldenGate Studio 12.2.1.2.6 Now Available

I'm pleased to announce the new Oracle GoldenGate Studio 12.2.1.2.6 release. Oracle GoldenGate Studio enables you to design and deploy high-volume, real-time database replication by automatically handling table and column mappings, allowing drag-and-drop custom mappings, and generating best practice configurations from templates. It also provides context-sensitive help. In this release of Oracle GoldenGate Studio, we have further extended our capabilities in four major areas: Big Data, Instantiation, User Lifecycle Management, and Heterogeneity.

Big Data support includes design and deployment of Oracle GoldenGate artifacts for Big Data targets. It enables you to use Oracle GoldenGate for Big Data and the Application Adapter while you design and deploy solutions to Big Data targets. It also allows you to attach a Big Data environment-specific properties file, which is then deployed to the OGG instance along with the other Oracle GoldenGate artifacts.

Instantiation includes Oracle Data Pump (ODP) support with automatic coordination between Oracle GoldenGate CDC and ODP processes. It allows you to design and deploy an Oracle database environment through an easy-to-use interface where you can configure and monitor the ODP processes. Oracle GoldenGate Studio already supports the Oracle GoldenGate instantiation method; it now also enables you to use the Oracle Data Pump method for your Oracle database. You can use either instantiation method for all of your replication paths, and they can run simultaneously.

User Lifecycle Management and Heterogeneity includes better user management plus IBM DB2 z/OS and Teradata (delivery only) heterogeneous database support in Oracle GoldenGate Studio.

You can refer to the GoldenGate Studio 12.2.1.2 Data Sheet for brief information about Studio, new features, and benefits. More information is available on the GoldenGate OTN page and in the GoldenGate Studio User Guide. Hope to see you using the new Studio release. Feel free to reach out to me with your queries by posting on this blog or tweeting @nisharsoneji

ODI Solutions and News

Oracle Data Integrator (ODI) 12.2.1.2.6 Now Available

We are pleased to announce a new release of Oracle Data Integrator (ODI): 12.2.1.2.6. Oracle Data Integrator (ODI) provides a flexible and heterogeneous data integration platform for data processing that enables you to transform, enrich, and govern data for faster, more informed decision-making. In this release of Oracle Data Integrator, we have further extended our capabilities in four major areas: Big Data, Cloud, Lifecycle Management, and Developer Productivity. Cloud and Big Data remain key investment areas and ensure that Oracle Data Integrator will continue to accompany customers throughout their technological transformation. Please visit the following two links for more information on Oracle Data Integrator (ODI): the ODI OTN page and the O.com Data Integration page.

Big Data investments include Spark Streaming support, Apache Kafka and Apache Cassandra support, and enhanced support for Hadoop Complex Types and Storage Formats, in addition to enhancements to ODI's Big Data Configuration Wizard. Through its unique decoupling of the Logical and Physical design of Mappings, Oracle Data Integrator is the only Data Integration tool on the market giving developers the flexibility to design Mappings with generic business logic and then generate code for as many data processing technologies (Hive, Spark, Spark Streaming, etc.) as they want. This provides a great platform for the ever-changing and improving world of Big Data.

Cloud investments include RESTful service support: Oracle Data Integrator can now invoke RESTful services, and data chunking and pagination are supported for uploading or downloading larger payloads. Additionally, Business Intelligence Cloud Service (BICS) Knowledge Modules are now supported out of the box in Oracle Data Integrator. You can define Business Intelligence Cloud Service connectivity in Topology, reverse-engineer metadata, and load data into it just like any other target data server.

Lifecycle Management investments include Git support as an external version control system. Other improvements to the overall lifecycle management functionality have been made, such as enhanced merge capability with auto-merging of changes and simplified conflict resolution.

Developer Productivity investments include a superior Knowledge Module framework that helps maximize flexibility and minimize maintenance. You can now inherit steps from one Knowledge Module into another Knowledge Module and override steps, as in object-oriented programming languages. There are brand new template languages and syntaxes that provide greater control over the generated code.

For a full review of the new functionality, please view the What's New Whitepaper.

ODI Technical Feature Overviews

Integrating Database Cloud Service (DBCS), Big Data Preparation (BDPCS), and Business Intelligence Cloud Service (BICS) with Oracle Data Integrator (ODI)

Today, during our Data Integration webcast, Benjamin Perez-Goytia from our Oracle A-Team illustrated this exciting integration: Database Cloud Service (DBCS), Big Data Preparation (BDPCS), and Business Intelligence Cloud Service (BICS) with Oracle Data Integrator (ODI). You can view the recording here. Here are some of the steps, details, and highlights he went through during the demonstration:

· Uploading data from an on-premise Oracle database into DBCS using ODI. Data Pump was used in conjunction with secure copy (SCP). The Data Pump Knowledge Module (KM) had been modified to perform a secure copy command to move the on-premise Data Pump files into DBCS.
· Once data was loaded into DBCS (with Data Pump), ODI populated a small star schema with dimension and fact tables (on DBCS). This was done with the ODI agent on JCS.
· The fact table was then populated with Oracle Partition Exchange.
· An aggregate file from the fact table was written to disk (on JCS). ODI performed this operation with the agent on JCS.
· ODI then sent the file into Storage CS using the CLI. The Storage CS instance was the one used by BDP as a source.
· On BDP, a transform script was created to clean the file and publish it into BICS. A BDP policy was executed on BDP to publish the data into BICS.
· On BICS, a Visual Analyzer dashboard was created and shown to view the data.
· In addition, the BICS dashboard was refreshed to demonstrate how data changes each time a new partition is loaded into the fact table.

It was fun – and really interesting! Did you miss it?  View the webcast here. To view more of the recordings we have done, please go to our webcast archive page. Lastly, for more of the A-Team blogs, visit here. Thanks again Benjamin!

Oracle Data Integration Family Solutions & News

The Always-On Business Requires Oracle GoldenGate (OGG)

“Always-on, data-driven, and real-time have truly become table stakes for enterprises in today's global economy. The challenges for getting a consistent, up-to-date picture are compounded as enterprises embrace cloud deployments.” Tony Baer states it plainly. To succeed in today’s competitive environment, you need real-time information – it really is that simple. This requires a platform that can unite information from disparate systems across your enterprise without compromising availability and performance. Oracle GoldenGate provides real-time capture, transformation, routing, and delivery of database transactions across heterogeneous systems. Oracle GoldenGate facilitates high-performance, low-impact data movement with low latency to a wide variety of databases and platforms while maintaining transaction integrity – and that is the key.

With the help of Ovum’s Tony Baer, we have further explored how businesses value their data in this changing environment. Businesses need to continuously innovate, but execute at the same time. This whitepaper explores the realities of today, and those coming tomorrow. Real-time is crucial and an essential ingredient to driving key advances in everyday business. Read this newly released whitepaper for key insights into how:

· Real-time is driving service-level expectations, even for offline workloads
· With enterprises embracing hybrid cloud deployment, the need to keep data in sync, in real time, between cloud and on-premise systems becomes table stakes
· Real-time data integration is not simply a technology argument; it is essential to the ability to execute faster, which can lead to bottom-line cost reductions and top-line creation of new business and revenue opportunities
· Supporting real-time business requires a data-replication technology that is flexible, scalable, performant, and platform-independent, with little to no overhead

Tony interviewed a handful of Oracle GoldenGate customers and found valuable insights into the uniqueness and flexibility Oracle GoldenGate provides to today’s market. To learn more about Real-Time Data Integration & Replication with Oracle GoldenGate, visit us here. This short video, Chalk Talk: Boost Big Data Solutions with Oracle GoldenGate, might be of interest as well. We hope you find this research exciting! Click for the WHITEPAPER.

Oracle Data Integration Family Solutions & News

2016 Oracle Excellence Awards… and the Winners for the Data Integration & Governance Category are…

Every year at OpenWorld, Oracle announces the winners of its most prestigious awards, the Oracle Excellence Awards. This year, as we have in previous years, we celebrated in style! In an Oscars-like ceremony on Tuesday, September 20th, customers were highlighted for their innovative solutions. Congratulations to all winners! Let me introduce this year’s 2016 Data Integration & Governance category winners:

Cummins, Inc., a global power leader, is a corporation of complementary business units that design, manufacture, distribute, and service diesel and natural gas engines and related technologies, including fuel systems, controls, air handling, filtration, emission solutions, and electrical power generation systems. Cummins, Inc. had the goal of implementing a global, sustainable solution to manage and govern their data. This directly supported Cummins’ business objective of adopting common, global processes to achieve operating as one company. Historically, the various business units operated in silos, and prior to this effort data was managed for a business unit or manufacturing site’s specific needs. In leveraging Oracle Data Integration Solutions, they:
• Implemented a global, sustainable solution to manage & govern all data leveraging Enterprise Data Quality, with 7,500+ applications
• Optimized inventory and effectively managed supplier & customer risk, gaining efficiencies
• Provide superior service to customers & achieve supply chain excellence
• Dissolved & removed IT & business stakeholder tensions by having a clear framework & tool for decision making
• Saved $1.5M in annual run costs
• Improved employee productivity by > 10%

Helsana Gruppe is Switzerland's leading health insurance company with over 1.9 million policyholders. Helsana made a strategic decision to leverage Oracle Data Integrator and Oracle GoldenGate for their Data Integration Platform as well as for their data warehouse. They had been looking to integrate different systems within their IT infrastructure and find a central, standardized platform for data integration needs that encourages good application governance and reuse of integration paths while handling high volume with low latency. With the help of their partner IPT, they built and achieved:
• A strategic, central & standard data platform executing on Oracle Data Integrator & Oracle GoldenGate
• 35+ applications, 200+ integration routes, millions of records, on-premise & in the cloud
• Data delivery between major customer-facing applications is now 300X faster
• Customer information fully synchronized in real time: streamlining processes, mitigating risk & improving customer satisfaction
• 30% savings in development & maintenance efforts & costs

Talas represents: Idealists. Dreamers. Geeks. All on a mission to change how business is done – with a big ambition to transform the Philippines through technology. Talas empowers its customers with Big Data services, including real-time analytics and insights, by helping to assimilate ticketing data, call data, and financial data. By leveraging Oracle Data Integration Solutions and partner Solvento, they show:
• Reduced time in transforming, cleansing, enriching & preparing data with Oracle Data Integrator for Big Data & Big Data Preparation natively within Hadoop
• 21 days of development time reduced to 3.5 days
• Reduced man-hours in test/production by 50% due to increased manageability
• 300% improvement in employee productivity

This success has enabled these organizations to set a paradigm shift and forward movement for their businesses – and for more proactive, strategic, and operational excellence – all in an effort to better serve their customers.  Check out the press release that went out last week on the program. CONGRATULATIONS to our Data Integration & Governance category winners!

Oracle Data Integration Family Solutions & News

Why Self Service Data Wrangling Speeds Up Time to Value

Oracle’s Big Data Preparation Cloud Service (BDP) provides value in analytics and data management projects at any scale. It empowers business users to process complex business data of any source, size, and format – from small departmental data, to large enterprise data, to massive IoT and log data. The service is the only data wrangling tool using a unique combination of Machine Learning and Natural Language Processing leveraging a semantic knowledge graph in the Cloud. This means that it is more efficient in mapping relationships and making more accurate repair and enrichment recommendations. Are you curious?  Check out this short BDP Video to have BDP explained to you!

It is becoming more evident that data preparation is important in speeding time to value.  Due to growing data volumes and siloed data, businesses are finding that further and faster growth can be achieved via better data, and one preliminary step toward better data is preparing, enriching, and wrangling it.  With the help of Forrester, 160 IT decision makers from around the world were surveyed, which yielded great clues and information on the growing importance of streamlining data preparation to deliver cutting-edge business insights.  READ the Technology Adoption Profile:  Data Preparation Accelerates Self-Service.

Oracle's cloud-based technology with Oracle Big Data Preparation helps to bridge the IT-business gap, showing how self-service data wrangling, when done right, imparts great value, provides rich recommendations, and helps streamline and automate the data preparation pipeline.  Oracle Big Data Preparation Cloud Service provides an agile, intuitive interface that automates, streamlines, and guides the process of data ingestion, preparation, enrichment, and publishing of data, targeted at the data integration needs of the data steward and IT. To learn more about Oracle Big Data Preparation Cloud Service, visit us at our websites here and here.  We hope you find this research compelling!

Enterprise Data Quality Solutions and News

Oracle Data Integration at OOW16

Oracle OpenWorld is just around the corner. As always, we have an exciting lineup of events, sessions, and gatherings focusing on Data Integration to pick and choose from. Keep following this blog and social media (Facebook and Twitter) for regular and live updates on the happenings during OpenWorld.

Scheduling and Planning for the Event

With so much on offer, it always pays to plan your schedule ahead of time so that you do not miss the sessions that you are really interested in. This Focus On document compiles everything related to Data Integration Solutions. It is a “must have” bookmark that is very handy when planning and attending Oracle OpenWorld. Seasoned OpenWorld attendees swear by it!

Sessions to Watch Out For

While every session has its dedicated focus, attending any OpenWorld event is an exercise in priorities. Below are some crucial sessions – by no stretch an exhaustive list – in no particular order of priority.

a. Oracle Data Integration Solutions: Platform Overview and Roadmap [CON6619] – This session by Jeff Pollock, Vice President of Product Management at Oracle, covers the full range of solutions in Oracle's Data Integration platform. An overview and roadmap are given for each product, including how the Data Integration Platform tackles emerging customer requirements such as migration to the cloud, data warehousing in the cloud, and big data integration. Why attend: This session covers the overall vision and strategy of Oracle Data Integration across all DIS products. Attend this session to hear Jeff make the case for why Oracle is the best provider of Data Integration Solutions.

b. Oracle Data Integration for Cloud Data Migration and Warehousing [CON6620] – The rapid movement to cloud-based solutions brings with it a new set of data integration challenges as data moves from ground to cloud and cloud to cloud. Join this session led by Oracle Product Management to hear how the Oracle Data Integration suite of products is responding. Why attend: Cloud! If you are at all interested in how Oracle Data Integration is taking advantage of and incorporating the latest cloud trends, attend this session.

c. Oracle Big Data Integration in the Cloud [CON7472] – As more and more companies adopt the cloud platform, there is a pressing need to handle data seamlessly across various combinations of on-premises, cloud, and heterogeneous data management systems and changing business models. This session is an overview of Oracle Data Integration’s cloud services and solutions that help customers manage their data movement and data integration challenges and opportunities across the enterprise. Come to the session for details about how to get started quickly in the cloud with data integration for Hadoop, Spark, NoSQL, and Kafka. You will also see the latest in self-service tools that focus on easy and reliable data preparation for nontechnical users. Why attend: Combining Big Data and Cloud seamlessly is one of the top priorities for Oracle Data Integration. Attend this session to see how Oracle Data Integration contributes to and is helped by the rest of the Oracle solutions and investments in Big Data and Cloud.

Customer Appreciation Awards

OpenWorld is not just about technology updates and showcases. It is also an occasion to highlight and appreciate customers and their innovations in using technology to solve business problems – and a whole lot of fun. To promote best practices and celebrate the accomplishments of Oracle Cloud Platform innovations around the world, Oracle will announce the winners of the 2016 Oracle Cloud Platform Awards at this event. These awards honor organizations for their cutting-edge solutions using Oracle Cloud Platform. Meet these innovators and hear how they are using Oracle Cloud Platform to transform their businesses.

We look forward to seeing you at this year’s OpenWorld and hearing from you. For more information about Oracle Data Integration and its products, visit our home page here. Happy packing! Mark your calendars for: 

Enterprise Data Quality Solutions and News

Oracle Metadata Management 12.2.1.1 is Now Available

Oracle Metadata Management

Oracle Metadata Management is essential to solving a wide variety of critical business and technical challenges around data governance and transparency – for example, “How are those report figures calculated?” or “How does that change impact the data upstream?” Oracle Metadata Management is built to answer these pressing questions for customers in a lightweight browser-based interface.

What is new in Oracle Metadata Management’s newest release

Oracle Metadata Management 12.2.1.1 includes a number of key updates, including: extended support for new cloud, big data, and other technology with new harvesting bridges; enhanced search and findability functions to manage objects within OMM; and quick and easy naming and renaming capabilities. To read the entire list, click here. A few highlights include:

Metadata Explorer – Designed for the business user

This release provides a major redesign of the Metadata Explorer user interface, covering both the look and feel and the actual capabilities. The new METADATA SEARCH/BROWSE with FILTERING on any attributes and/or properties allows for a choice of result set displays: a classic Google-like "LIST", or a new, powerful "GRID" offering a spreadsheet-like display of multiple attributes/properties at once. Attributes/properties can be individually selected, reordered, and sorted, offering a full metadata reporting solution. The results can of course be downloaded as CSV/Excel files.

METADATA EDITING is now possible at various levels. Numerous new fast and easy "in place" editing operations to rename objects, edit descriptions, and more are available. The new Search/Browse GRID display also offers efficient editing with TABULAR EDITING of multiple objects at once, such as Business Glossary Terms, Data Model Tables, or Table Columns. Also, BULK CHANGE of multiple objects at once is feasible, where a search can return multiple objects (that can then be selectively sub-setted) for which changes can be performed at once (e.g., changing the Security Threat Level to Orange for a set of tables at once).

Metadata Documentation – Capture metadata through documentation

Metadata Documentation has been further developed. Previously, you could import a database model but could not enrich it with your own metadata. Now databases can be documented with extra metadata and any number of custom attributes.

Harvesting Bridges

Oracle Enterprise Metadata Management 12.2.1.1 provides a whole set of new harvesting bridges. A few examples include:
· Amazon Web Services (AWS) Redshift Database import
· Apache Cassandra import (including from DataStax Enterprise)
· Apache Hadoop HiveQL DDL import (including from Cloudera, Hortonworks, MapR, etc.)
· Microsoft SQL Server Integration Services (SSIS) (Repository Database) import bridge
· Microsoft SQL Server Integration Services (SSIS) 2014 import bridge

Enhancements to existing bridges are also available. For instance, there has been a major redesign of all DI/ETL/ELT bridges based upon a more universal DI meta-model which better supports the new generations of DI tools while reducing the volume of metadata. As a result, DI/ETL/ELT bridges now import much faster (some of them twice as fast), without losing any DI/ETL/ELT design or data flow lineage details.

Business Intelligence bridges have also seen improvements, specifically:
· The IBM Cognos Content Manager with PowerPlay Transformer import bridge has major improvements to better support lineage for cubes
· The MicroStrategy import bridge has major improvements for scalability and protection against broken/corrupted reports
· The Tableau import bridge has major improvements to better support massive numbers of database or file connections (popular in self-service BI)

Metadata Manager and Version Handling

This release of Oracle Enterprise Metadata Management sees major improvements in its Metadata Manager by way of performance and more efficient handling of versions. The MODEL VERSION / CHANGE MANAGEMENT feature prevents the creation of unnecessary new versions of models if the source metadata (e.g., a database or data model) has not changed since the last automatically scheduled metadata harvesting. This new feature is achieved by taking advantage of the harvesting bridges’ new ability to compare the metadata of a newly imported model with a previously imported one in order to detect any change. The major benefit of this new feature is to dramatically reduce the disk space used in the repository by automatically deleting unnecessary versions. CONFIGURATION / VERSION CHANGE MANAGEMENT capabilities now exist which offer a way to compare various versions of configurations.

Again, this is just a snippet of what’s new in this release of Oracle Metadata Management 12.2.1.1, but the added value is tremendous. We hope your journey with this solution is enlightening! For more information, visit the home page here.

GoldenGate Solutions and News

Oracle GoldenGate Adapter for Confluent Platform, powered by Apache Kafka

Oracle GoldenGate Adapter/Handler for Kafka Connect (open source) was released on 07/Jul/2016.

Summary

Kafka Connect is a tool for streaming data between Apache Kafka and other data systems in a scalable and reliable way. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency. An export connector can deliver data from Kafka topics into secondary indexes like Elasticsearch or into batch systems such as Hadoop for offline analysis. (Source: Confluent website)

This Kafka Connect handler lets you integrate using the Kafka Connect API, which can be managed using the Schema Registry on the Confluent Platform. The Kafka Connect Handler takes change data capture operations from a source trail file and generates data structs (org.apache.kafka.connect.data.Struct) as well as the associated schemas (org.apache.kafka.connect.data.Schema). The data structs are serialized via configured converters and then enqueued onto Kafka topics. The topic name used corresponds to the fully qualified source table name as obtained from the GoldenGate trail file. Individual operations consist of insert, update, and delete operations executed on the source RDBMS. Insert and update operation data include the after change data. Delete operations include the before change data. A primary key update is a special case of an update where one or more of the primary key columns change. The primary key update is special in that, without the before image data, it is not possible to determine which row is actually changing when only the after change data is available. The default behavior for a primary key update is to ABEND in the Kafka Connect formatter. However, the formatter can be configured to treat these operations as regular updates, or to treat them as a delete followed by an insert, which is the closest big data modeling of the substance of the transaction.

Difference between the official GoldenGate Kafka Handler and the Kafka Connect Handler

The Kafka Handler officially released in Oracle GoldenGate for Big Data 12.2.0.1.x differs slightly in functionality from the Kafka Connect Handler/Formatter included in this open-source component. The officially released Kafka Handler interfaces with pluggable formatters to output the data to Kafka in XML, JSON, Avro, or delimited text format. The Kafka Connect Handler/Formatter builds up Kafka Connect Schemas and Structs. It relies on the Kafka Connect framework to perform the serialization using the Kafka Connect converters before delivering the data to the topic.

Compatibility Matrix
Oracle GoldenGate for Big Data 12.2.0.1.1
Confluent.io Kafka/Kafka Connect 0.9.0.1-cp

Download Location
It is available for download at the Oracle GoldenGate Exchange. Other Confluent connectors are listed here.

Note: This Kafka Connect adapter source code is open source and free; however, you need to purchase an Oracle GoldenGate for Big Data license for the Oracle GoldenGate infrastructure to run this open-source code.
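To make the Struct and Schema wording above concrete, here is a small, self-contained Java sketch using the public Kafka Connect data API (org.apache.kafka.connect.data). The table and column names are invented for illustration; the handler builds equivalent schemas and structs from the trail file metadata, so this is a sketch of the data structures involved, not the handler's own code.

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

public class ConnectStructExample {
    public static void main(String[] args) {
        // Schema for a hypothetical source table SRC.CUSTOMER; the handler
        // derives the real schema from the GoldenGate trail metadata.
        Schema customerSchema = SchemaBuilder.struct()
                .name("SRC.CUSTOMER")
                .field("CUST_ID", Schema.INT64_SCHEMA)
                .field("CUST_NAME", Schema.OPTIONAL_STRING_SCHEMA)
                .field("OP_TYPE", Schema.STRING_SCHEMA)   // e.g. insert / update / delete
                .build();

        // Struct representing the after image of an insert operation.
        Struct insertRecord = new Struct(customerSchema)
                .put("CUST_ID", 1001L)
                .put("CUST_NAME", "Acme Corp")
                .put("OP_TYPE", "I");

        // A configured converter (Avro with Schema Registry, JSON, etc.) would
        // serialize this struct before it is written to the SRC.CUSTOMER topic.
        System.out.println(insertRecord);
    }
}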

Oracle Data Integration Family Solutions & News

LAST CALL: 2016 Excellence Awards - Nominate TODAY!

It is nomination time!!! This year's Oracle Cloud Platform Innovation – Excellence Awards will honor customers and partners who are creatively using Oracle products. Think you have something unique and innovative with Oracle Data Integration & Governance products? We'd love to hear from you! Please submit today in the Data Integration & Governance category. The deadline for the nomination is Wednesday, June 30, 2016. Win a free pass to Oracle OpenWorld 2016! Let’s reminisce a little… For details on the 2015 Big Data, Business Analytics, and Data Integration Winners: amazon.com, CaixaBank, Serta Simmons Bedding, Skanska, Scottish & Southern Energy, and Tampa International Airport, check out this blog post. For details on the 2014 Data Integration Winners: NET Serviços and Griffith University, check out this blog post. For details on the 2013 Data Integration Winners: Royal Bank of Scotland’s Market and International Banking and The Yalumba Wine Company, check out this blog post. For details on the 2012 Data Integration Winners: Raymond James Financial and Morrisons Supermarkets, check out this blog post. We hope to honor you! Click here to submit your nomination today – scroll to the bottom of the page and select the appropriate category. And just a reminder: the deadline to submit a nomination is June 30th, 2016.

ODI Solutions and News

Oracle Data Integrator (ODI) 12.2.1.1.0 Now Available

Oracle Data Integrator 12.2.1.1.0 is now available! Please visit the ODI OTN page for downloads, documentation, related collateral, and other useful links. Take a look here for the latest details of this release – and read on for some highlights:

Hyperion Essbase and Hyperion Planning Knowledge Modules
Hyperion Essbase and Hyperion Planning Knowledge Modules have been made available out of the box with Oracle Data Integrator and support the latest version (11.1.2.4) of these Hyperion applications.

Integrated Capture/Delivery support in GoldenGate Knowledge Modules
The GoldenGate Journalization Knowledge Modules (JKMs) for Oracle databases have been updated and now support Integrated Capture and Delivery. This updated functionality can improve performance and provides better scalability and load balancing.

Support for Cubes and Dimensions
Core ETL/E-LT enhancements have been made: ODI now provides support for two types of dimensional objects, Cubes and Dimensions. Users can create and use Cube and Dimension objects directly in Mappings to improve developer productivity with out-of-the-box patterns that automate the loading of dimensional objects. This also allows for improved Type 2 Slowly Changing Dimensions and brand new Type 3 Slowly Changing Dimensions support with ODI.

Big Data Configuration Wizard
A brand new Big Data Configuration wizard is now available in the ODI Studio Gallery and provides a single entry point to configure the Topology objects for Hadoop technologies such as Hadoop, Hive, Spark, Pig, Oozie, etc.

Let us know how you are using ODI – and try out this new version to keep up with what’s current!

GoldenGate Technical Features

Oracle GoldenGate for Big Data 12.2.0.1.1 update is available now!

Oracle GoldenGate for Big Data 12.2.0.1.1 update is available. I am pleased to announce the general availability of Oracle GoldenGate for Big Data 12.2.0.1.1. What’s new in Oracle GoldenGate for Big Data 12.2.0.1.1?

New Formats – Avro OCF and Sequence File
Oracle GoldenGate for Big Data can now write data in Avro Object Container Format (OCF) and HDFS Sequence File format.

Automatic Metadata Creation and Update – DDL changes to Hadoop
Oracle GoldenGate for Big Data can now automatically create target Hive tables and provide DDL updates to Hive/HCatalog. Oracle GoldenGate for Big Data can also automatically provide versioned schema files to HDFS directories.

Data Conversion – Hex Encoding and Character Replacements
You can now deliver your CDC data in hex encoding in addition to base64 encoding. Oracle GoldenGate for Big Data can find and replace using regular expressions. You can now include the primary key in JSON format. Oracle GoldenGate for Big Data can convert timestamps to ISO 8601 format.

Security Improvements – Oracle Wallet Integration and Kafka SSL
Oracle GoldenGate for Big Data can now integrate with Oracle Wallet. The Kafka integration now supports Kafka SSL, which is available from Kafka 0.9.x.

Kafka Multiple Topic Support
Oracle GoldenGate for Big Data can now automatically create target topics based on the source tables, if you would like to segregate the data so that each table maps to its own topic.

Newer certifications and many more!!!

Note: The 12.2.0.1.1 installer is available as an update installer for 12.2.0.1.0, so please look carefully for the update release file whose description lists the 12.2.0.1.1 release version.

More information on Oracle GoldenGate 12.2.0.1.1
Learn more about Oracle GoldenGate 12c
Download Oracle GoldenGate for Big Data 12.2.0.1.1
Documentation for Oracle GoldenGate for Big Data 12.2.0.1.1
Certification Matrix for Oracle GoldenGate for Big Data 12.2.0.1.1

Oracle Data Integration Family Solutions & News

Looking for Cutting-Edge Data Integration & Governance Solutions: Nominate NOW for the 2016 Excellence Awards

It is nomination time!!! This year's Oracle Cloud Platform Innovation – Excellence Awards will honor customers and partners who are creatively using Oracle products. Think you have something unique and innovative with Oracle Data Integration & Governance products? We'd love to hear from you! Please submit today in the Data Integration & Governance category. This year’s nomination process is a little bit different in the sense that the forms are fully online. The deadline for the nomination is Monday, June 20, 2016. Win a free pass to Oracle OpenWorld 2016! Let’s reminisce a little… For details on the 2015 Big Data, Business Analytics, and Data Integration Winners: amazon.com, CaixaBank, Serta Simmons Bedding, Skanska, Scottish & Southern Energy, and Tampa International Airport, check out this blog post. For details on the 2014 Data Integration Winners: NET Serviços and Griffith University, check out this blog post. For details on the 2013 Data Integration Winners: Royal Bank of Scotland’s Market and International Banking and The Yalumba Wine Company, check out this blog post. For details on the 2012 Data Integration Winners: Raymond James Financial and Morrisons Supermarkets, check out this blog post. We hope to honor you! Click here to submit your nomination today – scroll to the bottom of the page and select the appropriate category. And just a reminder: the deadline to submit a nomination is 5pm Pacific Time on June 20th, 2016.

GoldenGate Solutions and News

Oracle GoldenGate for DB2 for i 12.2.0.1.2 Release is Available

Oracle GoldenGate for DB2 for i 12.2.0.1.2 was released on 4/15/2016. You can now download the software from OTN and eDelivery. This is the first 12.2 release of Oracle GoldenGate for DB2 for i. The following new features are provided:

Column-Level Character Set Encoding: In DB2 for i, each table column can have a different encoding. Oracle GoldenGate for DB2 for i 12.2 keeps the column-level encoding unchanged during replication. Replication performance is improved by avoiding character set conversions.
Metadata In Trail: definition (defgen) files and the SOURCEDEFS/ASSUMETARGETDEF parameters are no longer needed.
One Billion Trail Files: the trail file sequence length has increased to nine digits, a 1000x increase to one billion trail files.
Heartbeat Table: users can enable this feature to record end-to-end replication lag and view replication statistics with a database view.
Capturing from Remote Journals: remote journal support allows a GoldenGate Extract running on a remote IBM i system to read journal data generated on the primary IBM i system. This eliminates most of the interaction of the Oracle GoldenGate Extract with the primary system.

The following features were added to GoldenGate for DB2 for i 12.1 and are also available in 12.2:
Support for multiple journals
Native name handling
Support for partitioned tables
Certified to run on IBM iSeries 7.2

The following platforms are certified in this release: DB2 for i 7.1, 7.2.

You can find more information in the Oracle GoldenGate 12.2 documentation.
