Thursday Jul 02, 2015

Chalk Talk Video: How to Raise Trust and Transparency in Big Data with Oracle Metadata Management

Some fun new videos are available; we call the series ‘Chalk Talk’!

The first in the series that we will share with you around Oracle Data Integration speaks to raising trust and transparency within big data. Crucial big data projects often fail due to a lack of trust in the data. Data is not always transparent, and governing it can become a costly overhead. Oracle Metadata Management helps establish governance and trust across all data in the enterprise, both Oracle and third party.

View this video to learn more: Chalk Talk: How to Raise Trust and Transparency in Big Data.

For additional information on Oracle Metadata Management, visit the OEMM homepage.

Friday Jun 19, 2015

Oracle GoldenGate Certified on Teradata Unity

Oracle GoldenGate 12.1.2.1.1 is now certified with Unity 14.11. With this certification, customers can use Oracle GoldenGate to deliver data to Teradata Unity, which can then automate the distribution of data to multiple Teradata databases. This joint effort between Oracle GoldenGate and Teradata extends the Oracle GoldenGate and Teradata ecosystem for real-time data integration.

Tuesday Jun 09, 2015

Oracle Data Integrator Journalizing Knowledge Module for GoldenGate Integrated Replicat Blog from the A-Team

As always, useful content from the A-Team…

Check out the most recent blog about how to modify the out-of-the-box Journalizing Knowledge Module for GoldenGate to support the Integrated Replicat apply mode.

An Oracle Data Integrator Journalizing Knowledge Module for GoldenGate Integrated Replicat

Enjoy!

Wednesday May 13, 2015

Looking for Cutting-Edge Data Integration: 2015 Excellence Awards

It is nomination time!!!

This year's Oracle Fusion Middleware Excellence Awards will honor customers and partners who are creatively using various products across Oracle Fusion Middleware. Think you have something unique and innovative with Oracle Data Integration products?

We'd love to hear from you! Please submit today in the Big Data and Analytics category.

The deadline for the nomination is July 31, 2015. Win a free pass to Oracle OpenWorld 2015!!

Let’s reminisce a little…

For details on the 2014 Data Integration Winners: NET Serviços and Griffith University, check out this blog post.

For details on the 2013 Data Integration Winners: Royal Bank of Scotland’s Market and International Banking and The Yalumba Wine Company, check out this blog post.

For details on the 2012 Data Integration Winners: Raymond James Financial and Morrisons Supermarkets, check out this blog post.

We hope to honor you!

Click here to submit your nomination today. And just a reminder: the deadline to submit a nomination is 5pm Pacific Time on July 31, 2015.

Monday May 11, 2015

Oracle Big Data Preparation Cloud Service (BDP) – Coming Soon

What are your plans around Big Data and Cloud?

If your organization has already begun to explore these topics, you might be interested in a new offering from Oracle that will dramatically simplify how you use your data in Hadoop and the Cloud:

Oracle Big Data Preparation Cloud Service (BDP)

There is a perception that most of the time spent in Big Data projects is dedicated to harvesting value. The reality is that 90% of the time in Big Data projects is really spent on data preparation. Data may be structured, but more often it will be semi-structured, such as weblogs, or fully unstructured, such as free-form text. The content is vast, inconsistent, incomplete, often off topic, and drawn from multiple differing formats and sources. In this environment, each new dataset takes weeks or months of effort to process, frequently requiring programmers to write custom scripts. Minimizing data preparation time is the key to unlocking the potential of Big Data.

Oracle Big Data Preparation Cloud Service (BDP) addresses this very reality. BDP is a non-technical, web-based tool that sets out to minimize data preparation time in an effort to quickly unlock the potential of your data. The BDP tool provides an interactive set of services that automate, streamline, and guide the process of data ingestion, preparation, enrichment, and governance without costly manual intervention.

The technology behind this service is amazing; it intuitively guides the user with a machine learning driven recommendation engine based on semantic data classification and natural language processing algorithms. But the best part is that non-technical staff can use this tool as easily as they use Excel, resulting in a significant cost advantage for data intensive projects by reducing the amount of time and resources required to ingest and prepare new datasets for downstream IT processes.

Curious to find out more? We invite you to view a short demonstration of BDP below:

Let us know what you think!

Stay tuned as we write more about this offering… visit often here!

Thursday Apr 30, 2015

How to Set Up Oracle GoldenGate When Performing the DB2 11.1 Upgrade

After the announcement of DB2 11.1 support in Oracle GoldenGate for DB2 z/OS, we received a lot of questions on how to set up Oracle GoldenGate when performing the DB2 11.1 upgrade. This blog provides some instructions and explanations.

DB2 11.1 increases log record sequence numbers from 6 bytes to 10 bytes, and the log reading API changed significantly to support the new log record format. Oracle GoldenGate supports DB2 11.1 with a version-specific build. In other words, starting with Oracle GoldenGate 12.1.2.1.4, two downloadable builds are provided to support DB2 z/OS:

  • GoldenGate for DB2 10 and earlier versions
  • GoldenGate for DB2 11
If you are upgrading to DB2 11.1 in a data sharing configuration and will be upgrading the subsystems in the group gradually (i.e., you will have a mixed DB2 11.1 and DB2 10.1/9.1 group for some period of time), we recommend that you first upgrade your existing GoldenGate installation to the version you plan to use once you have upgraded to DB2 11.1. At the time of writing, the earliest version of GoldenGate that supports DB2 11.1 is 12.1.2.1.4.

The diagram below depicts the GoldenGate and data sharing configuration prior to upgrading the first subsystem to DB2 11.1.
[Diagram: GoldenGate and data sharing configuration prior to upgrading the first subsystem to DB2 11.1]
Please make sure you are not using the data sharing group name (e.g., ADDO) in the Extract connection parameter. For example, if the data sharing group name is ADDO and the subsystem SSIDs of the group are ADD1, ADD2, and so on, use an SSID instead. When you use the data sharing group name, GoldenGate will connect to any of the subsystems to access log files from all of the subsystems in the data sharing group. During the upgrade process, however, we need to make sure the GoldenGate Extract is connected to a specific subsystem of the group that will not be upgraded to DB2 11.1 initially. For example:

SOURCEDB ADD1 userid uuuuuu, password ppppppp


To quickly switch a GoldenGate Extract connection to another subsystem in the data sharing group, it is common practice to keep the connection parameter in an include file. For example, the “extract-conn.inc” file referenced by the “INCLUDE” parameter below would contain the connection parameter above:

INCLUDE extract-conn.inc
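
For illustration, here is a minimal sketch of how the pieces might fit together; the Extract group name, trail path, and table specification are hypothetical:

-- ext1.prm (illustrative Extract parameter file)
EXTRACT EXT1
-- the connection parameter lives in a separate include file
INCLUDE extract-conn.inc
EXTTRAIL ./dirdat/aa
TABLE SCHEMA1.*;

-- extract-conn.inc (points at a specific subsystem, not the group name)
SOURCEDB ADD1 userid uuuuuu, password ppppppp

With this layout, redirecting the Extract to another member only requires changing the SOURCEDB line in extract-conn.inc.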

In this example, you can keep the extract connected to ADD1 while upgrading the other members of the data sharing group to DB2 11.1. Data from all members in the data sharing group will be captured by GoldenGate. 
[Diagram: Extract connected to subsystem ADD1 while the other members of the data sharing group are upgraded to DB2 11.1]
As soon as you upgrade one member of the data sharing group to DB2 11.1, you can choose to use the new GoldenGate for DB2 z/OS 11 build, connect the Extract to that subsystem, and capture log records from all the other subsystems in the data sharing group, as illustrated below:
[Diagram: Extract connected to the upgraded DB2 11.1 subsystem, capturing from all members of the data sharing group]
The DB2 Instrumentation Facility Interface (IFI) allows a GoldenGate Extract to access log files for all DB2 subsystems that are part of the DB2 data sharing group, no matter which LPAR these subsystems are running in. GoldenGate can capture from all members of a data sharing group even if they run different DB2 subsystem versions. To clarify this further:

  • GoldenGate can connect to a DB2 11.1 subsystem and successfully capture log records from DB2 10.1 subsystem(s) that are also a part of the DB2 data sharing group.
  • In like manner, GoldenGate can also connect to a DB2 10.1 subsystem and successfully capture log records from DB2 11.1 subsystem(s) that are a part of the DB2 data sharing group.

Please refer to KM 1060540.1 if you need more information about Oracle GoldenGate support for DB2 z/OS data sharing groups.

If you have further questions or suggestions, please feel free to reach me at @jinyu512.

(Thanks to my colleagues Mark Geisler, Richard Johnson, and Greg Wood for reviewing this post.)

Friday Apr 10, 2015

This Week's A-Team Blog Speaks to Automating Changes after Upgrading ODI or Migrating from Oracle Warehouse Builder

The A-Team not only provides great content, they are humorous too!

Check out this week’s post, the title says it all: Getting Groovy with Oracle Data Integrator: Automating Changes after Upgrading ODI or Migrating from Oracle Warehouse Builder

The article covers various scripts, written in Groovy and leveraging the ODI SDK, that assist in automating massive changes to one’s repository. These scripts initially came about from a customer’s desire to enhance their environment while moving from Oracle Warehouse Builder (OWB) to Oracle Data Integrator (ODI), but in the end came the realization that they could be used by any ODI user.

Happy reading!

Wednesday Apr 08, 2015

Oracle GoldenGate for DB2 z/OS Supports DB2 11

With the Oracle GoldenGate 12.1.2.1.4 release, Oracle GoldenGate for DB2 z/OS provides support for DB2 11. This release also includes the fix to make Oracle GoldenGate z/OS Extract compatible with IBM APAR PI12599 for DB2 z/OS. [Read More]

Monday Apr 06, 2015

Announcing Oracle Data Integrator for Big Data

Proudly announcing the availability of Oracle Data Integrator for Big Data. This release is the latest in the series of advanced Big Data updates and features that Oracle Data Integration is rolling out for customers to help take their Hadoop projects to the next level.

Increasing Big Data Heterogeneity and Transparency

This release sees significant additions in heterogeneity and governance for customers. Some significant highlights of this release include:

  • Support for Apache Spark,
  • Support for Apache Pig, and
  • Orchestration using Oozie.

Click here for a detailed list of what is new in Oracle Data Integrator (ODI).

Oracle Data Integrator for Big Data helps transform and enrich data within the big data reservoir/data lake without users having to learn the languages necessary to manipulate it. ODI for Big Data generates native code that is then run on the underlying Hadoop platform, without requiring any additional agents. ODI separates the design interface used to build logic from the physical implementation layer that runs the code. This allows ODI users to build business and data mappings without having to learn HiveQL, Pig Latin, or MapReduce.

Oracle Data Integrator for Big Data Webcast

We invite you to join us on the 30th of April for our webcast to learn more about Oracle Data Integrator for Big Data and to get your questions answered about Big Data Integration. We will discuss how the newly announced Oracle Data Integrator for Big Data:

  • Provides advanced scale and expanded heterogeneity for big data projects,
  • Uniquely complements Hadoop’s strengths to accelerate decision making, and
  • Ensures sub-second latency with Oracle GoldenGate for Big Data.


Thursday Mar 26, 2015

Oracle Big Data Lite 4.1.0 is available with more on Oracle GoldenGate and Oracle Data Integrator

Oracle's big data team has announced the newest Oracle Big Data Lite Virtual Machine, 4.1.0. This newest Big Data Lite Virtual Machine contains great improvements from a data integration perspective with the inclusion of the recently released Oracle GoldenGate for Big Data. You will see this in an improved demonstration that highlights inserts, updates, and deletes into Hive using Oracle GoldenGate for Big Data, with Oracle Data Integrator performing a merge of the new operations into a consolidated table.

Big Data Lite is a pre-built environment that includes many of the key capabilities of Oracle's big data platform. The components have been configured to work together in this Virtual Machine, providing a simple way to get started in a big data environment. The components include Oracle Database, Cloudera Distribution including Apache Hadoop, Oracle Data Integrator, and Oracle GoldenGate, among others.

Big Data Lite also contains hands-on labs and demonstrations to help you get started using the system.  Tame Big Data with Oracle Data Integration is a hands-on lab that teaches you how to design Hadoop data integration using Oracle Data Integrator and Oracle GoldenGate. 

Start here to learn more! Enjoy!

Thursday Jan 29, 2015

Oracle GoldenGate 12c for Oracle Database - Integrated Capture sharing capture session

Oracle GoldenGate for Oracle Database introduced several features in Release 12.1.2.1.0. In this blog post I would like to explain one of the more interesting ones: “Integrated Capture (a.k.a. Integrated Extract) sharing a capture session.” This feature makes the creation of additional Integrated Extracts faster by leveraging existing LogMiner dictionaries. Since Integrated Extract requires registering the Extract, let’s first see what “registering the Extract” means:

REGISTER EXTRACT EAMER DATABASE

The above command registers the Extract process with the database for what is called “Integrated Capture mode.” In this mode, the Extract interacts directly with the database log mining server to receive data changes in the form of logical change records (LCRs).
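
As a quick illustration, the registration is issued from GGSCI after logging in to the database; the credentials in this sketch are hypothetical:

GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggpassword
GGSCI> REGISTER EXTRACT EAMER DATABASE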

When you created an Integrated Extract prior to release Oracle GoldenGate 12.1.2.1.0, you might have seen a delay in registering the Extract with the database. This is mainly because the creation of an Integrated Extract involves dumping the dictionary and then processing that dictionary to populate the LogMiner tables for each session, which causes overhead on online systems and hence requires extra time to start up. The same process is followed when you create additional Integrated Extracts.

What if you could use the existing LogMiner dictionaries to create additional Integrated Extracts? This is exactly what has been done in this release. Creating an additional Integrated Extract can be made significantly faster by leveraging existing LogMiner dictionaries that have already been mined, so no separate copy of the LogMiner dictionary has to be dumped for each Integrated Extract. As a result, creating additional Integrated Extracts is much faster, and the significant database overhead caused by dumping and processing the dictionary is avoided.

In order to use this feature, you need Oracle Database version 12.1.0.2 or higher and Oracle GoldenGate for Oracle version 12.1.2.1.0 or higher. The feature is currently supported for non-CDB databases only.

Command Syntax:

REGISTER EXTRACT group_name DATABASE
...
{SHARE [AUTOMATIC | extract | NONE]}

The SHARE clause has three options to select from; NONE is the default if you don’t specify anything.

The AUTOMATIC option clones/shares the LogMiner dictionary from the closest existing capture session. If no suitable clone candidate is found, a new LogMiner dictionary is created.

The extract option clones/shares from the capture session associated with the specified Extract. If this is not possible, an error occurs and the registration does not complete.

The NONE option does not clone from an existing capture session; a new LogMiner dictionary is created instead. This is the default.

When using this feature, the SHARE option can be combined with an SCN; the specified SCN must be greater than or equal to the first SCN of at least one existing capture session, and it must be less than the current SCN.
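
To make the syntax concrete, here is a sketch of a few registrations as they might be issued from GGSCI; the credentials are hypothetical, and the group names match the scenarios discussed below:

GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggpassword
GGSCI> REGISTER EXTRACT EXT4 DATABASE SHARE AUTOMATIC
GGSCI> REGISTER EXTRACT EXT5 DATABASE SHARE EXT1
GGSCI> REGISTER EXTRACT EXT7 DATABASE SCN 61000 SHARE AUTOMATIC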

Let’s look at a few behaviors prior to the 12.1.2.1.0 release and with the SHARE options. “Current SCN” indicates the current SCN value at the time the REGISTER EXTRACT command was executed in the following example scenarios.

Capture Name   LogMiner ID   First SCN   Start SCN   LogMiner Dictionary ID (LM-DID)
EXT1           1             60000       60000       1
EXT2           2             65000       65000       2
EXT3           3             60000       60000       3
EXT4           4             65000       66000       2
EXT5           5             60000       68000       1
EXT6           6             70000       70000       4
EXT7           7             60000       61000       1
EXT8           8             65000       68000       2

Behavior Prior to 12.1.2.1.0 – No Sharing

Register extract EXT1 with database (current SCN: 60000)

Register extract EXT2 with database (current SCN: 65000)

Register extract EXT3 with database SCN 60000 (current SCN: 65555)

Register extract EXT4 with database SCN 61000 → Error!

Registration of Integrated Extracts EXT1, EXT2, and EXT3 succeeds, whereas EXT4 fails because the LogMiner server does not exist at SCN 61000.

Also note that each of the Integrated Extracts EXT1 through EXT3 created its own dictionary; their LogMiner Dictionary IDs (from now on, LM-DIDs) are all different.

New behavior with different SHARE options

  • Register extract EXT4 with database SHARE AUTOMATIC (current SCN: 66000)

EXT4 automatically chose the capture session of EXT2, whose Start SCN of 65000 is the closest to the current SCN of 66000. Hence, the EXT4 and EXT2 capture sessions share the same LM-DID, 2.

  • Register extract EXT5 with database SHARE EXT1 (current SCN: 68000)

EXT5 shares the capture session of EXT1. Since EXT1 is up and running, no error occurs; LM-DID 1 is shared across the EXT5 and EXT1 capture sessions.

  • Register extract EXT6 with database SHARE NONE (current SCN: 70000)

EXT6 is registered with the SHARE NONE option; hence a new LogMiner dictionary is created (dumped). See the LM-DID column for EXT6 in the table above: it contains the value 4.

  • Register extract EXT7 with database SCN 61000 SHARE NONE (current SCN: 70000)

This generates an error similar to EXT4 @ SCN 61000: the LogMiner server does not exist at SCN 61000, and since the SHARE option is NONE, the existing LogMiner dictionaries are not shared either. This is the same behavior as prior to the 12.1.2.1.0 release.

  • Register extract EXT7 with database SCN 61000 SHARE AUTOMATIC (current SCN: 72000)

EXT7 shares the capture session of EXT1, which is the closest one for SCN 61000. Notice that the EXT7 @ SCN 61000 scenario now passes with the SHARE AUTOMATIC option, which was not the case earlier (EXT4 @ SCN 61000).

  • Register extract EXT8 with database SCN 68000 SHARE EXT2 (current SCN: 76000)

The EXT8 Extract shares the EXT2 capture session; hence the LogMiner dictionary is shared between EXT8 and EXT2.

This feature not only provides faster startup for additional Integrated Extracts, but also enables a few scenarios that weren’t possible earlier. If you are using this feature and have questions or comments, please let me know by leaving a comment below. I’ll reply as soon as possible.

Monday Dec 29, 2014

Oracle Data Enrichment Cloud Service (ODECS) - Coming Soon

What are your plans around Big Data and Cloud?

If your organization has already begun to explore these topics, you might be interested in a new offering from Oracle that will dramatically simplify how you use your data in Hadoop and the Cloud:

Oracle Data Enrichment Cloud Service (ODECS)

There is a perception that most of the time spent in Big Data projects is dedicated to harvesting value. The reality is that 90% of the time in Big Data projects is really spent on data preparation. Data may be structured, but more often it will be semi-structured, such as weblogs, or fully unstructured, such as free-form text. The content is vast, inconsistent, incomplete, often off topic, and drawn from multiple differing formats and sources. In this environment, each new dataset takes weeks or months of effort to process, frequently requiring programmers to write custom scripts. Minimizing data preparation time is the key to unlocking the potential of Big Data.

Oracle Data Enrichment Cloud Service (ODECS) addresses this very reality. ODECS is a non-technical, web-based tool that sets out to minimize data preparation time in an effort to quickly unlock the potential of your data. The ODECS tool provides an interactive set of services that automate, streamline, and guide the process of data ingestion, preparation, enrichment, and governance without costly manual intervention.

The technology behind this service is amazing; it intuitively guides the user with a machine learning driven recommendation engine based on semantic data classification and natural language processing algorithms. But the best part is that non-technical staff can use this tool as easily as they use Excel, resulting in a significant cost advantage for data intensive projects by reducing the amount of time and resources required to ingest and prepare new datasets for downstream IT processes.

Curious to find out more? We invite you to view a short demonstration of ODECS below:


Let us know what you think!

Stay tuned as we write more about this offering…

Wednesday Dec 17, 2014

Oracle Partition Exchange Blog from the ODI A-Team

More great information from the ODI A-Team!

Check out the A-Team’s most recent blog about the Oracle Partition Exchange – it does come in two parts:

Using Oracle Partition Exchange with ODI

Configuring ODI with Oracle Partition Exchange

The knowledge module is on Java.Net, and it is called “IKM Oracle Partition Exchange Load”.  To search for it, enter “PEL” or “exchange” in the Search option of Java.Net.

A sample ODI 12.1.3 repository is available as well. The sample repository has great examples of how to perform both initial and incremental data uploads with Oracle Partition Exchange, and it will help users understand how to use Oracle Partition Exchange with ODI.

Happy reading!

Wednesday Dec 10, 2014

Oracle Enterprise Metadata Management 12.1.3.0.1 is now available!

As a quick refresher, metadata management is essential to solving a wide variety of critical business and technical challenges: how report figures are calculated, understanding the impact of changes to data upstream, providing reports in a business-friendly way in the browser, and providing reporting capabilities on the entire metadata of an enterprise for analysis and improvement. Oracle Enterprise Metadata Management is built to solve all these pressing needs for customers in a lightweight, browser-based interface. Today, we announce the availability of Oracle Enterprise Metadata Management 12.1.3.0.1 as we continue to enhance this offering.

With Oracle Enterprise Metadata Management 12.1.3.0.1, you will find business glossary updates, user interface updates for a better experience, as well as improved and new metadata harvesting bridges, including Oracle SQL Developer Data Modeler, Microsoft SQL Server Integration Services, SAP Sybase PowerDesigner, Tableau, and more. There are also new dedicated web pages for tracing data lineage and impact! At a more granular level, you will also find new customizable action menus per repository object type for more personalization. For a full read on new features, please read here. Additionally, view here for the certification matrix details.

Download Oracle Enterprise Metadata Management 12.1.3.0.1!

Thursday Nov 20, 2014

Let Oracle GoldenGate 12c Take You to the Cloud

If your organization is in the ~80% of the global business community, you are most likely working on a cloud computing strategy for your organization, or actively implementing one. The cloud computing growth rate is 5X the overall IT growth rate because of the clear and already proven cost savings, agility, and scalability benefits of cloud architectures.

When organizations decide to embark on their cloud journey, they notice there are several questions and challenges to be addressed, involving data accessibility, security, availability, system management, performance, etc. Oracle GoldenGate's real-time data integration and bidirectional transactional replication technology addresses critical challenges such as:

  • How do I move my systems to the cloud without interrupting operations?
  • How do I enable timely data synchronization between systems in the cloud and on premises, to ensure access to consistent data for all end users?
  • How do I run operational reports with the data I have in cloud environments, or feed my analytical systems in cloud solutions?
  • In managed or private clouds, how do I keep the cloud platform highly available when I need to perform maintenance or upgrades?

On Tuesday, December 2nd we will tackle these questions in a free webcast:

Live Webcast: Oracle GoldenGate 12c for the Enterprise and the Cloud

Tuesday, December 2nd, 2014, 10am PT / 1pm ET

In this webcast, you will not only hear about Oracle GoldenGate's strong solutions for cloud environments, but also the latest features that strengthen its offering. The new features we will discuss include:

  • Support for Informix, SQL Server 2014, MySQL Community Edition, and big data environments
  • Real-time data integration between on-premises and cloud environments with SOCKS5 compliance
  • New data repair functionality to help ensure database consistency across heterogeneous systems
  • Moving from Oracle Streams to GoldenGate with the new migration utility

I would like to invite you to join me and my colleague Chai Pydimukkala, Senior Director of Product Management for Oracle GoldenGate, in this session to learn the latest on GoldenGate 12c and to ask your questions in a live Q&A.

Hope to see you there!

About

Learn the latest trends, use cases, product updates, and customer success examples for Oracle's data integration products, including Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality.
