## Thursday Jan 29, 2015

### Oracle GoldenGate 12c for Oracle Database - Integrated Capture sharing capture session

Oracle GoldenGate for Oracle Database introduced several features in Release 12.1.2.1.0. In this blog post I would like to explain one of the more interesting ones: “Integrated Capture (a.k.a. Integrated Extract) sharing capture session”. This feature speeds up the creation of additional Integrated Extracts by leveraging existing LogMiner dictionaries. Since Integrated Extract requires registering the Extract, let’s first see what ‘Registering the Extract’ means.

REGISTER EXTRACT EAMER DATABASE

The above command registers the Extract process with the database in what is called “Integrated Capture Mode”. In this mode the Extract interacts directly with the database LogMining server to receive data changes in the form of logical change records (LCRs).

When you created an Integrated Extract prior to Oracle GoldenGate 12.1.2.1.0, you might have seen a delay in registering the Extract with the database. This is mainly because creating an Integrated Extract involves dumping the dictionary and then processing that dictionary to populate the LogMiner tables for each session, which adds overhead to online systems and hence extra startup time. The same process was repeated every time you created an additional Integrated Extract.

What if you could use the existing LogMiner dictionaries when creating additional Integrated Extracts? That is exactly what this release does. Creating an additional Integrated Extract can be made significantly faster by leveraging LogMiner dictionaries which have already been mined, so a separate copy of the LogMiner dictionary no longer has to be dumped for each Integrated Extract. As a result, additional Integrated Extracts start up much faster, and the significant database overhead caused by dumping and processing the dictionary is avoided.

To use the feature, you need Oracle Database 12.1.0.2 or higher and Oracle GoldenGate for Oracle 12.1.2.1.0 or higher. The feature is currently supported for non-CDB databases only.

Command Syntax:

REGISTER EXTRACT group_name DATABASE

..

{SHARE [AUTOMATIC | extract | NONE]}

It offers three options; NONE is the default if you don’t specify anything.

The AUTOMATIC option clones/shares the LogMiner dictionary from the closest existing capture. If no suitable clone candidate is found, a new LogMiner dictionary is created.

The extract option clones/shares from the capture session associated with the specified Extract. If this is not possible, an error occurs and the registration does not complete.

The NONE option does not clone or create a new LogMiner dictionary; this is the default.

When you use the feature, the SHARE option should be followed by an SCN; the specified SCN must be greater than or equal to the first SCN of at least one existing capture, and it must be less than the current SCN.

Let’s look at a few behaviors prior to the 12.1.2.1.0 release and with the SHARE options. ‘Current SCN’ indicates the current SCN value at the moment the REGISTER EXTRACT command was executed in the following example scenario.

| Capture Name | LogMiner ID | First SCN | Start SCN | LogMiner Dictionary ID (LM-DID) |
| --- | --- | --- | --- | --- |
| EXT1 | 1 | 60000 | 60000 | 1 |
| EXT2 | 2 | 65000 | 65000 | 2 |
| EXT3 | 3 | 60000 | 60000 | 3 |
| EXT4 | 4 | 65000 | 66000 | 2 |
| EXT5 | 5 | 60000 | 68000 | 1 |
| EXT6 | 6 | 70000 | 70000 | 4 |
| EXT7 | 7 | 60000 | 61000 | 1 |
| EXT8 | 8 | 65000 | 68000 | 2 |
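The First SCN and Start SCN values in the table above can be inspected directly in the database. A minimal sketch, assuming the standard DBA_CAPTURE dictionary view (the exact column list may vary slightly by database version):

```sql
-- List existing capture sessions with their SCN positions
SELECT capture_name,
       logminer_id,
       first_scn,
       start_scn
FROM   dba_capture
ORDER  BY capture_name;
```

This is handy for deciding which existing capture session is a good SHARE candidate before registering a new Extract.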

Behavior Prior to 12.1.2.1.0 – No Sharing

Register extract EXT1 with database (current SCN: 60000)

Register extract EXT2 with database (current SCN: 65000)

Register extract EXT3 with database SCN 60000 (current SCN: 65555)

Register extract EXT4 with database SCN 61000 → Error!

Registration of Integrated Extracts EXT1, EXT2 and EXT3 succeeded, whereas EXT4 fails because the LogMiner server does not exist at SCN 61000.

Also note that each Integrated Extract (EXT1 – EXT3) created its dictionary separately (their LogMiner Dictionary IDs differ; from now on I’ll call them LM-DIDs).

New behavior with different SHARE options

• Register extract EXT4 with database SHARE AUTOMATIC (current SCN: 66000)

EXT4 automatically chose the capture session EXT2, as its Start SCN 65000 is nearest to the current SCN 66000. Hence the EXT4 & EXT2 capture sessions share the same LM-DID 2.

• Register extract EXT5 with database SHARE EXT1 (current SCN: 68000)

EXT5 shares the capture session of EXT1. Since EXT1 is up and running, no error is raised. LM-DID 1 is shared across the EXT5 and EXT1 capture sessions.

• Register extract EXT6 with database SHARE NONE (current SCN: 70000)

EXT6 is registered with the SHARE NONE option; hence a new LogMiner dictionary is created (dumped). See the LM-DID column for EXT6 in the table above: it contains LM-DID value 4.

• Register extract EXT7 with database SCN 61000 SHARE NONE (current SCN: 70000)

This generates an error similar to EXT4 @SCN 61000. The LogMiner server doesn’t exist at SCN 61000, and since the SHARE option is NONE, the existing LogMiner dictionaries won’t be shared either. This is the same behavior as prior to the 12.1.2.1.0 release.

• Register extract EXT7 with database SCN 61000 SHARE AUTOMATIC (current SCN: 72000)

EXT7 shares the capture session of EXT1, as it is the closest for SCN 61000. Notice that the EXT7 @SCN 61000 scenario succeeds with the SHARE AUTOMATIC option, which was not the case earlier (EXT4 @61000).

• Register extract EXT8 with database SCN 68000 SHARE EXT2 (current SCN: 76000)

EXT8 shares the EXT2 capture session. Hence the LogMiner dictionary is shared between EXT8 & EXT2.

This feature not only provides faster startup for additional Integrated Extracts, but also enables a few scenarios that weren’t possible earlier. If you are using this feature and have questions or comments, please let me know by leaving a comment below. I’ll reply as soon as possible.

## Friday Jan 09, 2015

### ODI 12c - Mapping SDK Overview

In this post I'll show some of the high level concepts in the physical design and the SDKs that go with it. To do this I'll cover some of the logical design area so that it all makes sense. The conceptual model for logical mapping in ODI 12c is shown below (it's quite a change from 11g); the model allows us to build arbitrary flows. Each entity below can be found in the 12c SDK. Many of these have specialized classes - for example MapComponent has specializations for the many mapping components available from the designer - and these classes may have specific business logic or specialized behavior. You can use the strongly typed, highly specialized classes like DatastoreComponent, or you can write applications in a generic manner using the conceptual high level SDK - this is the technique I used in the mapping builder here.

The heart of the SDK for this area of the model can be found here;

If you need to see these types in action, take the mapping illustration below as an example; I have annotated the different items within the mapper. The connector points are shown in the property inspector; they are not shown in the graphical design. Some components have many input or output connector points (for example the set component has many input connector points). Some components are simple expression based components (such as join and filter) - we call these selector components; other components project a specific shape - we call those projectors. That's just how we classify them.

In 12c we clearly separated the physical design from the logical; in 11g much of this was blended together. In separating them we also allow many physical designs for one logical mapping design. We also had to change the physical SDK and model so that we could support multiple targets and arbitrary flows. 11g was fairly rigid - if you look at the 'limitations' sections of the KMs you can see some of that. KMs are assigned on map physical nodes in the physical design, and there are some helper methods on the execution unit so you can set/get KMs.

The heart of the SDK for this logical mapping area of the model can be found here;

If we use the logical mapping shown earlier and look at the physical design we have for it, we can annotate the items below so you can envisage how each of the classes above is used in the design;

The MapPhysicalDesign class has all of the physical related information such as the ODI Optimization Context and Default Staging Location (there also exists a Staging Location Hint on the logical mapping design) - these are items that existed in ODI 11g and are carried forward.

To take an example, if I want to change the LKMs or IKMs set on all physical designs, one approach would be to iterate through all of the nodes in a physical design and check whether an LKM or an IKM is assigned for that node - this then lets you do all sorts of things, from getting the current setting to setting a new value. The snippet below gives a small illustration in Groovy of the methods from the ODI SDK;

```groovy
PhysicalDesignList = map.getPhysicalDesigns()
for (pd in PhysicalDesignList) {
    PhysicalNodesList = pd.getPhysicalNodes()
    for (pn in PhysicalNodesList) {
        if (pn.isLKMNode()) {
            CurrentLKMName = pn.getLKMName()
            // ...
            pn.setLKM(myLKM)
        } else if (pn.isIKMNode()) {
            CurrentIKMName = pn.getIKMName()
            // ...
            pn.setIKM(myIKM)
        }
    }
}
```

There are many other methods within the SDK to do all sorts of useful stuff. The first example is the getAllAPNodes method on a MapPhysicalDesign; this gives all of the nodes in a design which will have LKMs assigned - so you can quickly set or check them. The second example is the getTargetNodes method on MapPhysicalDesign - this is handy for getting all target nodes to set IKMs on. The final example is finding an AP node in the physical design for a logical component in your design - use the findNode method to achieve this.
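As a rough sketch, the helper methods above can be combined like this (treat this as illustrative; exact signatures may vary by SDK version, and the println property accesses are assumptions for the demo):

```groovy
for (pd in map.getPhysicalDesigns()) {
    // AP (access point) nodes - the ones that can carry an LKM
    for (apNode in pd.getAllAPNodes()) {
        println "AP node: ${apNode.getName()}, LKM: ${apNode.getLKMName()}"
    }
    // Target nodes - the ones that carry an IKM
    for (tgtNode in pd.getTargetNodes()) {
        println "Target node: ${tgtNode.getName()}, IKM: ${tgtNode.getIKMName()}"
    }
}
```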

Hopefully there are some useful pointers here. It's worth being aware of the ODI blog post 'Mapping SDK the ins and outs', which provides an overview and cross reference to the primary ODI objects and the underpinning SDKs. If there are any other specific questions let us know.

## Thursday Sep 25, 2014

### ODI 12c - Migrating from OWB to ODI - PLSQL Procedures

The OWB to ODI migration utility does a lot, but there are a few things it doesn't handle today. Here's one for our OWB customers moving to ODI who are scratching their heads on the apparent lack of support for PLSQL procedures in ODI... With a little creative work you can not only get those mappings into ODI but there is potential to dramatically improve performance (my test below improves performance by 400% and can be easily further tuned based on hardware).

This specific illustration covers the support for PLSQL procedures (not functions) in OWB and what to do in ODI. OWB took care of invoking PLSQL procedures mid-flow - it did this by supporting PLSQL row-based code for certain (not all) map designs out of the box, and PLSQL procedure invocation was one such case. The PLSQL that OWB generated was pretty much as efficient as PLSQL could be (it used bulk collect and many other best practices you didn't have to worry about as a map designer), but it was limited in the logical map support (you couldn't have a set-based operator such as join after a row-based-only operator such as a PLSQL transformation) - and it was PLSQL, not SQL.

Here we see how, with a simple pipelined, parallel-enabled table function wrapper around your PLSQL procedure call, you can capture the same design in ODI 12c and/or get the mapping migrated from OWB. I think the primary hurdle customers have is deciding on the option going forward. To solve this, we will just leverage more of the Oracle database: table functions, and parallelize the <insert your favorite word> out of it!

The mapping below calls a PLSQL procedure, and OWB generated PLSQL row-based code for this case; the target is loaded with data from the source table and the 2 output parameters of the PLSQL procedure;

When you try and migrate such a mapping using the OWB to ODI migration utility, you'll get a message indicating that the map cannot be migrated using the utility. Let's see what we can do! The vast majority of mappings are set-based, generally a very small subset are row based PLSQL mappings. Let's see how this is achieved in ODI 12c.

I did a test using the generated code from OWB - no tuning, just the raw code for the above mapping - and it took 12 minutes 32 seconds to process about 32 million rows, invoke the PLSQL procedure and perform a direct path insert into the target. With my ODI 12c design using a very simple table function wrapper around the PLSQL procedure I cut the time to 3 minutes 14 seconds! Not only that, but I can easily optimize it further to better leverage the Oracle database server by quickly changing the hints - I had a 4 processor machine, so that's about as much as I could squeeze out of it.

The table function wrapper that calls the PLSQL procedure is very simple. The call to MYPROC below is where I invoke the PLSQL procedure; I use the object instance in the call and pipe the data once the call has been made;

```sql
create or replace function TF_CALL_PROC(input_values sys_refcursor)
  return TB_I_OBJ pipelined parallel_enable(PARTITION input_values BY ANY) AS
  out_pipe i_obj := i_obj(null,null,null,null,null,null,null,null,null);
BEGIN
  LOOP
    FETCH input_values INTO out_pipe.prod_id, out_pipe.cust_id, out_pipe.time_id,
      out_pipe.channel_id, out_pipe.promo_id, out_pipe.quantity_sold, out_pipe.amount_sold;
    EXIT WHEN input_values%NOTFOUND;
    -- invoke the PLSQL procedure for each fetched row, then pipe the row out
    MYPROC(out_pipe.prod_id, out_pipe.status, out_pipe.info);
    PIPE ROW(out_pipe);
  END LOOP;
  CLOSE input_values;
  RETURN;
END;
```

This is a very simple table function (with enough metadata you could generate it); it uses table function pipelining and parallel capabilities, so I will be able to parallelize all aspects of the generated statement and really leverage the Oracle database. The table function above uses the types below. It has to project all of the data used downstream - whereas OWB computed this for you, here you will have to do it yourself.

```sql
create or replace type I_OBJ as object (
  prod_id number,
  cust_id number,
  time_id date,
  channel_id number,
  promo_id number,
  quantity_sold number(10,2),
  amount_sold number(10,2),
  status varchar2(10),
  info number
);

create or replace type TB_I_OBJ as table of I_OBJ;
```

The physical design in ODI has the PARALLEL(4) hints on my source and target and I enable parallel DML using the begin mapping command within the physical design.
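To make this concrete, here is a sketch of the kind of statement such a physical design produces. The column names come from the I_OBJ type above; SALES_SRC and SALES_TGT are illustrative names, not from the original mapping:

```sql
-- Enabled via the begin mapping command in the ODI physical design
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 4) */ INTO SALES_TGT t
SELECT prod_id, cust_id, time_id, channel_id, promo_id,
       quantity_sold, amount_sold, status, info
FROM   TABLE(TF_CALL_PROC(CURSOR(
         SELECT /*+ PARALLEL(s, 4) */
                prod_id, cust_id, time_id, channel_id, promo_id,
                quantity_sold, amount_sold
         FROM   SALES_SRC s)));
```

Because the table function is declared parallel_enable with PARTITION ... BY ANY, the database can fan rows out across parallel slaves on both the select and the insert side.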

You can see in the above image that when using Oracle KMs there are options for hints on sources and targets; you can easily set these to take advantage of the hardware resources - tweak them to pump up the performance throughput!

To summarize, you can see how we can leverage the database to really speed the process up (remember the 400%!). We can still capture the design in ODI, and on top of that, unlike in OWB, this approach lets us carry on doing arbitrary data flow transformations after the table function component that invokes our PLSQL procedure - so we could join, lookup, etc. Let me know what you think; I'm a huge fan of table functions and think they afford a great extensibility capability.

## Wednesday Aug 06, 2014

### OWB to ODI 12c Migration in action

The OWB to ODI 12c migration utility provides an easy to use on-ramp to Oracle's strategic data integration tool. The utility was designed and built by the same development group that produced OWB and ODI.

Here's a screenshot from the recording below showing a project in OWB and what it looks like in ODI 12c;

There is a useful webcast that you can play and watch the migration utility in action. It takes an OWB implementation and uses the migration utility to move into ODI 12c.

http://oracleconferencing.webex.com/oracleconferencing/ldr.php?RCID=df8729e0c7628dde638847d9511f6b46

It's worth having a read of the following OTN article from Stewart Bryson which gives an overview of the capabilities and options OWB customers have moving forward.
http://www.oracle.com/technetwork/articles/datawarehouse/bryson-owb-to-odi-2130001.html

Check it out and see what you think!

## Monday Jul 21, 2014

### ODI 12.1.3: New Model and Topology Objects Wizard

Oracle Data Integrator 12.1.3 introduces a new wizard to quickly create Models. This wizard will not only help you create your Models more easily, if needed it will also create the entire required infrastructure in the ODI Topology: Data Servers, Physical and Logical Schemas.

In this blog article we will go through an example together and add a new Model to access the HR sample schema of an Oracle database. You can follow through this example using the ODI Getting Started VirtualBox image which is available here: http://www.oracle.com/technetwork/middleware/data-integrator/odi-demo-2032565.html

The ‘New Model and Topology Objects’ wizard can be accessed from the Models menu as shown below:

The wizard opens up and displays default settings. From there we can customize our objects before they actually get created in the ODI repositories.

In this example we want to access tables stored in the HR schema of an Oracle database so we name the Model ORACLE_HR. Note that the Logical Schema as well as the Schema and Work Schema fields in the Physical Schema section automatically default to the Model name:

Next we will give a new name to our Data Server: LINUX_LOCAL_ORACLE since we are connecting to a local Oracle database running on a Linux host.

We then fill in the User, Password and URL fields to reflect the environment we are in. To access the HR schema we use the ODI Staging area user which is ODI_STAGING. This is a best practice and it also ensures that the Work Schema field automatically gets updated with the right value for the Staging Area.

Note that the wizard also allows us to link a new Model to an existing Data Server.

Finally we click on Test Connection to make sure the parameters are correct.

Then we update the Schema field using the drop-down list to point to the HR schema at the database level.

Our Model is now fully set up, we click on OK to have it created along with its related Topology objects. The Model ORACLE_HR opens up allowing us to reverse-engineer the tables using the Selective Reverse-Engineering tab:

We pick all the tables and click on the Reverse Engineer button to start this process and save the Model at the same time. A new Model called ORACLE_HR was created as shown below as well as the appropriate objects in the Topology:

## Thursday Jul 17, 2014

### ODI 12c and Eloqua using DataDirect Cloud JDBC Driver

Sumit Sarkar from Progress DataDirect just posted a great blog on connecting to Eloqua in ODI 12c using the DataDirect Cloud JDBC driver. You can find the article here: http://blogs.datadirect.com/2014/07/oracle-data-integrator-etl-connectivity-eloqua-jdbc-marketing-data.html

The steps described in this tutorial also apply to other datasources supported by the DataDirect Cloud JDBC driver.

## Wednesday Jun 04, 2014

There's a complete soup to nuts post from Deepak Vohra on the Oracle community pages of ToadWorld on loading a fixed length file into the Oracle database. This post is interesting from a few fronts; firstly this is the out of the box experience, no specialized KMs so just basic integration from getting the software installed to running a mapping. Also it demonstrates fixed length file integration including how to use the ODI UI to define the fields and pertinent properties.

Check the blog post out below....

Hopefully you also find this useful; many thanks to Deepak for sharing his experiences. You could take this example further and illustrate how to load into Oracle using the LKM File to Oracle (External Table) knowledge module, which will perform much better and also lets you leverage features such as wildcards for loading many files into the 12c database.

## Wednesday May 21, 2014

### Zero Downtime Consolidation to Oracle Database 12c: Webcast Recap

As companies move to private cloud implementations to increase agility and reduce costs, and start their plans with consolidating their databases, a major question arises: how do we move our systems without causing major disruption to the business? In the end, the most important systems that need to move to this new architecture are the ones that cannot tolerate extensive downtime for any reason. In last week’s webcast “Zero Downtime Consolidation to Oracle Database 12c with Oracle GoldenGate 12c” we tackled this specific dilemma that IT organizations face. In case you missed it last week, the webcast is now available on demand via the link above.

In the webcast, we started by discussing the benefits companies achieve when they consolidate to a private database cloud, the critical database capabilities for delivering database as a service, and a quick overview of the Oracle Database 12c features that enable private database cloud deployments. Nick Wagner, director of product management for Oracle Database High Availability, talked about the new Global Data Services feature in Oracle Database Maximum Availability Architecture (MAA) as well. Global Data Services helps organizations handle the challenges involved in managing multiple database replicas across different sites. The product manages database replicas of Active Data Guard and Oracle GoldenGate, and offers workload balancing to maximize performance as well as intelligent handling of outages using all available replicas. Nick also discussed the database features available for upgrading, migrating, and consolidating into Oracle Database 12c and when it makes sense to use Oracle GoldenGate for these efforts.

After Nick, Chai Pydimukkala, senior director of product management for GoldenGate, discussed some of the key new features of Oracle GoldenGate 12c for Oracle Database and its expanding heterogeneity, which supports consolidation from all major databases. Chai continued his presentation with GoldenGate’s solution architecture for zero downtime consolidation and explained how it minimizes risk with a failback option as well as a phased migration option that runs old and new environments concurrently.

Chai also gave an example of how Oracle Active Data Guard customers can seamlessly use GoldenGate for a zero downtime migration without affecting the standby system they rely on for ongoing high availability and disaster recovery. After the customer examples we had a long Q&A section. For the full presentation and Q&A with the experts, you can watch the on-demand version of the webcast via the following link.

Zero Downtime Consolidation to Oracle Database 12c with Oracle GoldenGate 12c

In the rest of this blog post, I provide answers to some of the questions we could not take during the live event.

1) Additional difference in features between GoldenGate 11g and 12c?

Please take a look at our previous blog posts below where we discussed many of the new features of GoldenGate 12c.

· Advanced Replication for The Masses – Oracle GoldenGate 12c for the Oracle Database.

· GoldenGate 12c - What is Coordinated Delivery?

· GoldenGate 12c - Coordinated Delivery Example

· Oracle GoldenGate 12c - Announcing Support for Microsoft and IBM

You can also listen to our podcast: Unveiling Oracle GoldenGate 12c

More comprehensive information can be found in the webcast replay.

2) Does GoldenGate have a term license that customer can purchase for migration purposes?

We do offer a 1-year term license that customers can use for this purpose.

3) Is there an Oracle tutorial on configuring and testing a zero downtime upgrade using GoldenGate?

Here is a high-level outline/tutorial for the migration. It requires some knowledge of how to set up and configure GoldenGate.

Oracle GoldenGate Best Practice: Oracle Migrations and Upgrades 9i and up

A more broad discussion of this topic can be found in the following white paper. Zero Downtime Database Upgrades using Oracle GoldenGate 12c

4) If the number of updates to the database is very high, does GoldenGate give some kind of compression to transfer the Trail Files and be up to date with the changes?

Oracle GoldenGate does support compressing change data in the Trail File to improve transfer throughput from source to target, via the COMPRESS and COMPRESSTHRESHOLD options of the RMTHOST parameter.

```
RMTHOST
{, MGRPORT port | PORT port}
[, COMPRESS]
[, COMPRESSTHRESHOLD]
[, ENCRYPT algorithm [KEYNAME key_name]]
[, PARAMS collector_parameters]
[, STREAMING | NOSTREAMING]
[, TCPBUFSIZE bytes]
[, TCPFLUSHBYTES bytes]
[, TIMEOUT seconds]
```

COMPRESS

This option is valid for online or batch Capture processes and any Oracle GoldenGate initial-load method that uses Trails. Compresses outgoing blocks of records to reduce bandwidth requirements. Oracle GoldenGate decompresses the data before writing it to the Trail. COMPRESS typically results in compression ratios of at least 4:1 and sometimes better. However, compressing data can consume CPU resources.

COMPRESSTHRESHOLD

This option is valid for online or batch Capture processes and any Oracle GoldenGate initial-load method that uses trails. It sets the minimum block size for which compression occurs. Valid values are from 0 through 28000. The default is 1,000 bytes.
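Put together in an Extract or data pump parameter file, the options look like this (the host name, manager port and threshold value here are illustrative, not prescriptive):

```
RMTHOST targethost, MGRPORT 7809, COMPRESS, COMPRESSTHRESHOLD 512
```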

5) Can the Source system utilize 12C GoldenGate for DML/DDL capture and target system utilize GoldenGate 11g?

Yes. You can replicate from a higher version of GoldenGate to a lower version using the FORMAT RELEASE option.

FORMAT RELEASE <major>.<minor>
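For example, to have a 12c Capture write a trail that an 11g GoldenGate instance can read, FORMAT RELEASE is specified on the trail definition (the trail path and release number below are illustrative):

```
RMTTRAIL ./dirdat/rt, FORMAT RELEASE 11.2
```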

In the on demand replay of the webcast you will find more questions answered by our product management experts. So make sure to watch the on demand webcast as well.

## Thursday May 15, 2014

### Oracle Data Integrator Webcast Archives

Have you missed some of our Oracle Data Integrator (ODI) Product Management Webcasts?

Don’t worry – we do record and post these webcasts for your viewing pleasure. Recent topics include Oracle Data Integrator (ODI) and Oracle GoldenGate Integration, BigData Lite, the Oracle Warehouse Builder (OWB) Migration Utility, and the Management Pack for Oracle Data Integrator (ODI), along with various other topics focused on Oracle Data Integrator (ODI) 12c. We run these webcasts monthly, so please check back regularly.

You can find the Oracle Data Integrator (ODI) Webcast Archives here.

And for a bit more detail:

The webcasts are publicized on the ODI OTN Forum if you want to view them live.  You will find the announcement at the top of the page, with the title and details for the upcoming webcast.

Thank you – and happy listening!

## Wednesday Apr 30, 2014

### ODI - Who Changed What and When? Search!

Cell phones ringing, systems with problems - what's going on? How do you diagnose it? Sound like a common problem? Some ODI runtime object failed in a production run; what's the first thing to do? Be suspicious. Be very suspicious. I have lost count of the number of times I have heard 'I changed nothing' over the years. The ODI repository has audit information for all objects, so you can see the created date and user for objects as well as the updated date and user. This lets you at least identify whether an object was updated.

Firstly you can check the object you are suspicious of; you can do this in the studio, the operator navigator or the console - the information is also in the SDK and, for some adventurous folk, in the repository tables. Below you can see who updated the object and when.

You can also check other objects such as variables (were they updated, what's their values?) and load plans.

Casting the net wider - Search

There is a very useful but not very well known feature in ODI for searching for objects, invoke it from the Search -> Find ODI Object menu.

For example, if I wanted to query the objects that have been updated in the last 7 days, then this could be used - I can do this for runtime objects like scenarios and load plans as well as design objects. Below you can see I have the search scope set to 'Scenarios' - this restricts the search to scenarios, variables and scheduled executions. Here is a table summarizing each search scope and the objects within it;

| Search Scope | Object Types |
| --- | --- |
| Projects | Folder, Package, Mapping, Procedure, Knowledge Module, Variable, User Function, Sequence |
| Meta-data | Model, Diagram, Datastore, Column, Key, Reference, Condition, Sub-Model, Model Folder |
| Scenarios | Scenario, Scenario Variable, Scheduled Execution |
| Organization | Project, Folder, Model Folder, Model, Sub-Model, Load Plan and Scenario Folder |
| Procedure / Knowledge Modules | Knowledge Module, Option, Procedure, Procedure Command |
| Packages | Package, Step |
| Load Plans | Load Plan |

The search supports searching for objects by last update date, by object type (scenario, load plan, etc.), by object name and by update user. You can describe the search in the text field in the favorite criteria section.

So if you are working in a large customer deployment and have to cast the net wider, this lets you quickly find objects that have recently been updated across an ODI repository. This has many useful purposes - you could use it for identifying potentially suspicious changes, for cross-checking what has changed against what you have been told has changed, and for understanding the changes so you can then perform subsequent actions - such as exporting and importing them.

The results of this utility are returned in the result panel, which can be viewed as a hierarchical tree (including foldering) or as a linear list; you can then open/edit individual objects. Below you can see the result in a linear list. Selecting the 'Hierarchical View' tab would allow you to see any foldering.

One thing I did note is that there is no wildcard support for users, so you have to pick a specific user. It would be better if you could search for all objects updated within some date period regardless of user, and then sort the objects by date, name, etc. The audit information is also available via the SDK and the unsupported-but-useful repository tables, so you can build custom scripts on it.

When you make a search you can also save it (as an XML file) and share it across teams by opening the saved file whenever you want to reuse the search criteria. This is useful for discovering what updates have happened recently - whether to runtime objects or design artifacts.

## Tuesday Apr 29, 2014

### What's New in Oracle GoldenGate 12c for DB2 iSeries

Oracle GoldenGate 12c (12.1.2.0.1) for IBM DB2 iSeries was released on February 15, 2014. The new version delivers the following new features:

• Native Delivery (Replicat): this new feature allows users to install Oracle GoldenGate on the IBM i server and deliver the data directly to the IBM DB2 for i database. In previous releases, users had to install Oracle GoldenGate on a remote server and apply the data to the IBM DB2 for i database via ODBC. Native delivery not only improves performance but also supports the GRAPHIC, VARGRAPHIC and DBCLOB datatypes, which are not supported in ODBC remote delivery for IBM DB2 for i.
• Schema Wildcarding: as with other databases that Oracle GoldenGate supports, schema wildcarding is now available for the IBM DB2 for i database as well.
• Coordinated Delivery (Replicat): coordinated delivery is supported in the native delivery (Replicat).
• BATCHSQL: supported on IBM i 7.1 and higher versions.

## Friday Apr 25, 2014

### What's New in Oracle GoldenGate 12c for Teradata

Oracle GoldenGate 12c (12.1.2.0.1) for Teradata was released on April 24, 2014; it is currently available for download at eDelivery (https://edelivery.oracle.com). In this release, the following new features are available:

• Capture and delivery support for Teradata 14.10. The support is based on Teradata Access Module (TAM) 13.10 with no support for the capture of the NUMBER or VARRAY datatypes.
• Delivery support for Teradata 15.0.
• Coordinated delivery (replicat): provides the ability to spawn multiple Oracle GoldenGate replicat processes from one parameter file.

Oracle GoldenGate for Teradata’s statement of direction has been updated in the Oracle GoldenGate Statement of Direction document.

The Oracle GoldenGate for Teradata Best Practice Document is updated at:

• Oracle GoldenGate Best Practice: Configuring Oracle GoldenGate for Teradata Databases (Doc ID 1323119.1)

## Friday Apr 11, 2014

### ODI 12c - Expression Hints

The ODI 12c mapping designer lets you design a mapping using components and define expressions for each of those components. As in 11g, in 12c there are hints to let ODI know how you would like to compute a physical plan for your design and generate code from it. The rules behind how some of this works are not widely known - in both 11g and 12c people second-guess how it works. There's no magic to it, so let's have a look. Underpinning the mapping are the technology definitions; it is here that datatypes, and the datatype mappings between technologies, are defined. This lets ODI be very flexible in supporting arbitrary data transformations between systems, and in how data and its datatype are mapped across heterogeneous systems.

Putting the heterogeneous nature aside, we can look at how datatypes are transformed in a simple distributed example for Oracle (the example is for demonstration; in reality the database link LKM would be used, which does no staging). The example has two columns in a source table that are both VARCHAR2(30); one column holds an ordinary string, the other holds a date. The target table has two columns that are VARCHAR2(10) and DATE. Note that one target column is shorter than its source, and the other is a DATE datatype, not a string.
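To make the datatype mapping concrete, here is a minimal sketch (in Python, not ODI-generated code) of the row transformation the target expressions would perform: a SUBSTR-style truncation to 10 characters, and a TO_DATE-style parse whose format string is my assumption for illustration.

```python
from datetime import datetime

def to_target_row(src_text, src_date_text):
    # VARCHAR2(30) -> VARCHAR2(10): SUBSTR-style truncation to 10 characters
    short_text = src_text[:10]
    # VARCHAR2(30) -> DATE: TO_DATE-style parse (the format is an assumption)
    date_value = datetime.strptime(src_date_text, "%Y-%m-%d")
    return short_text, date_value

print(to_target_row("A long product name", "2014-04-11"))
```

The point is simply that both target expressions change the column's effective datatype, which is what drives the C$ staging definitions discussed next.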

We can define the simple table to table mapping as below and define the expressions on the target table.

By default the expressions have no hint defined and will execute where the table is executed - in this case on the target system. We can see how the C$ table would be defined by previewing the DDL code in the physical design; we will see the type/DDL in the syntax of the particular technology. Below you can see the source datatype information is propagated - the length is still 30.

If we look at the target table we can see the expressions defined on it. In the example below I have selected the TGTEMP table and I can see the 2 expressions. I could change where the expression is defined for this particular physical design, but I will not do that; instead I will go back to the logical designer and set the hint there - then all of my potential physical designs leverage it. Use the 'Execute on hint' property for the attribute; you can see the different options there, and just now it has the value 'no hint'. Below I have selected 'Source' to indicate I want the SUBSTR expression executed on the source system.

After this change has been made, if you look at the physical design you will see that the datatype information on our AP is now different. Because the data has been prepared on the source, the datatype for our C$ column now takes on the definition of the target (VARCHAR2(10)).
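The rule at work here can be sketched in a few lines of Python - a deliberate simplification of the behaviour described above, not ODI's actual algorithm:

```python
def cdollar_column_type(execute_on_hint, source_type, target_type):
    """Datatype the C$ staging column takes, per the hint behaviour above."""
    if execute_on_hint == "source":
        # The expression runs on the source, so the staged data is already
        # transformed and the C$ column takes on the target's definition.
        return target_type
    # No hint (or target execution): raw source data is staged unchanged,
    # so the source datatype is propagated.
    return source_type

print(cdollar_column_type("source", "VARCHAR2(30)", "VARCHAR2(10)"))
```

In other words, the staging datatype follows wherever the data has already been transformed.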

This gives you some idea of how the hints work for expressions in a datastore. ODI 12c also has an expression component that lets you define groups of expressions. I generally think this is good for when an expression is reused within a mapping, but everyone works differently with tools, and I have heard that some people like using it for all complex expressions rather than defining them within a component such as a target table, as they can easily 'see' where complex expressions are defined. Each to their own; the good thing is that you can do whatever you like. One benefit of using the expression component is that by default ODI will push the expression component as close to the source as possible, and you can easily grab the entire component and push it to the stage, target or wherever.

The mapping below defines the expressions on the expression component, again there are no hints defined on the component or individual expressions.

When the physical design for this mapping is inspected we can see the expression component is by default on the source execution unit. This goes for other components too (filter, join etc.). In this case you can see both of the columns in the AP take on the downstream target table's datatypes (VARCHAR2(10) and DATE).

Changing the hint on the logical design for the expression component to stage will place the expression in the downstream execution unit. If I had just switched the hint to stage for the expression component, then in the physical design the expression would go in TARGET_UNIT_1. In 11g, ODI also supported a concept where the stage was different from the target. This is still available in 12c and is configured by defining what you want to execute in the stage using these hints, plus defining what the stage is (similar to 11g, except you don't have to switch tabs and the gestures are simpler). So first, define the expression to execute on the stage using that hint. Then, on the logical mapping, click on the canvas background and you will see a property named 'Staging Location Hint'; set this to the logical schema location for the staging area if you have one. By default it is not set, as the staging area is the same as the target.

Let's change this to MEMORY_ENGINE just to see what the physical design looks like. We see we now have multiple execution units and the middle one where we executed the expression component is the 'stage' executing on the MEMORY_ENGINE location.

The hints are done on the logical design. You can also hard-wire physical changes in the physical design; I will go into that in a subsequent post, but I wanted to stick to the logical hints here to demystify how this works. I hope this is some useful background - I think it will especially help ODIers coming from 11g.

## Tuesday Apr 01, 2014

### ODI 12c - Mapping Builder

A few years ago I posted a utility (see interface builder post here) to build interfaces from driver files, here I have updated it for 12c to build mappings from driver files. The example uses a tab delimited text file to control the mapping creation, but it could be easily taken and changed to drive from whatever you wanted to capture the design of the mapping.

The mapping can be as complete or incomplete as you’d like, so could just contain the objects or could be concise and semantically complete.

The control file is VERY simple and, just like ODI, requests the minimal amount of information required. The basic format is as follows;

• Directive (Column2 through Column6 hold its arguments)
• source (.....can add many)
• target
• filter
• lookup
• join (....can add many of the components above)
• mapping

So for example the control file below can define the sources, target, joins, mapping expressions etc;

• source SOURCE EMP EMP
• source SOURCE DEPT DEPT
• target TARGET_MODEL TGTEMP
• join EMP.DEPTNO = DEPT.DEPTNO AJOIN
• filter EMP.SAL > 1 EMP AFILTER
• lookup SOURCE BONUS EMP BONUS.ENAME = EMP.ENAME ALOOKUP
• mapping ENAME UPPER(EMP.ENAME)
• mapping DEPTNO ABS(DEPT.DEPTNO)
• mapping COMM ABS(BONUS.COMM)
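The directives above are easy to parse; here is a minimal Python sketch of reading such tab-delimited driver lines into records, mirroring the behaviour described in the post (unknown first columns are treated as documentation and skipped) - a sketch, not the actual OdiMappingBuilder parser:

```python
def parse_driver_lines(lines):
    """Parse tab-delimited driver-file lines into (directive, args) records.

    Lines whose first column is not a recognised directive are treated as
    free-form documentation and ignored.
    """
    directives = {"source", "target", "filter", "lookup", "join", "mapping"}
    records = []
    for line in lines:
        cols = line.rstrip("\n").split("\t")
        if cols and cols[0] in directives:
            # Drop empty trailing cells so each record holds just its arguments
            records.append((cols[0], [c for c in cols[1:] if c]))
    return records

sample = [
    "source\tSOURCE\tEMP\tEMP",
    "a documentation line that is simply ignored",
    "join\tEMP.DEPTNO = DEPT.DEPTNO\tAJOIN",
    "mapping\tENAME\tUPPER(EMP.ENAME)",
]
for rec in parse_driver_lines(sample):
    print(rec)
```

From records like these, the real builder goes on to create the mapping components via the ODI SDK.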

When executed, this generates the mapping below with the join, filter, lookup and target expressions from the file;

You should be able to join the dots between the control file sample and the mapping design above. You will need to compile and execute the code in OdiMappingBuilder;

java -classpath <cp> OdiMappingBuilder jdbc:oracle:thin:@localhost:1521:ora112 oracle.jdbc.OracleDriver ODI_MASTER mypwd WORKREP1 SUPERVISOR myodipwd DEMOS SDK DEMO1 < mymappingcontrolfile.tab

The name of the mapping to be created is passed on the command line. You can intersperse other documentation lines between the control lines, so long as the keywords in the first column don't clash with the control keywords. See the driver file below, viewed from within Excel;

Anyway some useful snippets of code for those learning the SDK (download OdiMappingBuilder here), or for those wanting to capture the design outside and generate ODI mappings. Have fun!

## Wednesday Mar 12, 2014

### ODI 12c - Data Input Experts

Back in the olde days of OWB I blogged about a few utilities (see here) that were useful for collecting user input data in custom flows; users build such flows to implement accelerators that take the mundane tasks out of common activities. In ODI you can also use Groovy's SwingBuilder, which lets you build useful dialogs very easily. I posted some examples, such as the one below for model creation in ODI, and a launchpad example;

The utilities for OWB I mentioned in the blog are just basic java classes that were invoked from OWB via tcl/jacl. These utilities are written in java and can still be used from ODI via groovy. Still as useful, still as functional. Let's see how we call them now!

The required JARs need to be put on the Groovy classpath: under the ODI IDE's Tools->Preferences option, go to ODI->System->Groovy and set the Groovy classpath to include jexpert.jar, tcljava.jar and jacl.jar. For example, I have the following, referencing the JARs from my 11gR2 database home which has the OWB code;

• D:\app\dallan\product\11.2.0\dbhome_1\owb\lib\int\jexpert.jar;D:\app\dallan\product\11.2.0\dbhome_1\owb\lib\int\tcljava.jar;D:\app\dallan\product\11.2.0\dbhome_1\owb\lib\int\jacl.jar

I can then launch the shuttle dialog for example as follows;

```groovy
import oracle.owb.jexpert.ShuttleObjects

arrayOfString = [ "PRODUCT_ID", "PRODUCT_NAME", "PRODUCT_COLOR", "PRODUCT_DESC", "PRODUCT_LONG_DESC", "CATEGORY_ID", "CATEGORY_NAME", "CATEGORY_DESCRIPTION", "SUBCATEGORY_ID", "SUBCATEGORY_NAME", "SUBCATEGORY_DESCRIPTION" ]
sels = ShuttleObjects.getselection("Select dimension levels", "Select columns to identify levels:", "Columns:", "Levels", (String[]) arrayOfString.toArray())

println sels
```

I can use the returned variable sels to do whatever ODI work I need; you can see the code above executed from within ODI and the dialog appearing with the information;

Likewise, the data entry dialog works as is: when that dialog is executed from Groovy, just like in OWB, we can display the information, let the user enter data, and then collect and action it in our Groovy using the ODI SDK;

The blog on the 12c mapping SDK here has a good SDK reference table that gives you pointers from all parts of the product into the SDK areas. It is definitely a handy one to bookmark; I often use it myself. Learn some scripting - it'll help save you and your teams a lot of time.