Wednesday Sep 17, 2014

Cloud Data Integration Sessions at Oracle OpenWorld

It’s almost time for Oracle OpenWorld, and I would like to give you a sneak peek at some of the cloud data integration sessions we are hosting this year.

On Tuesday, September 30th we will start with ‘Oracle GoldenGate and the Cloud [CON7774]’. Chai Pydimukkala and Pawan Kumar will discuss how Oracle GoldenGate can run in the cloud and be used with the Oracle Cloud.

To learn more about it, join us for this session:
Oracle GoldenGate and the Cloud - CON7774
09/30/14 (Tuesday) 12:00 PM - Moscone South - 302

Then on Thursday, October 2nd I will have the pleasure of co-presenting ‘Oracle Data Integration: A Crucial Ingredient for Cloud Integration [CON7926]’ with Sumit Sarkar from Progress DataDirect. In this session we will explain how the Oracle Data Integration solutions help businesses integrate with cloud resources such as cloud databases, applications, and cloud BI infrastructures. We will cover topics like initializing and synchronizing Oracle DBaaS and performing native E-LT transformations on a cloud data warehousing platform.

We are also very excited to be doing two live demonstrations in this session using Oracle Data Integrator and the Progress DataDirect JDBC drivers: extracting data from Salesforce.com, and integrating and transforming data on Amazon Redshift.

If you are interested in cloud integration, make sure you attend this session:
Oracle Data Integration: A Crucial Ingredient for Cloud Integration - CON7926

10/2/14 (Thursday) 9:30 AM - Moscone South - 270

Of course there are also plenty of other Oracle Data Integration sessions you should plan on attending at Oracle OpenWorld this year. For a complete list review our Focus On Data Integration website. See you there!

Tuesday Sep 09, 2014

ODI 12c - Models in Data Modeler and ODI

Ever wondered how to get your models from Oracle SQL Developer Data Modeler (SDDM) into ODI data models? The most common practice is to generate the physical DDL scripts from SDDM and execute them against the target database. Another technique is possible leveraging the ODI SDK and the SDDM SDK; that's what I will illustrate here. There is an example script posted here on the java.net site.

There is an end-to-end demo viewlet below showing the script in action; check it out:

https://blogs.oracle.com/dataintegration/resource/viewlets/odi_models_from_sddm_viewlet_swf.html

The viewlet shows the transformation script creating the ODI model from the Data Modeler design; here's a peek at the script in action:

In the viewlet you will see how I added the Groovy scripting engine as an engine for SDDM; I can then leverage my Groovy skills and the ODI SDK to build the script and provide useful capabilities.
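To give a flavor of the approach, here is a heavily simplified Groovy sketch (not the posted script) of the idea: walk the tables of the open SDDM design and create matching datastores through the ODI SDK. The SDK class names below are from the public ODI 12c SDK, but treat the constructors, the finder call, and the SDDM 'model' accessors as assumptions to verify against the posted script and the SDK javadoc; connection details are placeholders.

    // Hedged sketch: SDDM design -> ODI model datastores via the ODI SDK.
    import oracle.odi.core.OdiInstance
    import oracle.odi.core.config.OdiInstanceConfig
    import oracle.odi.core.config.MasterRepositoryDbInfo
    import oracle.odi.core.config.WorkRepositoryDbInfo
    import oracle.odi.core.config.PoolingAttributes
    import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition
    import oracle.odi.domain.model.OdiModel
    import oracle.odi.domain.model.OdiDataStore
    import oracle.odi.domain.model.OdiColumn
    import oracle.odi.domain.model.finder.IOdiModelFinder

    // Connect to the ODI repositories (URL, credentials and repository name are placeholders).
    def master = new MasterRepositoryDbInfo("jdbc:oracle:thin:@localhost:1521/orcl",
        "oracle.jdbc.OracleDriver", "ODI_MASTER", "password".toCharArray(), new PoolingAttributes())
    def odi = OdiInstance.createInstance(
        new OdiInstanceConfig(master, new WorkRepositoryDbInfo("WORKREP", new PoolingAttributes())))
    def auth = odi.securityManager.createAuthentication("SUPERVISOR", "password".toCharArray())
    odi.securityManager.setCurrentThreadAuthentication(auth)
    def txn = odi.transactionManager.getTransaction(new DefaultTransactionDefinition())

    // Find the target ODI model, assumed to exist already (e.g. created in Designer).
    def odiModel = ((IOdiModelFinder) odi.transactionalEntityManager.getFinder(OdiModel.class))
        .findByCode("ORACLE_HR")

    // 'model' is the design object SDDM exposes to its scripting engines;
    // iterate its tables and columns and create the corresponding ODI datastores.
    model.tableSet.each { table ->
        def ds = new OdiDataStore(odiModel, table.name)
        table.elements.each { col -> new OdiColumn(ds, col.name) }
        odi.transactionalEntityManager.persist(ds)
    }
    odi.transactionManager.commit(txn)

The real script adds data types, keys, and error handling; this is only the skeleton of the technique.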

This script is publicly available at the link above: take it, enhance it, and comment! Join the Oracle Data Integration community on LinkedIn and the OTN forum to learn and exchange with other members of the community.

Wednesday Aug 06, 2014

OWB to ODI 12c Migration in action

The OWB to ODI 12c migration utility provides an easy-to-use on-ramp to Oracle's strategic data integration tool. The utility was designed and built by the same development group that produced OWB and ODI.

Here's a screenshot from the recording below showing a project in OWB and what it looks like in ODI 12c;


There is a useful webcast where you can watch the migration utility in action. It takes an OWB implementation and uses the migration utility to move it into ODI 12c.

http://oracleconferencing.webex.com/oracleconferencing/ldr.php?RCID=df8729e0c7628dde638847d9511f6b46

It's worth having a read of the following OTN article from Stewart Bryson, which gives an overview of the capabilities and options OWB customers have moving forward.
http://www.oracle.com/technetwork/articles/datawarehouse/bryson-owb-to-odi-2130001.html

Check it out and see what you think!

Monday Jul 28, 2014

New Security Enhancements in ODI 12.1.3

Oracle Data Integrator now uses Advanced Encryption Standard (AES) as the standard encryption algorithm for encrypting ODI objects such as Knowledge Modules, procedures, scenarios, or actions, as well as all passwords.

You can configure the encryption algorithm and key length to meet your specific requirements. By default ODI uses AES-128, but you can also use cryptographic keys of 192 or 256 bits.
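For orientation, the Groovy snippet below shows what these key-length choices look like with the standard javax.crypto API. This is purely illustrative of the mechanism, not ODI's internal code; note that 192- and 256-bit keys may require the JCE unlimited-strength policy files on older JDKs.

    // Illustration of AES key lengths with the standard Java crypto API (not ODI internals).
    import javax.crypto.Cipher
    import javax.crypto.KeyGenerator

    def keyGen = KeyGenerator.getInstance("AES")
    keyGen.init(256)                          // 128 (ODI's default), 192 or 256 bits
    def key = keyGen.generateKey()

    def cipher = Cipher.getInstance("AES")
    cipher.init(Cipher.ENCRYPT_MODE, key)
    byte[] encrypted = cipher.doFinal("sensitive value".bytes)
    println "Encrypted ${encrypted.length} bytes with a ${key.encoded.length * 8}-bit AES key"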

Passwords and other sensitive information included in repository exports are now also encrypted and secured by an export key. This export key must be provided when importing the exported content; if it is not provided, all the sensitive information is removed from the imported object.

You can find more information about these security enhancements in the ODI 12.1.3 documentation: Advanced Encryption Standard


This is just one of the many new features added to ODI 12.1.3! You can find a complete list in the following document: Oracle Data Integrator 12c New Features Overview.

Monday Jul 21, 2014

ODI 12.1.3: New Model and Topology Objects Wizard

Oracle Data Integrator 12.1.3 introduces a new wizard to quickly create Models. This wizard will not only help you create your Models more easily; if needed, it will also create the entire required infrastructure in the ODI Topology: Data Servers, Physical and Logical Schemas.

In this blog article we will go through an example together and add a new Model to access the HR sample schema of an Oracle database. You can follow along with this example using the ODI Getting Started VirtualBox image, which is available here: http://www.oracle.com/technetwork/middleware/data-integrator/odi-demo-2032565.html

The ‘New Model and Topology Objects’ wizard can be accessed from the Models menu as shown below:

The wizard opens up and displays default settings. From there we can customize our objects before they actually get created in the ODI repositories.

In this example we want to access tables stored in the HR schema of an Oracle database so we name the Model ORACLE_HR. Note that the Logical Schema as well as the Schema and Work Schema fields in the Physical Schema section automatically default to the Model name:

Next we will give a new name to our Data Server: LINUX_LOCAL_ORACLE since we are connecting to a local Oracle database running on a Linux host.

We then fill in the User, Password and URL fields to reflect the environment we are in. To access the HR schema we use the ODI Staging area user which is ODI_STAGING. This is a best practice and it also ensures that the Work Schema field automatically gets updated with the right value for the Staging Area.

Note that the wizard also allows us to link a new Model to an existing Data Server.

Finally we click on Test Connection to make sure the parameters are correct.
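Incidentally, Test Connection is doing a plain JDBC connect with the Data Server's URL, user, and password, so if you prefer you can pre-verify the same values outside ODI with a few lines of Groovy. Hostname, service name, and credentials below are placeholders:

    // Quick JDBC sanity check of the wizard's URL/user/password (placeholders).
    import groovy.sql.Sql

    def sql = Sql.newInstance("jdbc:oracle:thin:@localhost:1521/orcl",
                              "ODI_STAGING", "password", "oracle.jdbc.OracleDriver")
    // List the HR tables the wizard will later offer for reverse-engineering.
    sql.eachRow("select table_name from all_tables where owner = 'HR'") {
        println it.table_name
    }
    sql.close()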


Then we update the Schema field using the drop-down list to point to the HR schema at the database level.

Our Model is now fully set up; we click on OK to have it created along with its related Topology objects. The Model ORACLE_HR opens up, allowing us to reverse-engineer the tables using the Selective Reverse-Engineering tab:

We pick all the tables and click on the Reverse Engineer button to start this process and save the Model at the same time. A new Model called ORACLE_HR is created as shown below, along with the appropriate objects in the Topology:


Thursday Jul 17, 2014

ODI 12c and Eloqua using DataDirect Cloud JDBC Driver

Sumit Sarkar from Progress DataDirect just posted a great blog on connecting to Eloqua in ODI 12c using the DataDirect Cloud JDBC driver. You can find the article here: http://blogs.datadirect.com/2014/07/oracle-data-integrator-etl-connectivity-eloqua-jdbc-marketing-data.html


The steps described in this tutorial also apply to other data sources supported by the DataDirect Cloud JDBC driver.
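As a rough sketch, connecting from Groovy (or any JDBC client) looks like the snippet below. The driver class name and URL format shown here are assumptions on my part; take the exact values from the DataDirect Cloud documentation and Sumit's article.

    // Hedged sketch of a DataDirect Cloud JDBC connection; driver class and
    // URL format are assumptions; verify against the DataDirect Cloud docs.
    import groovy.sql.Sql

    def sql = Sql.newInstance(
        "jdbc:datadirect:ddcloud://service.datadirectcloud.com;databaseName=eloqua", // assumed URL format
        "myDDCloudUser", "myDDCloudPassword",
        "com.ddtek.jdbc.ddcloud.DDCloudDriver")                                      // assumed class name
    // Eloqua entities are exposed as relational tables through the driver.
    sql.eachRow("select * from Contact") { println it }
    sql.close()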

Sunday Jul 13, 2014

New Big Data Features in ODI 12.1.3

Oracle Data Integrator (ODI) 12.1.3 extends its Hadoop support with a number of exciting new capabilities. The new features include:

  • Loading of RDBMS data into and out of Hadoop using Sqoop
  • Support for Apache HBase databases
  • Support for Hive append functionality
With these new additions ODI provides full connectivity to load, transform, and unload data in a Big Data environment.

The diagram below shows all ODI Hadoop knowledge modules, with the KMs added in ODI 12.1.3 shown in red.

Sqoop support

Apache Sqoop is designed for efficiently transferring bulk data between Hadoop and relational databases such as Oracle, MySQL, Teradata, DB2, and others. Sqoop operates by creating multiple parallel MapReduce processes across a Hadoop cluster, connecting to the external database, and transferring data to or from Hadoop storage in a partitioned fashion. Data can be stored in Hadoop using HDFS, Hive, or HBase. ODI adds two knowledge modules: IKM SQL to Hive-HBase-File (SQOOP) and IKM File-Hive to SQL (SQOOP).

Loading from and to Sqoop in ODI is straightforward: create a mapping with the database source and Hadoop target (or vice versa) and apply any necessary transformation expressions.

In the physical design of the mapping, make sure to set the LKM of the target to LKM SQL Multi-Connect.GLOBAL and choose a Sqoop IKM, such as IKM SQL to Hive-HBase-File (SQOOP). Change the MapReduce output directory IKM property MAPRED_OUTPUT_BASE_DIR to an appropriate HDFS directory. Review all other properties and tune as necessary. With these simple steps you should be able to perform a quick Sqoop load.
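For orientation, the snippet below shows the kind of standalone Sqoop invocation these IKMs assemble and run for you from the topology and KM options; you normally never write it by hand, and the connection values shown are placeholders.

    // The flavor of Sqoop command the SQOOP IKMs generate (values are placeholders).
    def cmd = ["sqoop", "import",
               "--connect", "jdbc:oracle:thin:@dbhost:1521/orcl",
               "--username", "SCOTT", "--password", "tiger",
               "--table", "EMP",
               "--hive-import", "--hive-table", "emp",   // land the data in Hive
               "--target-dir", "/user/odi/work/emp",     // HDFS staging dir (cf. MAPRED_OUTPUT_BASE_DIR)
               "--num-mappers", "4"]                     // degree of load parallelism
    def proc = cmd.execute()
    proc.waitFor()
    println proc.text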

For more information please review the great ODI Sqoop article from Benjamin Perez-Goytia, or read the ODI 12.1.3 documentation about Sqoop.

HBase support

ODI adds support for HBase as a source and target. HBase metadata can be reverse-engineered using the RKM HBase knowledge module, and HBase can be used as the source or target of a Hive transformation using LKM HBase to Hive and IKM Hive to HBase. The Sqoop KMs also support HBase as a target for loads from a database.

For more information please read the ODI 12.1.3 documentation about HBase.

Hive Append support

Prior to Hive 0.8 there was no direct way to append data to an existing table; earlier Hive KMs emulated such logic by renaming the existing table and concatenating old and new data into a new table with the prior name. This emulated append operation caused major data movement, particularly when the target table was large.

Starting with version 0.8, Hive supports appending. All ODI 12.1.3 Hive KMs have been updated to use the append capability by default, but provide backward compatibility with the old behavior through the KM property HIVE_COMPATIBLE=0.7.
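To make the difference concrete, here is a hedged Groovy sketch over the HiveServer2 JDBC driver contrasting the HiveQL a KM can now emit with the old emulation; host, credentials, and table names are placeholders.

    // Contrast of true append vs. the pre-0.8 emulation, via HiveServer2 JDBC.
    import groovy.sql.Sql

    def hive = Sql.newInstance("jdbc:hive2://hadoop-host:10000/default",
                               "hive", "", "org.apache.hive.jdbc.HiveDriver")

    // Hive >= 0.8 (the KM default): a true append, no table rewrite.
    hive.execute("INSERT INTO TABLE sales_target SELECT * FROM sales_staging")

    // HIVE_COMPATIBLE=0.7 behavior, conceptually: rebuild the whole target, e.g.
    // INSERT OVERWRITE TABLE sales_target SELECT ... from the old plus new data.
    hive.close()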

Conclusion

ODI 12.1.3 provides an optimal and easy-to-use way to perform data integration in a Big Data environment. ODI utilizes the processing power of the data storage and processing environment rather than relying on a proprietary transformation engine. This core "E-LT" philosophy has its perfect match in a Hadoop environment, where ODI can provide unique value through a native and easy-to-use data integration environment.

Wednesday Jul 02, 2014

Learn more about ODI and Apache Sqoop

The ODI A-Team just published a new article about moving data from relational databases into Hadoop using ODI and Apache Sqoop. Check out the blog post here: Importing Data from SQL databases into Hadoop with Sqoop and Oracle Data Integrator (ODI)

Wednesday Jun 04, 2014

ODI 12c - Loading Files into Oracle, community post from ToadWorld

There's a complete soup-to-nuts post from Deepak Vohra on the Oracle community pages of ToadWorld on loading a fixed-length file into the Oracle database. This post is interesting on a few fronts: first, it is the out-of-the-box experience, with no specialized KMs, just basic integration from getting the software installed to running a mapping. It also demonstrates fixed-length file integration, including how to use the ODI UI to define the fields and pertinent properties.

Check the blog post out below:

http://www.toadworld.com/platforms/oracle/w/wiki/10935.loading-text-file-data-into-oracle-database-12c-with-oracle-data-integrator-12c.aspx

Hopefully you also find this useful; many thanks to Deepak for sharing his experiences. You could take this example further and illustrate how to load into Oracle using the LKM File to Oracle via External Table knowledge module, which performs much better and also lets you leverage things like wildcards for loading many files into the 12c database.

Friday May 30, 2014

Looking for Cutting-Edge Data Integration: 2014 Excellence Awards

2014 Oracle Excellence Awards Data Integration

It is nomination time!!!

This year's Oracle Fusion Middleware Excellence Awards will honor customers and partners who are creatively using various products across Oracle Fusion Middleware. Think you have something unique and innovative with one or a few of our Oracle Data Integration products?

We would love to hear from you! Please submit today.

The deadline for the nomination is June 20, 2014.

What you win:

  • An Oracle Fusion Middleware Innovation trophy

  • One free pass to Oracle OpenWorld 2014

  • Priority consideration for placement in Profit magazine, Oracle Magazine, or other Oracle publications and press releases

  • Oracle Fusion Middleware Innovation logo for inclusion on your own Website and/or press release

Let us reminisce a little…

For details on the 2013 Data Integration Winners:

Royal Bank of Scotland’s Market and International Banking and The Yalumba Wine Company, check out this blog post: 2013 Oracle Excellence Awards for Fusion Middleware Innovation… and the Winners for Data Integration are…

and for details on the 2012 Data Integration Winners:

Raymond James and Morrisons, check out this blog post: And the Winners of Fusion Middleware Innovation Awards in Data Integration are… 

Now to view the 2013 Winners (for all categories).

We hope to honor you!

Here's what you need to do: 

Click here to submit your nomination today.  And just a reminder: the deadline to submit a nomination is 5pm Pacific Time on June 20, 2014.

Thursday May 15, 2014

Oracle Data Integrator Webcast Archives

Have you missed some of our Oracle Data Integrator (ODI) Product Management Webcasts?

Don’t worry – we record and post these webcasts for your viewing pleasure. Recent topics include Oracle Data Integrator (ODI) and Oracle GoldenGate integration, BigDataLite, the Oracle Warehouse Builder (OWB) Migration Utility, and the Management Pack for Oracle Data Integrator (ODI), along with various other themes focused on Oracle Data Integrator (ODI) 12c. We run these webcasts monthly, so please check back regularly.

You can find the Oracle Data Integrator (ODI) Webcast Archives here.

And for a bit more detail:

The webcasts are publicized on the ODI OTN Forum if you want to view them live.  You will find the announcement at the top of the page, with the title and details for the upcoming webcast.

Thank you – and happy listening!

Monday May 12, 2014

Check it out – BI Apps 11.1.1.8.1 is now available!

As of May 8, 2014, Oracle Business Intelligence (BI) Applications 11.1.1.8.1 is available on the Oracle Software Delivery Cloud (eDelivery), and on the Oracle BI Applications OTN page. This is the second major release on the 11g code line leveraging the power of Oracle Data Integrator (ODI), and certified with the latest version of Oracle BI Foundation 11.1.1.7. For more details on this release and what’s new – check it out!

Friday May 02, 2014

3 Key Practices For Using Big Data Effectively for Enhanced Customer Experience

As organizations focus on differentiating their offerings via superior customer experience, they are looking into ways to leverage big data in this effort. A couple of weeks ago my colleague Pete Schutt and I hosted a webcast on this very topic: Turning Big Data into Real-Time Action for a Greater Customer Experience

In this webcast we talked about 3 key practices for making the most of big data to improve customer experience:

  1. Know your customer by leveraging big data: Leverage all relevant data (internal and external; structured, semi-structured, and unstructured) to understand and predict customers' needs and preferences accurately.
  2. Capture, analyze, and act on data fast to create value: Achieve accurate insight and take the right action fast, so your action is still relevant to the customer's situation.
  3. Empower employees and systems with insight and smarter decisions: In this step you ensure that the capability to act right and fast is not limited to a few in the organization, but extends to everyone and every system that interacts with and influences the customer experience.


After explaining why these practices are critical to improving customer experience, we discussed Oracle’s complete big data analytics and management platform, as well as the specific products and architectural approaches to execute on these 3 key areas. We focused particularly on data integration for fast and timely data acquisition and business analytics for real-time insight and action, and how they fit together in a real-time analytics architecture.

You can watch this webcast now on demand via the link below:

Turning Big Data into Real-Time Action for a Greater Customer Experience

During this webcast we received many great questions; I have provided a few of them below along with the answers.

Is real-time action related to the Internet of Things?

Yes. More physical things will be connected to the internet, often wirelessly with RFID tags or other sensors and Java, to record where they are and what they are doing (or not doing). The IoT becomes more practical by automating the information process from capture to analysis to appropriate and immediate action.

What does Oracle have for real-time mobile analytics?

Oracle BI Mobile App Designer empowers business users to easily create interactive analytical applications on any device without writing a single line of code, and to take action and respond to events in the context of their day-to-day business activities.

Can these real-time systems be managed by business users?

Yes, you need the agility for business owners to be able to respond, experiment, and adapt in real time as the environment or consumer behavior changes. The systems have to be intuitive enough for users with the business content and context, who can easily visualize, understand, and change the patterns they're looking at and the rules that are being enforced.

Can the real-time systems use other statistical models or algorithms?

Yes. Oracle Advanced Analytics offers an enterprise version of R, and Oracle RTD can source and publish scores from other advanced analytical models such as R, SAS, or SPSS.

Where do we get more information about ODI for big data?

You can start with the Oracle Data Integrator Application Adapter for Hadoop. Also take a look at the Oracle BigDataLite Virtual Machine, a pre-built environment to get you started, reflecting the core software of Oracle's Big Data Appliance 2.4. BigDataLite is a VirtualBox VM that contains a fully configured Cloudera Hadoop distribution (CDH 4.5), an Oracle Database 12c, Oracle's Big Data Connectors, Oracle Data Integrator 12.1.2, and other software. You can use this environment to see ODI 12c in action integrating big data with Oracle Database using ODI's declarative graphical design, efficient E-LT loads, and Knowledge Modules designed to optimize big data integration.

For GoldenGate, can a target be something other than a database, e.g., a queue?

Yes, GoldenGate can deliver database changes into JMS message queues and topics, as well as to flat files. The Oracle GoldenGate Application Adapters would need to be used for those use cases. For low-impact real-time data integration into Hadoop systems, customers will need to use the Java Adapter within the same GoldenGate Application Adapters license as well.

What other data warehouses does Oracle support for real-time data integration?

Oracle's data integration offering is heterogeneous for both sources and targets. Both Oracle Data Integrator and Oracle GoldenGate work with non-Oracle data warehouses including Teradata, DB2, Netezza, and Greenplum.

I invite you to watch this webcast on demand to hear the details of our solution discussion and the Q&A with the audience. For more information on big data integration and analytics, you can review Bridging Two Worlds: Big Data and Enterprise Data and Big Data @ Work: Turning Customer Interactions into Opportunities.



Wednesday Apr 30, 2014

ODI - Who Changed What and When? Search!

Cell phones ringing, systems with problems, what's going on? How do you diagnose it? Sound like a common problem? Some ODI runtime object failed in a production run; what's the first thing to do? Be suspicious. Be very suspicious. I have lost count of the number of times I have heard 'I changed nothing' over the years. The ODI repository has audit information for all objects, so you can see the created date and user for objects as well as the updated date and user. This lets you at least identify whether an object was updated.

First, check the object you are suspicious of; you can do this in the Studio, the Operator Navigator, or the Console. The information is also available in the SDK and, for some adventurous folk, in the repository tables. Below you can see who updated the object and when.

You can also check other objects such as variables (were they updated, and what are their values?) and load plans.

Casting the net wider - Search

There is a very useful but not very well-known feature in ODI for searching for objects; invoke it from the Search -> Find ODI Object menu.

For example, if I wanted to find the objects that have been updated in the last 7 days, this search could be used; I can do this for runtime objects like scenarios and load plans as well as design objects. Below you can see I have set the search scope to 'Scenarios', which restricts the search to scenarios, variables, and scheduled executions. Here is a table summarizing each search scope and the objects within it:

Search Scope – Object Types
  Projects: Folder, Package, Mapping, Procedure, Knowledge Module, Variable, User Function, Sequence
  Meta-data: Model, Diagram, Datastore, Column, Key, Reference, Condition, Sub-Model, Model Folder
  Scenarios: Scenario, Scenario Variable, Scheduled Execution
  Organization: Project, Folder, Model Folder, Model, Sub-Model, Load Plan and Scenario Folder
  Procedure / Knowledge Modules: Knowledge Module, Option, Procedure, Procedure Command
  Packages: Package, Step
  Load Plans: Load Plan

The search supports searching by last update date, by object type (scenario, load plan, etc.), by object name, and by update user. You can describe the search in the text field in the favorite criteria section.


So if you are working in a large customer deployment and have to cast the net wider, this lets you quickly find objects that have recently been updated across an ODI repository. This can serve many useful purposes: identifying potentially suspicious changes, cross-checking what has changed against what you have been told has changed, and understanding the changes so you can perform subsequent actions, such as exporting and importing them.

The results of this utility are returned in the result panel, which can be viewed as a hierarchical tree including folders or as a linear list; you can then open and edit individual objects. Below you can see the results in a linear list. Selecting the 'Hierarchical View' tab would allow you to see any folders.


One thing I did note is that there is no wildcard support for users, so you have to pick a specific user. It would be better if you could search for all objects updated within some date period regardless of user, and then sort the objects by date, name, and so on. The audit information is also available via the SDK and the unsupported but useful repository tables, in order to build custom scripts.
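As an example of such a custom script, the Groovy query below pulls "what changed in the last 7 days" straight from the work repository. The table and column names (SNP_SCEN, SCEN_VERSION, LAST_USER, LAST_DATE) are recalled from the repository layout and are not a supported API, so verify them against your release before relying on this.

    // Hedged sketch: query recent scenario changes from the (unsupported)
    // work repository tables; verify table/column names for your ODI release.
    import groovy.sql.Sql

    def repo = Sql.newInstance("jdbc:oracle:thin:@localhost:1521/orcl",
                               "ODI_WORK", "password", "oracle.jdbc.OracleDriver")
    repo.eachRow("""select scen_name, scen_version, last_user, last_date
                    from snp_scen
                    where last_date > sysdate - 7
                    order by last_date desc""") { row ->
        println "${row.scen_name} v${row.scen_version} updated by ${row.last_user} on ${row.last_date}"
    }
    repo.close()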

When a search is made you can also save it (as an XML file) and share it across teams by opening the saved file whenever you wish to reuse the search criteria. This is useful for discovering what updates have happened recently; whether it is runtime objects or design artifacts, you can use this.

Friday Apr 25, 2014

Long Running Jobs in Oracle Business Intelligence Applications (OBIA) and Recovery from Failures

Written by Jayant Mahto, Oracle Data Integrator Product Management

In Oracle Business Intelligence Applications 11.1.1.7.1 (OBIA), the Data Warehouse load is performed using Oracle Data Integrator (ODI). In ODI, using packages and load plans, one can create quite a complex load job that kicks off many scenarios in a coordinated fashion. This complex load job may run for a long time, may fail before completing successfully, and will then require a restart to recover from the failure and complete.

This blog post uses the complex load plan defined in Oracle Business Intelligence Applications 11.1.1.7.1 (OBIA) to illustrate the method of recovery from failures. Similar methods can be used to recover complex load plans defined independently in Oracle Data Integrator (ODI). Note that this post does not go into the details of troubleshooting a failed load plan; it only covers the different restart parameters that affect the behavior of a restarted job.

Failures can happen for the following reasons:

  • Access failure – source or target database down, network failure, etc.
  • Agent failure.
  • Problem with the database – such as running out of space or some other DB-related issue.
  • Data-related failure – exceptions not caught gracefully, such as a null in a NOT NULL column.

It is important to find out the reason for the failure and address it before attempting to restart the load plan; otherwise the same failure may happen again. In order to recover from the failure successfully, the recovery parameters in the load plan steps need to be selected carefully. These parameters are selected by the developers at design time of the load plan. The goal is to make restarts robust enough that an administrator can restart the load plan without knowing the details of the failed steps. This is why it is the developer's responsibility to select restart parameters for the load plan steps in a way that guarantees the correct set of steps will be re-run during restart, so that data integrity is maintained.

In the case of OBIA, the generated load plans have appropriate restart parameters for the out-of-the-box steps. If you are adding custom steps, you need to choose similar restart parameters for them.

Now let us look at a typical load plan and the restart parameters at various steps.

Restart of a serial load plan step:


The SDE Dimension Group step highlighted above is a serial step. Let us say the load plan failed when running the 3 SDE Dims GEO_DIM step. Since this is a serial step and it has been set to “Restart from Failure”, on restart the load plan would start from 3 SDE Dims GEO_DIM only and not run 3 SDE Dims USER_DIM again. This parameter is widely used in the OBIA serial steps.

The other restart parameter for a serial step is “Restart all children”. This causes all child steps to be re-run during restart, even if only one failed and the others succeeded. This parameter can be useful in some cases; the developers decide when to use it.

Restart of a parallel load plan step:


The Workforce Dependant Facts step highlighted above is a parallel step with the restart parameter set to “Restart from failed children”. It means all 5 parallel steps under it are kicked off in parallel (subject to free sessions being available). Now let us say 2 of those 5 steps completed (indicated by the green boxes above) and then the load plan failed. When the load plan is restarted, all the steps that did not complete or that failed are started again (in this example Learning Enrollment Fact, Payroll Facts, and Recruitment Facts). This parameter is widely used in the OBIA parallel steps.

The other restart parameter for a parallel step is “Restart all children”. This causes all child steps to be re-run during restart, even if only one failed and the others succeeded. This parameter can be useful in some cases; the developers decide when to use it.

Restart of the scenario session:

At the lowest level of any load plan are the scenario steps. While the parent steps (serial or parallel) are used to set dependencies, the scenario steps are what finally load the tables. A scenario step in turn can have one or more steps (corresponding to the number of steps inside the package).

To understand the failure points and how a restart takes place, it is important to understand the structure of the session that gets created for the execution of a scenario step.

The following diagram illustrates different components in a session:


The restart parameters for the scenario steps in the load plan are:

  • Restart from a new session – This creates a new session for the failed scenario during restart and executes all the steps again.
  • Restart from a failed task – This reuses the old session for the failed scenario during restart and starts from the failed task.
  • Restart from a failed step – This reuses the old session for the failed scenario during restart and re-executes all the tasks in the failed step. This is the most common parameter used by OBIA and is illustrated below.


In the above example, scenario step 2 failed while running. It internally has 3 steps (all under the same session in the Operator log, but identified with different step numbers 0, 1, and 2 in this case). As per the OBIA standard setting, the scenario executes from the failed step, which is step number 2, Table_Maint_Proc (from substep 3, Initialize Variables, onwards, as shown in the diagram).

Note that the successful tasks such as “3 – Procedure – TABLE_MAINT_PROC – Initialize variables” will be executed again during restart since the scenario restart parameter is set to “Restart from failed step” in the Load Plan.

Summary:

OBIA has certain coding standards for setting up restart parameters, as discussed above. For serial and parallel steps, the parameters “Restart from failure” and “Restart from failed children” allow the completed steps to be skipped. For scenario steps (which actually kick off the load sessions), the restart parameter “Restart from failed step” skips the completed steps in the session and reruns all the tasks in the failed step, allowing recovery of an incomplete step.

This standard allows a hands-free approach: a failed load plan can be restarted by an administrator who has no knowledge of what the load plan is doing.


About

Learn the latest trends, use cases, product updates, and customer success examples for Oracle's data integration products, including Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality.
