
Data Governance for Migration and Consolidation

By Martin Boyd, Senior Director of Product Management

How would you integrate millions of part, customer, and supplier records from multiple acquisitions into a single JD Edwards instance?  This was the question facing National Oilwell Varco (NOV), a leading worldwide provider of components used in the oil and gas industry.  If they could not find an answer, many operating synergies would be lost, but they knew from experience that simply “moving and mapping” the data from the legacy systems into JDE was not sufficient, as the data was anything but standardized.

This was the problem described yesterday in a session at the
Collaborate Conference in Las Vegas.  The presenters were Melissa Haught
of NOV and Deepak Gupta of KPIT, their systems integrator. Together they
walked through an excellent discussion of the problem and the solution they
have developed:

The Problem:  It is first important to recognize that the data to be integrated from many different legacy systems had been created over time, to different standards, by different people, according to their different needs.  Saying it lacked standardization would be an understatement.  So how do you “govern” data that is so diverse?  How do you apply standards to it months or years after it has been created?

The Solution:  The answer is that there is no single answer, and certainly no “magic button” that will solve the problem for you.  Instead, in the case of NOV, a small team of dedicated data stewards, or specialists, works to reverse-engineer a set of standards from the data at hand.  In the case of product data, which is usually the most complex, NOV found they could actually infer rules to recognize, parse, and extract information from ‘smart’ part numbers, even across the part-numbering schemes of acquired companies.  Once these rules are created for an entity or a category and built into their Oracle Enterprise Data Quality (EDQ) platform, the data is run through the DQ process and the results are examined.  Most often this surfaces problems, which in turn suggest rule refinements.  These rule-refinement and data-quality processing steps are repeated until the result is as good as it can be.  Even then, the result is never 100% standardized and clean data: some records are always flagged into a “data dump” for future manual remediation.
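To make the idea concrete, here is a minimal sketch of that rule-based parsing in Python.  The numbering schemes, patterns, and attribute names below are illustrative assumptions rather than NOV's actual rules, and in practice this logic would be configured inside EDQ rather than hand-coded:

```python
import re

# Hypothetical 'smart' part-numbering schemes, one per legacy source.
# Each rule recognizes one scheme and extracts the attributes embedded
# in the part number. (Patterns and attributes are illustrative only.)
RULES = [
    # e.g. "VLV-0425-SS" -> category code, size code, material code
    ("legacy_a", re.compile(r"^(?P<category>[A-Z]{3})-(?P<size>\d{4})-(?P<material>[A-Z]{2})$")),
    # e.g. "P104SS25" -> family 104, material SS, size 25
    ("legacy_b", re.compile(r"^P(?P<family>\d{3})(?P<material>[A-Z]{2})(?P<size>\d{2})$")),
]

def parse_part_number(part_number):
    """Try each scheme in turn; return (scheme, attributes) or None."""
    for scheme, pattern in RULES:
        match = pattern.match(part_number)
        if match:
            return scheme, match.groupdict()
    return None  # unrecognized: candidate for a new rule or manual review

def run_dq_pass(part_numbers):
    """One pass of the iterative cycle: standardize what the current rules
    cover, and flag the rest so the stewards can refine the rules."""
    standardized, remediation_queue = [], []
    for pn in part_numbers:
        result = parse_part_number(pn)
        if result:
            scheme, attributes = result
            standardized.append({"part_number": pn, "scheme": scheme, **attributes})
        else:
            remediation_queue.append(pn)  # the "data dump" for manual remediation
    return standardized, remediation_queue

standardized, leftovers = run_dq_pass(["VLV-0425-SS", "P104SS25", "misc-part-9"])
print(len(standardized), "standardized;", len(leftovers), "flagged for remediation")
```

Each pass through a loop like this narrows the remediation queue: whatever the rules cannot yet parse becomes the input for the next round of rule refinement.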

Lessons Learned:

  • Although technology is a key enabler, it is not the whole solution.  Dedicated specialists are required to build the rules and improve them through successive iterations.
  • A ‘user-friendly’ data quality platform is essential, so that it is approachable and intuitive for the data specialists, who are not (nor should they be) programmers.
  • Rapid iteration through testing and rule development is important to keep up project momentum.  In the case of NOV, specialists request rule changes, which are implemented by KPIT resources in India, so in effect changes are made and re-run overnight, which has worked very well.

Technical Architecture:  Data is extracted from the legacy systems by Oracle Data Integrator (ODI), which also transforms the data into the right ‘shape’ for review in EDQ.  An audit team reviews the results for completeness and correctness, comparing the supplied data against the required data standards.  A secondary check is also performed in EDQ, which verifies that the data is in a valid format to be loaded into JDE.
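As an illustration of that secondary check, here is a minimal Python sketch of a “valid format for load” validation.  The field names and constraints are hypothetical, not JDE's actual item-master schema, and in NOV's case this check runs inside EDQ:

```python
# Hypothetical target-field constraints; not JDE's actual item-master
# schema. In NOV's case this check is configured in EDQ, not hand-coded.
TARGET_SCHEMA = {
    "item_number":     {"max_len": 25, "required": True},
    "description":     {"max_len": 30, "required": True},
    "unit_of_measure": {"max_len": 2,  "required": True},
}

def validate_for_load(record):
    """Return a list of violations; an empty list means the record is load-ready."""
    errors = []
    for field, rule in TARGET_SCHEMA.items():
        value = record.get(field, "")
        if not value:
            if rule["required"]:
                errors.append(f"{field}: missing required value")
        elif len(value) > rule["max_len"]:
            errors.append(f"{field}: exceeds {rule['max_len']} characters")
    return errors

record = {"item_number": "VLV-0425-SS",
          "description": "GATE VALVE 4IN SS",
          "unit_of_measure": "EA"}
print(validate_for_load(record) or "load-ready")
```

Running a gate like this before the load means that any record rejected here goes back into the remediation cycle rather than failing inside JDE.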

The Benefit:  The benefit of having data that is “fit for purpose” in JDE is that NOV can mothball the legacy systems and use JDE as a complete and correct record for all kinds of purposes, from operational management to strategic sourcing.  The benefit of having a defined governance process is that it is repeatable.  Every time the process is run, the individuals and the governance team as a whole learn something from it and get better at executing it the next time around.  Because of this, NOV has already seen order-of-magnitude improvements in both productivity and data quality, and is looking for ways to expand the program into other areas.

All in all, Melissa and Deepak gave the audience great insight into how they are solving a complex integration problem, and reminded us of what we should already know: "integrating" data is not simply moving it.  To be of business value, the data must be 'fit for purpose', which often means that both the integration process and the data must be governed.
