Thursday Oct 16, 2014

Why Not Data Quality?

By: John Siegman – Applications Sales Manager for Master Data Management and Data Quality, & Murad Fatehali – Senior Director with Oracle’s Insight team leading the Integration practice in North America.

Big data, business intelligence, analytics, data governance, data relationship management, the list of data oriented topics goes on. Lots of people are talking about all of these and yet very few people talk about poor data quality, let alone do anything about it. Why is that?

We think it’s because Data Quality suffers from the “Do I have to?” (DIHT) syndrome. Anyone with kids or anyone who was a kid will recognize this syndrome as in, “Clean up your room.” “Do I have to?” Dealing with poor quality data is not glamorous and it doesn’t get the headlines. Installing business intelligence systems or setting up data governance, gets the headlines and makes careers. But what good is better reporting and structure if the underlying data is junk?

Recently we were in a half-day planning session for an existing customer. The customer wanted to know what they could do better using the existing software they had already purchased as Phase 1 of the project, and what they would need to acquire to do things better for Phase 2. Reviews like this are critically important, as people change on both sides of the customer/vendor relationship, to ensure knowledge transfer and reaffirmation of goals. The customer provided access to numerous departments across their company for interviews and focus groups. All of this information was gathered, reviewed, and summarized, and suggestions were made. Excel spreadsheets and PowerPoints ensued. Even though the Aberdeen Group and others have shown significant performance increases in established ERP and other business systems through the use of Data Quality and Master Data Management, because the customer did not directly say they had a data issue (and very few customers ever admit this, because poor data is just standard operating procedure), no emphasis was put on data quality as a way of improving the customer's processes and results with their existing software packages.

What is it about data quality that makes it the option of last resort, the go-to when all else fails? It's got to be the belief that the data in underlying systems and sources is good by default. I mean, really, who would keep bad data around? Well, pretty much everyone, because if you don't know that it's bad, you end up keeping it around.

Let's admit it, DQ is not glamorous. There are no DQ-er of the year awards. People in DQ don't typically have their names on a parking spot right up front in the corporate lot. And besides not being glamorous, it's hard. Very rarely do we see someone 'own' data quality. After all, since bad data affects multiple people across multiple functions, no one really has the right incentives to drive data quality improvements where the resulting benefits accrue to multiple constituencies. Nobody really wants to spend their functional budget fixing enterprise-wide data problems. Some of the very early DQ-adopting companies have teams of people, representing a cross-section of processes and functions, who spend their days manually inspecting data and creating internal systems to meet their specific data quality needs. They are very effective at what they do, but not as efficient as they could be, because the whole is greater than the sum of its parts. Also, most of the data knowledge is in their heads, and that's really hard to replicate and subject to loss through job switching, retirement, or the possible run-in with the proverbial bus.

So, given the underwhelming desire to fix poor data, how do you get the powers that be in your company to see the light? In our last article, "Data Quality, Is it worth it? How do you know?", we examined the value of data quality based on units of measure that were meaningful to the given organization. To paraphrase Field of Dreams, if you build the ROI, they will come. The first step to building the ROI is understanding how poor your data is and what impact that has on your organization. Typically that starts with a Data Quality Health Check.

A DQ Health Check takes a sample of your data and looks at varying aspects to determine the quality level of your data. The aspects examined include: Consistency, Completeness, Accuracy, and Validity. These measures attempt to answer the question: Is your data fit for purpose? Consistency looks at the validation of the data within a variable. For example, if the variable in question only allows for Ys and Ns, Ms and Ts will lower the consistency rating. Completeness is just that: how complete is the data in your database? Using our previous example, if Ys and Ns are only present 20% of the time, your data for that variable is fairly incomplete. Accuracy looks at a number of things but mostly represents data within the bounds of expectations. And Validity looks at usefulness. For example, telephone numbers are typically 10 digits. Phone numbers without area codes, or with letters, while complete and possibly consistent, are not valid.
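
Two of these measures can be sketched in a few lines of code. This is a minimal illustration with made-up sample data and an illustrative Y/N rule; a real health check would profile every variable and add accuracy and validity rules as well:

```python
# Minimal DQ health-check sketch for a single Y/N variable.
# Sample records are hypothetical; None and "" count as missing.
records = ["Y", "N", "", "M", "y", None, "N", "T", "Y", ""]

ALLOWED = {"Y", "N"}  # consistency rule for this variable

present = [r for r in records if r not in (None, "")]

completeness = len(present) / len(records)                       # how much data is filled in
consistency = sum(r in ALLOWED for r in present) / len(present)  # how much obeys the rule

print(f"completeness: {completeness:.0%}")  # 70% (7 of 10 filled in)
print(f"consistency:  {consistency:.0%}")   # 57% (M, y, T fail the rule)
```

Even this toy version shows why sampling works: you don't need to inspect every row to learn that a variable is 30% empty and that almost half of what remains breaks its own rule.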

In another recent customer engagement, we looked at customer records for data anomalies specifically for consistency, completeness, accuracy, and validity. We found that fixing these records resulted in improvements not only in marketing (campaign effectiveness), but also improved service (customer experience), higher collections in finance (lower receivables), and improved reporting. In today’s data rich, integrated, system-driven processes, improving data quality in one part of the organization (whether it be customer data, supplier data, financial data) benefits multiple organizational functions and processes.

So while data quality will never be glamorous for individuals, with a little insight providing a strong ROI for DQ we can move it from "Do I have to?" to "Let's do this."

Friday Sep 12, 2014

Data Quality, Is it worth it? How do you know?

By: John Siegman – Applications Sales Manager for Master Data Management and Data Quality, & 

Murad Fatehali – Senior Director with Oracle’s Insight team leading the Integration practice in North America.

You might think that the obvious thing to do about data quality problems is to fix them, but not so fast. As heretical as this might be to write, not all data quality problems are worth fixing. While the data purists will tell you that every data point is worth making correct, we believe you should only spend money fixing data that has a direct value impact on your business. In other words, what's the cost of bad data?

What's the cost of bad data? That's a question that is not asked often enough. When you don't understand the value of your data and the costs associated with poor data quality, you tend to ignore the problem, which tends to make matters worse, specifically for initiatives like data consolidation, big data, customer experience, and data mastering. The ensuing negative impact has wider ramifications across the organization, primarily for the processes that rely on good quality data. All the business operations systems that businesses run on, like ERP, CRM, HCM, SCM, and EPM, assume that the underlying data is good.

Then what’s the best approach for data quality success? Paraphrasing Orwell’s Animal Farm, “all data is equal, some is just more equal than others”. What data is important and what data is not so important is a critical input to data quality project success. Using the Pareto rule, 20% of your data is most likely worth 80% of your effort. For example, it can be easily argued that financial data have a greater value as they are the numbers that run your business, get reported to investors and government agencies, and can send people to jail if they’re wrong. The CFO, who doesn’t like jail, probably considers this valuable data. Likewise, a CMO understands the importance of capturing and complying with customer contact and information sharing preferences. Negligent marketing practices, due to poor customer data, can result in non-trivial fines and penalties, not to mention bad publicity. Similarly, a COO may deem up-to-date knowledge of expensive assets as invaluable information, along with description, location, and maintenance schedule details. Any lapses here could mean significant revenue loss due to unplanned downtime. Clearly, data value is in the eye of the beholder. But prioritizing which data challenges should be tackled first needs to be a ‘value-based’ discussion.

How do you decide what to focus on? We suggest you focus on understanding the costs of poor data quality and management and then establishing a metric that is meaningful to your business. For example, colleges might look at the cost of poor data per student, utilities the cost of poor data per meter, manufacturers the cost of poor data per product, retailers the cost of poor data per customer, or oil producers the cost of poor data per well. Doing so makes it easy to communicate the value throughout your organization and allows anyone who understands the business to size the cost of bad data. For example, our studies show that on-campus data quality problems can cost anywhere from $70 to $480 per student per year. Let's say your school has 7,500 students and we take a conservative $100 per student. That's a $750,000-per-year data quality problem. As another example, our engagement with a utility customer estimated that data quality problems can cost between $5 and $10 per meter. Taking the low value of $5 against 400,000 meters quantifies the data quality problem at $2,000,000 annually. Sizing the problem lets you know just how much attention you should be paying to it. But this is the end result of your cost of poor data quality analysis. Now that we know the destination, how do we get there?
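
The arithmetic behind a "cost per" metric is deliberately simple, which is what makes it communicable. A sketch, using the illustrative figures from the examples above:

```python
def annual_dq_cost(units: int, cost_per_unit: float) -> float:
    """Annual cost of poor data = number of units x cost of poor data per unit."""
    return units * cost_per_unit

# Campus example: 7,500 students at a conservative $100 per student.
print(annual_dq_cost(7_500, 100))   # 750000 -> a $750,000/year problem
# Utility example: 400,000 meters at the low end of $5 per meter.
print(annual_dq_cost(400_000, 5))   # 2000000 -> a $2,000,000/year problem
```

The hard part is not the multiplication; it is defending the "cost per unit" figure, which is what the impact assessment described below produces.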

To achieve these types of metrics you have to assess the impact of bad data on your enterprise by engaging all of the parties that are involved in attempting to get the data right, and all of the parties that are negatively affected when it is wrong. You will need to go beyond the creators, curators and users of the data and also involve IT stakeholders and business owners to estimate: impact on revenues; cost of redundant efforts in either getting the data or cleaning it up; the number of systems that will be impacted by high quality data; cost of non-compliance; and cost of rework. Only through this type of analysis can you gain the insight necessary to cost-justify a data quality and master data management effort.

The scope of this analysis is determined by the focus of your data quality efforts. If you are taking an enterprise-wide approach then you will need to deal with many departments and constituencies. If you are taking a Business Unit, functional or project focus for your data quality efforts, your examination will only need to be done on a departmental basis. For example, if customer data is the domain of analysis, you will need to involve subject matter experts across marketing, sales, and service. Alternatively, if supplier data is your focus, you will need to involve experts from procurement, supply-chain, and reporting functions.

Regardless of data domain, your overall approach may look something like this:

  1. Understanding business goals and priorities
  2. Documenting key data issues and challenges
  3. Assessing current capabilities and identifying gaps in your data
  4. Determining data capabilities and identifying needs
  5. Estimating and applying benefit improvement ranges
  6. Quantifying potential benefits and establishing your “cost per” metric
  7. Developing your data strategy and roadmap
  8. Developing your deployment timeline and recommendations

Going through this process ensures executive buy-in for your data quality efforts, gets the right people participating in the decisions that will need to be made, and provides a plan with an ROI, which will be necessary to gain the approvals to go ahead with the project.

Be sure to focus on: Master Data Management @ OpenWorld

Thursday Aug 28, 2014

How Do You Know if You Have a Data Quality Issue?

By: John Siegman – Applications Sales Manager for Master Data Management and Data Quality, & Murad Fatehali – Senior Director with Oracle’s Insight team who leads the Integration practice in North America.

Big Data, Master Data Management, Analytics are all topics and buzz words getting big play in the press.  And they’re all important as today’s storage and computing capabilities allow for automated decision making that provides customers with experiences more tailored to them as well as provides better information upon which business decisions can be made.  The whole idea being, the more you know, the more you know.

Lots of companies think they know that they should be doing Big Data, Master Data Management and Analytics, but don’t really know where to start or what to start with.  My two favorite questions to ask any prospective customer while discussing these topics are: 1) Do you have data you care about? And, 2) Does it have issues?  If the answers come back “Yes” and “Yes” then you can have the discussion on what it takes to get the data ready for Big Data, Master Data Management and Analytics.  If you try any of these with lousy data, you’re simply going to get lousy results.

But, how do I know if I’ve got less than stellar data?  All you have to do is listen to the different departments in your company and they will tell you.  Here is a guide to the types of things you might hear.

You know you have poor data quality if MARKETING says:

1. We have issues with privacy management and customer preferences
2. We can’t do the types of data matching and enhancement we want to do
3. There’s no way to do data matching with internal or external files
4. We have missing data but we don’t know how much or which variables
5. There’s no standardization or data governance
6. We don’t know who our customer is
7. We’ve got compliance issues

You know you have poor data quality if SALES says:

1. The data in the CRM is outdated, wrong, and needs to be re-entered
2. I have to go through too many applications to find the right customer answers
3. Average call times are too long due to poor data and manual data entry
4. I’m spending too much time fixing data instead of selling

You know you have poor data quality if BUSINESS INTELLIGENCE says:

1. No one trusts the data so we have lots of Excel spreadsheets and none of the numbers match
2. It’s difficult to find data and there are too many sources
3. We have no data variables with consistent definitions
4. There’s nothing to clean the data with
5. Even data we can agree on, like telephone number, has multiple formats
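
The telephone-number complaint above is a good concrete example of a fixable format problem. A minimal normalization sketch; the 10-digit rule and the helper name are illustrative, not a production cleansing rule:

```python
import re
from typing import Optional

def normalize_phone(raw: str) -> Optional[str]:
    """Reduce a phone number to a bare 10-digit string, or None if invalid."""
    digits = re.sub(r"\D", "", raw)          # drop punctuation, spaces, letters
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                  # strip a leading US country code
    return digits if len(digits) == 10 else None

print(normalize_phone("(415) 555-0142"))     # 4155550142
print(normalize_phone("1-415-555-0142"))     # 4155550142
print(normalize_phone("555-0142"))           # None: no area code, so not valid
```

Once every system agrees on one stored format, the "multiple formats" complaint disappears and display formatting becomes a presentation-layer choice.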

You know you have poor data quality if OPERATIONS or FINANCE says:

1. The Billing report does not match the BI report
2. Payment information and address information do not match the information in the Account Profile
3. Accounts closed in Financial Systems show up as still open in the CRM system, or vice versa, so customers get billed for terminated services
4. Billing inaccuracies are caught during checks because there are no up-front governance rules
5. Agents enter multiple orders for the same service or product on an account
6. Service technicians show up on site with the wrong parts and equipment, which then requires costly repeat visits and negatively impacts customer satisfaction
7. Inventory systems show items Sales deemed OK to sell while suppliers may have marked them obsolete or recalled
8. We have multiple GLs and not one single version of financial truth

You know you have poor data quality if IT says:

1. It’s difficult to keep data in synch across many sources and systems
2. Data survivorship rules don't exist
3. Customer Data types (B2B, end user in B2B, customer in B2C, account owner B2C) and status (active, trial, cancelled, etc.) changes for the same customer over time and it’s difficult to keep track without exerting herculean manual effort
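
Survivorship rules (item 2 above) are exactly the kind of unglamorous but codifiable logic DQ work consists of: when the same customer appears in several systems, which record wins? A minimal sketch, with hypothetical field names and a simple completeness-then-recency rule:

```python
# Toy survivorship rule: keep the most complete record; break ties by recency.
from datetime import date

duplicates = [
    {"name": "A. Smith",  "email": None,          "phone": "4155550142", "updated": date(2014, 3, 1)},
    {"name": "Ann Smith", "email": "as@test.com", "phone": None,         "updated": date(2014, 6, 1)},
]

def filled_fields(rec: dict) -> int:
    """Count populated data fields, ignoring the bookkeeping timestamp."""
    return sum(1 for k, v in rec.items() if k != "updated" and v)

survivor = max(duplicates, key=lambda r: (filled_fields(r), r["updated"]))
print(survivor["name"])   # Ann Smith: both records have 2 fields, hers is newer
```

Real survivorship policies are richer (per-field trust by source system, for instance), but until some rule like this is written down, "the CRM or the billing system, whichever you looked at last" is the de facto rule.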

You know you have poor data quality if HUMAN RESOURCES says:

1. We first have to wait for the data; then, when it is gathered and delivered, we have to work to fix it
2. Ten percent of our time is wasted waiting on things or on re-work cycles
3. Employee frustration with searching, finding, and validating data results in churn, and will definitely delay re-hire of employees
4. Incorrect competency data results in: a) productivity loss in terms of looking at the wrong skilled person; b) possible revenue loss due to lack of skills needed; and c) additional hires when none are needed

You know you have poor data quality if PROCUREMENT says:

1. Not knowing our suppliers impacts efficiencies and costs
2. FTEs in centralized sourcing spend up to 20% of their time fixing bad data and related process issues
3. Currently data in our vendor master, material master and pricing information records is manually synched since the data is not accurate across systems.  We end up sending the orders to the wrong suppliers
4. Supplier management takes too much time
5. New product creation form contains wrong inputs rendering many fields unusable
6. Multiple entities: 1) Logistics, 2) Plants, 3) Engineering, 4) Product Management, enter or create Material Master information.  We cannot get spend analytics
7. We have no good way of managing all of the products we buy and use

You know you have poor data quality if PRODUCT MANAGEMENT says:

1. Product development and life-cycle management efforts take longer and cost more
2. We have limited standards and rules for product dimensions.  We need to manually search missing information available elsewhere
3. Our product data clean-up occurs in pockets across different groups, the end result of these redundant efforts is duplication of standards
4. We make status changes to the product lifecycle that don't get communicated to Marketing and Engineering in a timely manner.  Our customers don’t know what the product now does

All of these areas suffer either individually or together due to poor data quality.  All of these issues impact corporate performance which impacts stakeholders which impacts corporate management.  If you’re hearing any of these statements from any of these departments you have a data quality issue that needs to be addressed.  And that is especially true if you’re considering any type of Big Data, Master Data Management or Analytics initiative.

Thursday Aug 07, 2014

Author: John Siegman

How do you know if you have a Master Data Management (MDM) or Data Quality (DQ) issue on your campus? One of the ways is to listen to the concerns of your campus constituents. While none of them are going to come out and tell you that they have a master data issue directly, by knowing what to listen for you can determine where the issues are and the best way to address them.

What follows are some of the key on-campus domains and what to listen for to determine if there is a MDM or DQ issue that needs to be resolved.

Student: Disconnected processes lacking coordination

· Fragmented data across disparate systems, disconnected across groups for:

- data collection efforts (duplicate/inconsistent student/faculty surveys)

- data definitions, rules, governance

- data access, security, and analysis

· Lack of training around security/access further complicated due to number of sources

· No information owner/no information strategy

· Student attributes maintained across many systems

Learning: Does not capture interactions

· Cannot identify students at risk. Do not capture interactions with students and faculty, and faculty interactions for research support, etc.

· No way to track how many undergraduates are interested in research

· Don't do any consistent analytics for course evaluations

· Difficult and time consuming to gather information because of the federated nature of the data – for example, job descriptions in HR are different than what is really being used

· There is no view of Student experience

HR: Process inconsistencies, lack of data standards complicates execution

· Faculty not paid by the university are not in the HCM system, while students receiving payments from the university are in the HCM system

· Disconnected process to issue IDs, keys, duplicate issues

· Given multiplicity of data sources, accessing the data is a challenge

· Data analytics capabilities and available reports are not properly advertised, so people do not know what is available. As a consequence an inordinate amount of time is spent generating reports

· Faculty/Staff information collection is inconsistent, sometimes paper-based. Implication: lose applicants because it is too difficult to complete the application process

Research: Getting from data to insight is a challenge

· Very time consuming to determine: Which proposals were successful? What type of awards are we best at winning?

· Difficult to understand: number of proposals, dollar value, by school, by department, by agency, by time period

· Data challenges in extracting data out of the system for grants, faculty, and making it centrally available

Deans & Officers: Reporting is a challenge

· Significant use of Excel, reporting is becoming unstable because of the amount of data in the files

· Information charter, a common retention policy does not exist

· A lot of paper is generated for the domains we are covering. Converting paper to digital is a challenge

· Collecting information on faculty activity (publications) is a challenge. Data in documents requires validation

· Data requests result in garbage. Donors receive the wrong information.

Finance: Has little trust in data

· Do not have workflow governance processes. Implication: information goes into the system without being reviewed, so errors can make it into the records

· Systems connected to ERP systems do not always give relevant or requested info

· Closing the month or quarter takes too long as each school and each department has its own set of GLs.

Facilities: Efficiencies are hampered due to data disconnects

· Do not have accurate space metrics due to outdated system, schools not willing to share their info with Research Administrators and Proposal Investigators

· Do not have utility consumption, building by building

· No clear classroom assignment policy (a large room may be assigned to a small number of students)

· Not all classes are under the registrar's control

· No tool showing actual space for planning purposes

· Difficult to determine research costs, without accurate access to floor plans and utilization

· Cannot effectively schedule and monitor classrooms

If your campus has data, you have data issues. As the push for students becomes more competitive, being able to understand your current data, mine your social data, target your alumni, make better use of your facilities, improve your supplier relationships, and increase your student success will depend on better data. The tools exist to take data from a source of problems to a distinct competitive advantage. The sooner campuses adopt these tools, the sooner they will receive the benefits of doing so.

Friday May 23, 2014

Supercharge Your Web Storefront with Superior Data Quality

By Neela Chaudhari (Compiled from Profit Magazine - May 2014)

A targeted, comprehensive data management strategy can differentiate your business from the competition. Providing a great online customer experience starts with giving consumers the information and research tools they need to make informed buying decisions. All too often, the product research capabilities that already exist on a website are diminished by poor data quality behind the scenes.

Companies like Ace Hardware are adopting these data management strategies, and many retailers are seeing significant improvements in the quality of their guided navigation, product spec presentation, and product comparison strategies.

So while modern web storefront capabilities are critical to online sales success, be sure to remember that it is the data that makes it work with your customers and prospects!

There are three key areas where product data quality impacts the customer experience. What are they? Visit our latest Profit article to find out.

Friday May 16, 2014

Master Data Management and Service-Oriented Architecture: Better Together

By Neela Chaudhari

Many companies are struggling to keep up with constant shifts in technology and at the same time address rapid changes in the business. As organizations strive to create greater efficiency and agility with the aid of new technologies, each new business-led project may further fragment IT systems and result in information inconsistencies across the organization. Because data is an essential input for all processes and business objects, these irregularities can undermine the original business objectives of the technology initiatives.

Combining the use of master data management (MDM) on the business side and service-oriented architecture (SOA) on the IT side can counteract the problem of information inconsistency. SOA is a practice that uses technology to decouple services, transactions, events, and processes to enhance data availability for business applications across a range of use cases. But the underlying data is often overlooked or treated as an afterthought when it comes to business processes, leading to poor data quality characteristics for your business applications. Without MDM, the data made available to business applications by an SOA approach might be less than accurate and more widespread throughout an organization. That can lead to a situation where lower quality data is consumed by more business users—ultimately thwarting the objectives of efficiency and agility.

MDM can add value to SOA efforts because it improves the quality and trustworthiness of the data that is being integrated and consumed. MDM aids the tricky issue of upstream and downstream systems integration by ensuring the systems access a data hub containing accurate, consistent master data. It also assists SOA by providing consistent visibility and a technical foundation for master data use. MDM delivers the necessary data services to ensure the quality and timeliness of the enterprise objects the SOA will consume.

To learn more about the importance of MDM to SOA investments, read the in-depth technical article MDM and SOA: Be Warned!

And don't miss the new Oracle MDM resource center. Visit today to download white papers, read customer stories, view videos, and learn more about the full range of features for ensuring data quality and mastering data in the key domains of customer, product, supplier, site, and financial data.

Friday May 09, 2014

Improve your Customer Experience with High Quality Information

By: Ulrich Scheuber

Why do I need to care about MDM when talking about Customer Experience?

Leaders of companies and organizations across industries are talking more and more about how to improve Customer Experience. Many companies are working on strategies to move from product-centric toward customer-centric approaches. The common idea is to meet customers and prospects wherever they currently are in their buying and service cycle.

Given all of the new channels and touchpoints in the area of social media, customer behavior has changed quite a bit. Instead of reaching out to salespeople and company websites to gather information, customers turn to independent communities that manage and exchange information about companies, products, and solutions on their own.

The way organizations tend to address these new challenges is by extending their processes and IT-solution landscape beyond the borders of the core enterprise. Most companies are focusing on dealing with their customers in the customer’s home zone.

What’s often forgotten in this approach is the “know your customer” foundation that enables real success. Only when an organization understands its customers’ and prospects’ current situation, history, real needs, experience, and challenges can it deliver ideal treatment and a satisfying Customer Experience.

But how do you get there?

The following whitepaper gives you some additional information on improving customer experience with high quality Information:

Improve your Customer Experience with High Quality Information

Friday May 02, 2014

Register Now! Product Data Management Weekly Cloudcast

Don't miss out: Product Data Management Weekly Cloudcast

Every Thursday at 10:00 a.m. PST (1:00 p.m. EST).

The North America Master Data Management (MDM) and Enterprise Data Quality (EDQ) team will present a series of weekly webcasts that give an inside look at how Oracle Product Data Management Cloud modernizes complex data management processes allowing customers to focus on strategic opportunities and delivering value to the business. These webcasts will run throughout FY14, with regular updates being distributed.

These sessions are designed for customers and prospects who are interested in learning more about Product Data Management Cloud.  Customer executives and managers with responsibility for Data Management, Data Quality, Commerce, Manufacturing, IT or other data management responsibilities are encouraged to attend.

Remember, data that is not managed properly degrades at 27% per year!

Please click  HERE to view a complete schedule and register for the demo.

Friday Apr 25, 2014

Big Data Challenges & Considerations

By: Murad Fatehali

While 'Big Data' is dominating a lot of media and executive attention (it's a Top-5 Initiative according to IDC Retail Insights 2014 Predictions), the underlying considerations & challenges of Big Data are unfortunately getting trivialized.  As corporations and people continue to make the Internet a more fundamental way of life (from social and transactional perspectives), the scope of the underlying data that gets created, stored, retrieved, and analyzed will commensurately and exponentially grow as well. Although many factors and assumptions go into Big Data discussions, outlined below are five pivotal dimensions:

1. Size: This is the most obvious and most talked-about factor.  Everybody seems to understand that, given the rise of the Internet coupled with the plethora of apps in the hands of consumers and users, growth in the scale and volume of data is truly astonishing (some say quantity is doubling every 2 years). However, what gets neglected are the challenges related to storage and analysis of vast volumes of data captured in a variety of formats, for example: text-based (tweets, SMSs), graphically illustrated (pictures and drawings), and audio-visually represented (podcasts, movies, etc.).  For anyone interested in making sense of Big Data, how efficiently the data gets stored and retrieved will be a key factor - no wonder we are seeing an increase in the number and capacity of purpose-built storage systems and lightning-fast data-crunching machines.

2. Sources: Usually Big Data discussions refer to ‘external’ sources like the Internet (Facebook, Twitter) and media (digital or otherwise), but often overlooked are the ‘internal’ data sources that can be mined for insight - these would be the transactional and operational systems supporting multiple marketing, sales, fulfillment, and service systems, in addition to the company’s own BI/reporting systems.  Ask any CMO and they will explain to you the widening gaps in data due to lack of integration and data governance.  Many companies struggle when answering how many customers they have, not to mention the difficulties in identifying those most profitable or the most loyal. Since the data opportunities inside the company have not been fully explored, diving into the external sources of Big Data without adequate forethought can potentially add not only costs, but also complexity and confusion. In our engagements with customers, while ERP systems & POS data remain good inputs into a company’s Business Intelligence and reporting efforts, more and more we see executives asking for data from other sources to be able to develop ‘personalized’ offers and solutions, for example:

a. Machine usage data about performance and diagnostics captured by sensors (vehicle-to-smartphone connectivity devices for smarter driving, etc.)

b. Medical data from patients/public records and research studies (Fitbit and other wearable technologies, research findings, etc.)

c. Geographic and telemetric data from devices, maps, GPS signals, user tags and flags (Google maps, in-vehicle trackers, etc.)

d. Smartphone check-ins (Foursquare, etc.)

e. Social networks that capture trends (Twitter, etc.)

f. Content sites (Wikipedia, etc.)
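The "how many customers do we have?" problem mentioned above usually comes down to the same customer being recorded slightly differently in each system. A minimal sketch of match-key deduplication (hypothetical field names, standard-library Python only, not any particular Oracle product):

```python
import re
from collections import defaultdict

def normalize(record):
    """Build a crude match key from name and email (hypothetical fields)."""
    name = re.sub(r"[^a-z ]", "", record["name"].lower()).strip()
    email = record["email"].lower().strip()
    return (name, email)

def count_unique_customers(records):
    """Group records by match key; each group approximates one real-world customer."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalize(rec)].append(rec)
    return len(groups), groups

# Three raw records pulled from different systems; two describe the same person.
crm = [
    {"name": "Jane Doe", "email": "jane.doe@example.com"},
    {"name": "JANE DOE.", "email": "Jane.Doe@example.com "},  # same person, different system
    {"name": "John Smith", "email": "jsmith@example.com"},
]
unique, groups = count_unique_customers(crm)
print(unique)  # 3 raw records, but only 2 distinct customers
```

Real matching engines use fuzzy comparisons and survivorship rules rather than exact keys, but even this toy version shows why raw row counts overstate the customer base.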

3. Speed: The flow of data can no longer go unnoticed - it is instant and constant. According to Google CEO Eric Schmidt, “There were 5 exabytes of information created between the dawn of civilization through 2003, but that much information is now created every 2 days, and the pace is increasing.” It is clear that the rate of incoming data can quickly alter any trends captured from historical data and render meaningless any campaign that doesn’t take real-time response tracking into account. For example, without compute-intensive servers tied into the large-scale infrastructure hosting Facebook posts, Twitter feeds, and Google searches, it would be impossible to capitalize on fads, keep pace with up-to-the-minute news, understand critical events, and predict emerging trends (case in point: who knew that tracking the number of flu queries on Google could help predict localized outbreaks, or that analyzing global weather patterns could help predict harvest outputs or crop failures?). The implications are huge for those who can robustly and quickly find data relationships: commonality, lineage, and correlation are going to be big differentiators.
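The point about real-time data overturning historical trends can be illustrated with a toy sliding-window spike detector over a query stream (hypothetical data and thresholds; a sketch, not a production streaming design):

```python
from collections import deque

class SpikeDetector:
    """Flag a term whose recent rate far exceeds its long-run historical rate."""
    def __init__(self, window=100, factor=3.0):
        self.window = deque(maxlen=window)  # most recent observations only
        self.history = {}                   # long-run counts per term
        self.total = 0
        self.factor = factor

    def observe(self, term):
        self.window.append(term)
        self.history[term] = self.history.get(term, 0) + 1
        self.total += 1

    def is_spiking(self, term):
        recent_rate = self.window.count(term) / max(len(self.window), 1)
        overall_rate = self.history.get(term, 0) / max(self.total, 1)
        return recent_rate > self.factor * overall_rate

# Simulated stream: a long baseline of "weather" queries, then a burst of flu queries.
det = SpikeDetector(window=50)
for q in ["weather"] * 500 + ["flu symptoms"] * 40:
    det.observe(q)
print(det.is_spiking("flu symptoms"))  # True: the recent window is dominated by flu queries
print(det.is_spiking("weather"))      # False: long-running baseline, no spike
```

Historical counts alone would rank "weather" as the dominant term; only the recent window reveals the emerging trend - the same reason batch-only reporting misses fads and outbreaks.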

4. Standards: Unlike organizational transactions designed to meet compliance standards for financial reporting, Big Data does not need to conform to any such rules or conventions. The vast majority (some say as much as 90%) of data coming from uploads, social network chatter, comments, likes, and tweets is neither structured nor precise, to say nothing of its reliability (accuracy). The sheer variety of posts, representing wide-ranging interests and agendas across multiple sites and sources, repeated and re-circulated, requires significant analytical prowess, speed, and talent to make intelligible.

5. Strategy: Big Data has the potential to help a company truly cross channels (understand personal profiles and preferences) and deliver world-class experiences to its customers; it can also inform a company’s strategy by making sense of widespread data coming from public and private networks, internal and external systems, and individual and institutional sources. How Big Data plays into a company’s success will depend on the priorities of its leadership - on their willingness to make data-driven decisions a reality and turn Big Data into more than just a buzzword. LinkedIn, for example, analyzes vast quantities of data to generate billions of personalized recommendations every week - no small feat when viewed through the siloed and antiquated lens of yesterday’s IT landscape. According to Baseline, a 10% increase in data accessibility translates into an additional $65.7M in net income for a typical Fortune 1000 company.

Big Data challenges and considerations outlined above are further impacted by availability of skilled talent - people who understand data in all its forms and can help govern and master it.  Additionally, ethical and legal challenges further compound the problem since perspectives differ on what is personal, private, and sensitive information.  To be sure, none of these Big Data challenges are expected to get resolved anytime soon – all the more reason, therefore, for executives to be cognizant of the five Big Data considerations (size, sources, speed, standards, and strategy) as they chart new ground in their Big Data journey to unlock its value.

For additional details on the Insight program, please visit:

Murad Fatehali is a Senior Director with the Insight & Customer Strategy Team at Oracle Corporation; he focuses on helping Oracle customers solve Data Strategy & Integration challenges.

Friday Apr 18, 2014

Thank you for Joining us at Collaborate 2014

Thank you for attending Collaborate 2014! More than 5,500 customers attended last week.

Our EDQ and MDM customers were very interested in what Oracle offers to help reconcile the two. The presentation included our definitions of Data Governance and Master Data Management - and their similarities and differences. Also discussed were general definitions related to Data Management itself, as well as the multiple solution strategies and applications that Oracle offers in the MDM space.

There were sessions by Oracle PMs: Bill Miller on the overall MDM footprint and strategy, Sachin Patel on the various roads to PIM, and Martin Boyd on the journey to enterprise governance and MDM starting with tactical EDQ projects. There were also sessions from partners including Kaygen and Hitachi, as well as three different MDM SIGs (Special Interest Groups) on Customer Hub, Product Hub, and Data Quality, managed as always by Mani Kumar Manda of Rhapsody Technologies. Many of the questions related to understanding how to introduce Data Governance processes - as well as our MDM solutions to support them - into their respective organizations.

This probably represents an opportunity for many of our partner organizations to assist our customers with Business Process management related to Data Governance, using our MDM solutions as the foundational platform for enabling these efforts. Lots of great information sharing from customers and partners alike – and lots of only-in-Vegas fun too!

For more information please visit us at:

Friday Apr 11, 2014

Thank you for Joining us at Gartner Enterprise Information and Master Data Summit

Thank you for stopping by our Oracle booth at Gartner and attending our session, “Head in the Cloud, Feet on the Ground”! We hope your time with Gartner and your fellow attendees was productive, fun, and above all inspiring for the year to come.

Oracle, recognized as a leader in this year’s Gartner Magic Quadrant for MDM solutions, makes the full spectrum of deployment options available to our customers. With a mixture of on-premise and cloud-based applications integrated to deliver best-in-class customer experience, we have wrestled with these problems and created solutions that can be implemented today. The decisions you make on how to design and deploy MDM will have a magnified impact as the data explosion continues to unfold.

Oracle MDM solutions are built on the same application engines that drive our enterprise software, with thousands of customers in deployment managing billions of records. We have out-of-the-box integrations to a number of Customer Experience applications, industry-standard data feeds, and even third-party applications and data sources. Our EDQ and MDM applications are specifically engineered to be part of a heterogeneous application and technology ecosystem. That’s critically important as the MDM footprint gets extended and your need to operate across these different environments expands.

It’s an exciting time to be in the enterprise data management space. We would welcome the chance to discuss what we’re doing in more detail, and to share our experience and our recommendations for solving your most pressing data problems. 

For more information, please visit our MDM Solutions Factory at:

Sunday Apr 06, 2014

Join us at Collaborate 14 April 7th-11th at the Venetian in Las Vegas!

The word is out. There is ONE event that delivers the full spectrum of Oracle Applications and Technology education that you need to boost results all year long. Produced by three independent users groups, COLLABORATE14: Technology and Applications Forum for the Oracle Community delivers nearly 1,250 sessions and panels packed with first-hand experiences, case studies and practical “how-to” content.

Thousands of Oracle Applications users around the world connect through the Oracle Applications Users Group (OAUG). They know that OAUG involvement equips them to improve efficiency, enhance problem solving and spark innovation through opportunities for education, networking and influence. We will be delivering three Master Data Management and Data Governance sessions not to be missed!

Session Title: Oracle Master Data Management: Portfolio Overview, Strategy, and Roadmap

Session ID #: 15281

Date: 4/8/2014

Time: 03:00 PM - 04:00 PM PDT (local Las Vegas time)

Room: Level 1, Galileo – 1001

Companies today are being inundated with increasingly vast amounts of data. Tying it all together across various systems so that it can be better leveraged is a major challenge. As a result, Data Governance and Master Data Management (MDM) initiatives are major strategic efforts being actively pursued within many organizations today to solve this problem. In this session, you will gain insight not only into what “MDM” is, but also into the related people, process, and technology requirements needed to achieve it.

Session Title:  Oracle Fusion Product Hub - The Foundation for Your Enterprise Product Information

Session ID #:  15280

Date:  4/9/2014

Time:  02:00 PM - 03:00 PM PDT (local Las Vegas time)

Room:  Level 3, Murano - 3201

How much do you trust your product data to give you a competitive advantage and operational excellence in your enterprise? In today’s dynamic and competitive business environment, where faster product launches, cost efficiency, and regulatory compliance are a necessity, many enterprises struggle to achieve consistent, high-quality product data that provides significant business value. This session shows how to achieve a solid Product Information Management foundation with Oracle Fusion Product Hub.

Session Title: The Data Quality Journey: From Tactical Fixes to Enterprise Data Quality Governance

Session ID #: 15253

Date: 4/9/2014

Time: 02:00 PM - 03:00 PM PDT (local Las Vegas time)

Room: Level 1, Galileo 1001

Data quality projects take on many forms and grow through many stages. This session will look at some common data quality use cases, motivations, and business benefits. It will also cover best practices, from tactical implementations to more strategic enterprise-wide data quality governance, give an overview of Oracle’s strategic data quality solution, Oracle Enterprise Data Quality, and highlight why it can be a good fit for any data quality program.

Attendees can add sessions to their personal schedule by accessing the link below and can find additional session details and information here:

Thursday Mar 27, 2014

Join Oracle at Gartner Enterprise Information and Master Data Management Summit!


Join Oracle at this year’s Gartner Enterprise Information & Master Data Management Summit! It is the world's very first conference to provide the broadest and deepest research on enterprise information and master data management anywhere. A must-attend for data and information management professionals, the 2014 summit offers intensive analysis and insight that will help you achieve a deeper understanding of the tools and technologies needed to implement, manage and grow successful initiatives and programs. The summit’s comprehensive tracks address all levels of MDM and enterprise information maturity.

Oracle is a sponsor at this event, and we look forward to having you visit our Oracle booth for a look at what’s new with our products and solutions. Please plan on attending our session, “Head in the Cloud, Feet on the Ground” - a session not to be missed!

Social data, machine-generated data, geolocation data - all flavors of big data - are generating the question: what do I DO with all that data? Now that strategies for collecting and storing it are in place, how do you make sense of it? How, specifically, can we inspect those vast reservoirs of data to extract the nuggets of actionable information that create value? That’s the focus of our session at the Gartner MDM conference: how MDM helps to filter big data, identify multiple overlapping profiles, and store new attributes that enrich the master record.

Title: “Head in the Cloud, Feet on the Ground”

Speaker: John Parker, Oracle Corporation

Date: Friday, April 4, 2014

Start Time: 9:15 a.m.

End Time: 9:45 a.m.


Venetian Resort Hotel and Casino

Palazzo D

For more information on Gartner and our Oracle track details, please visit us here!

Friday Mar 21, 2014

Master Data Management: How to Avoid Big Mistakes in Big Data

The paradigm-changing potential benefits of big data can't be overstated—but big changes can deliver big risks as well. For example, exploding data volumes naturally create a corresponding increase in data correlations, but as leading experts warn, correlations should not be mistaken for causes.

To avoid drawing the wrong conclusions from big data, organizations first need a way to assemble reliable master data to analyze. Then they need a way to put those conclusions and that data to work operationally, in the systems that govern and facilitate their day-to-day operations.

Master data management (MDM) helps deliver insightful information in context to aid decision-making. It can be used to filter big data, isolating and identifying key entities and shrinking the dataset to a manageable size for parsing, tagging, and associating with operational system records. And it provides the key intersecting point that enables organizations to map big data results to operational systems that are built on relational databases and structured information.
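That filtering role can be sketched as resolving each raw mention against a small set of mastered entities and keeping only the records that map to a known key (hypothetical structures and field names; standard-library Python only, not any specific Oracle MDM API):

```python
# Master data: a small, curated set of entities with stable keys.
MASTER = {
    "acme corp": "CUST-001",
    "globex": "CUST-002",
}

def resolve(mention):
    """Map a raw mention to a master key, or None if it is noise."""
    return MASTER.get(mention.lower().strip())

def filter_big_data(events):
    """Keep only events that match a mastered entity, tagged with its key."""
    kept = []
    for ev in events:
        key = resolve(ev["entity"])
        if key is not None:
            kept.append({**ev, "master_key": key})
    return kept

# A noisy incoming stream: two mentions resolve to mastered customers, one is noise.
stream = [
    {"entity": "Acme Corp", "text": "great service"},
    {"entity": "random blog", "text": "unrelated chatter"},
    {"entity": "GLOBEX ", "text": "shipping delay"},
]
print(filter_big_data(stream))  # 2 of 3 events survive, each tagged with a master key
```

The surviving records now carry the same keys used by the relational, structured operational systems, which is exactly the intersecting point the paragraph above describes.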

Adopting master data management capabilities helps organizations create consolidated, consistent, and authoritative master data across the enterprise, enabling the distribution of master information to all operational and analytical applications, including those that contain customer, product, supplier, site, and financial information.

Oracle Master Data Management drives results by delivering the ability to cleanse, govern, and manage the quality and lifecycle of master data.

To learn more about the importance of MDM as an underlying technology that facilitates big data initiatives, read an in-depth Oracle C-Central article, "Masters of the Data: CIOs Tune into the Importance of Data Quality, Data Governance, and Master Data Management."

And don't miss the new Oracle MDM resource center. Visit today to download white papers, read customer stories, view videos, and learn more about the full range of features for ensuring data quality and mastering data in the key domains of customer, product, supplier, site and financial data.

Thursday Mar 13, 2014

Join Us for a Webcast: Enterprise Best Practices for Complex Multi-Chart/Multi-Ledger Organizations

Wednesday, March 19, 2014
2:00 PM – 3:00 PM EST
View your local time
Live Webcast
Register to watch at your desk!
Join Us for a webcast

Enterprise Best Practices for Complex Multi-Chart/Multi-Ledger Organizations

Like many industry leaders, most organizations do a good job closing their books and managing change in their structures. Unfortunately, many of the processes put in place are manual and require many hours of planning, spreadsheet updates, and rework to address inconsistencies and additional change while the manual processes are being executed.

Diverse organizations, planning for record growth in 2014 while supporting many different general ledgers and/or multiple charts of accounts, face a complex challenge managing corporate reporting hierarchies and financial consolidation mappings. In this session you will learn how to:
  • Automate key manual processes to drive growth
  • Handle hierarchical change and re-organizations
  • Understand Data Governance concerns and strategies

Join Oracle, AdvancedEPM and General Dynamics for this informative webcast as we discuss key issues around how to automate change in your business. General Dynamics achievements included:
  • Reduced Planning Cycles by 30%
  • Reduced Re-Organizations, and What-If Modeling by 25%
  • Delegated maintenance responsibility to business experts most familiar with change events
  • Consolidated & rationalized corporate and divisional hierarchies across many different sources
  • Pushed hierarchies and other reference data (in multiple formats) to many downstream apps
  • Standardized hierarchies, business rules, and data validations
  • Provided better analysis tools to support Re-Orgs, M&A activity, etc.

About the Speaker

Ed Cody is a seasoned IT business manager with over 10 years of experience working with Hyperion EPM and Oracle BI applications. Ed is the author of two books on Hyperion products and has consulted for both commercial and government organizations. He was instrumental in the development of General Dynamics’ “BI/EPM Center of Excellence” and award-winning “BI Collaborative”, and has leveraged DRM in concert with Oracle E-Business, Oracle BI, and EPM solutions to provide easy-to-use and effective master data management solutions across General Dynamics.

Register Now!



Get the latest on all things related to Oracle Master Data Management. Join Oracle's MDM Community today.

Follow us on Twitter and catch us on YouTube.

