Friday Jan 24, 2014

Big Data for Little Decisions, Day-to-Day Operations

At the BIWA Summit 2014 (Business Intelligence, Warehousing, and Analytics), James Taylor, CEO and Principal Consultant of Decision Management Solutions, provided a provocative keynote on how to make Big Data practical, actionable, and operational. 

As companies invest in Big Data infrastructure, they are looking for ways to show a return on that investment. Using business analytics to put this data to work improving decision-making is central to success. But which decisions should be the focus, and how will you show improvement?

James discussed the importance of improving day-to-day operational decisions because they have the largest scale and impact: front-line customer interactions span multiple channels on the web, in email, on mobile, in social, in call centers, in stores, and in field sales and service. Strategic and tactical decisions are made less frequently, more interactively, and carry more risk than front-line operational decisions, where Big Data's velocity, volume, and variety are most relevant.

Key to powering more proactive decisions with Big Data is reducing decision latency, the time from an event to action, with a better blend of human and machine decision making. Low decision latency requires automating front-line systems so they become active participants, whereas simply making people more analytical has far less impact on day-to-day operations.

Begin with the outcome in mind, then leverage Big Data in Decision Management Systems to test, learn, and adapt in production for quicker returns and better responses. Think in probabilities: for new customer demand in marketing and sales, for reducing uncertainty, risk, and fraud in operations, and for improving the experience in service and support. The opportunity to democratize Big Data lies in Little Decisions.
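
To make "think in probabilities" concrete, here is a minimal sketch in Python of a test-learn-adapt loop using Thompson sampling: each choice keeps a Beta distribution over its conversion rate, the decision samples from those distributions, and every outcome updates the model in production. The offer names and the bookkeeping are illustrative, not anything James presented.

    import random

    # Each choice keeps Beta(successes, failures) over its conversion rate.
    # Priors of (1, 1) mean "no opinion yet"; all offer names are made up.
    offers = {"credit-card": [1, 1], "savings": [1, 1], "insurance": [1, 1]}

    def decide():
        # Sample a plausible conversion rate per offer and pick the best sample.
        # Uncertain offers sometimes win, so the system keeps testing them.
        return max(offers, key=lambda o: random.betavariate(*offers[o]))

    def learn(offer, converted):
        # Adapt in production: every observed outcome updates the model.
        offers[offer][0 if converted else 1] += 1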



Friday Mar 02, 2012

Announcing RTD 3.0.0.1.11

It is our pleasure to let you know that Oracle just released a new version of the RTD platform addressing some important scalability requirements for high end deployments.

The driver for this new version, released as a patch on top of the RTD 3.0.0.1 platform, was the need to address increasing volumes of “learnings” generated by ever-increasing volumes of “decisions”. This stems from the fact that several of our high-end customers now have production environments with multiple cross-channel Inline Services supporting multiple “decisions”, each making use of multiple predictive models with hundreds of data attributes to select from potentially thousands of “choices”. Meeting those high-end business requirements demanded more RTD Learning Server capacity than RTD 3.0.0.1 provided.

To address those needs, Oracle re-architected its RTD Learning Server engine to enable some level of parallelization. The new architecture relies on multi-threaded model updates and asynchronous reading and deletion of learning records. This change provides a 150% improvement in learning record processing rates, enabling RTD to process more than 58M incremental learning records per day in a deployment configuration of 3 concurrently active Inline Services, each with 900 choices, 200 data attributes, and 4 choice event predictive/learning models. This was achieved on a machine with 4 cores and 6 GB of RAM allocated to the RTD Learning Server.
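
Oracle has not published the engine's internals at this level of detail, but the general pattern described above, an asynchronous reader feeding per-model worker threads, can be sketched roughly as follows. All names here (fetch_batch, delete_batch, model.update, the "model_id" field) are hypothetical:

    import queue
    import threading

    def run_learning_server(models, fetch_batch, delete_batch):
        # One queue and one update thread per predictive/learning model,
        # so model updates proceed in parallel across models.
        queues = {model_id: queue.Queue() for model_id in models}

        def worker(model_id):
            while True:
                record = queues[model_id].get()
                if record is None:               # sentinel: shut down
                    return
                models[model_id].update(record)  # incremental model update

        threads = [threading.Thread(target=worker, args=(m,)) for m in models]
        for t in threads:
            t.start()

        # Reader loop: fetch learning records in batches, route each record
        # to its model's queue, then delete the batch from the store; reads
        # and deletes never wait for the model updates to finish.
        while True:
            batch = fetch_batch()
            if not batch:
                break
            for record in batch:
                queues[record["model_id"]].put(record)
            delete_batch(batch)

        for model_id in models:                  # drain and stop the workers
            queues[model_id].put(None)
        for t in threads:
            t.join()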

This new version of RTD is an important release for companies setting up Big Data analysis and decision platforms in support of real-time and batch targeted customer experience deployments across the enterprise.

For complete details on this patch, please refer to http://www.oracle.com/technetwork/middleware/real-time-decisions/psu11-1532856.html

Wednesday Nov 25, 2009

Measuring reality is much easier than reconstructing it

The title of this entry says it all. When it comes to collecting data for any analytic work, it is much easier to measure reality as it happens than to attempt to reconstruct it from historical databases.

For example, assume you need to analyze the factors that affect cross-selling success in the call center, and you want to include data such as the wait time in the queue or the number of calls the agent answered in the current shift before the call where cross-selling was attempted. Collecting this data from history is very complex because:

  1. Not all data is collected all the time
  2. Data from different systems may end up in very disparate historical databases
  3. Different data may have different retention periods and granularity
  4. Different systems may have uncoordinated clocks
  5. Queries become very complex when trying to pinpoint the state of a data record at a specific time
  6. Queries become complex in order to include only events that happened before the point in time in question
For all these reasons and more, it is much easier to perform analytics in real time, when reality can be measured by connecting directly to other systems. For example, it does not matter if the clocks in the different systems are totally uncoordinated or run in different time zones; all I need to worry about is retrieving the latest data. Similarly, if I need to know the city a person lives in, I just retrieve it from the database; there is no need to walk through the list of address changes.
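
A toy contrast in Python makes the point; the data shapes here are invented. Measuring reality is one lookup, while reconstructing it means replaying history and trusting that every change was logged, retained, and consistently timestamped:

    # Measuring reality: one direct lookup against the live system.
    def current_city(customer_db, customer_id):
        return customer_db[customer_id]["city"]

    # Reconstructing reality: replay the address-change log to recover the
    # city as of a past moment, assuming every change was captured and the
    # log's clock agrees with the clock that stamped the event of interest.
    def city_at(address_changes, customer_id, as_of):
        city = None
        for change in sorted(address_changes, key=lambda c: c["timestamp"]):
            if change["customer_id"] == customer_id and change["timestamp"] <= as_of:
                city = change["new_city"]
        return city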

This is one of the reasons I believe that even if you can hand-craft very accurate models, the real-time models automatically generated by a self-learning system can, in many cases, end up being much more accurate, because they can take advantage of more data, and more accurate data.
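
As a rough illustration (not RTD's actual algorithm), a self-learning model can be as simple as Naive-Bayes-style counters that are updated the moment each outcome is observed, scoring with the attribute values measured live at decision time:

    from collections import defaultdict

    class SelfLearningModel:
        def __init__(self):
            self.outcomes = defaultdict(int)  # outcome -> count
            self.counts = defaultdict(int)    # (attr, value, outcome) -> count

        def learn(self, attributes, outcome):
            # Update immediately with data measured at decision time; no
            # reconstruction from history is ever needed.
            self.outcomes[outcome] += 1
            for attr, value in attributes.items():
                self.counts[(attr, value, outcome)] += 1

        def score(self, attributes, outcome):
            # Smoothed Naive Bayes estimate of P(outcome | attributes),
            # up to a normalizing constant across outcomes.
            total = sum(self.outcomes.values()) or 1
            p = self.outcomes[outcome] / total
            for attr, value in attributes.items():
                p *= (self.counts[(attr, value, outcome)] + 1) / (self.outcomes[outcome] + 2)
            return p
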
About

Issues related to Oracle Real-Time Decisions (RTD). Entries include implementation tips, technology descriptions and items of general interest to the RTD community.
