Harnessing Business Events for Predictive Decision Making - part 2 / 3

In my earlier post, I discussed the workings of the human brain to illustrate the capabilities desired of next-generation decision systems that harness business events for real-time predictive decision making.

Below is a graphical depiction of the attributes of a "brain-like" decision system, its benefits and the underlying technology enablers.

Achieving near real-time predictive intelligence places special demands on the technology components, hardware in particular, in terms of scalability, fault tolerance and capacity (both compute and storage). After all, performance is of the essence in building decision systems that operate at brain-like speeds. A remarkable capability of the human brain is that data and instructions are physically part of the same component: neurons. This is what allows the brain to store exabytes (1 exabyte = 1 million terabytes) of data and process an equally vast number of real-time events in a flash. In fact, more than 99% of the data storage, retrieval and processing in the brain happens without conscious thought. In contrast, memory and processing are handled by separate components in computers. This is why it takes supercomputers consuming megawatts of energy (for comparison's sake, the human brain consumes around 12 watts at peak performance) to simulate human brain activity. Building a supercomputer for predictive decision making is certainly an over-ambitious, if not audacious, endeavor for most businesses.

Given enough time, skills and financial resources it is certainly possible to build such hardware systems. However, this is neither a core competency of most businesses nor a capability they should strive to acquire. After all, businesses have more pressing concerns around realizing returns on IT investments in ever-shrinking time scales than experimenting with their technology infrastructures. A possible solution lies in integrated systems that are engineered from the ground up rather than merely assembled from multi-vendor components. The rationale here is that optimizing individual parts is unlikely to optimize the whole. Simply assembling a hardware system by self-integrating servers, storage, interconnects and operating systems from multiple vendors is likely to create bottlenecks at the slowest link in the chain, negating any benefits arising from the performance claims of individual component vendors.

Oracle has a unique capability in delivering engineered systems, comprising middleware components and a hardware substrate, that allow you to get up and running with such decision systems while rationalizing costs and mitigating execution risk. If you are contemplating such event-driven decision systems, feel free to drop me a note.
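To make the idea of an event-driven decision system concrete, here is a minimal sketch of the core loop: events stream in, a sliding time window keeps recent history, and a simple rule turns a detected pattern into a decision. The class name, event types and thresholds below are illustrative assumptions, not part of any product; a real CEP engine would use a richer pattern language and a predictive model rather than a fixed count.

```python
from collections import deque
import time

class EventDecisionEngine:
    """Illustrative sketch of an event-driven decision loop:
    keep recent events in a sliding time window and flag a
    pattern when an event type recurs past a threshold."""

    def __init__(self, window_seconds=60, threshold=3):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self.events = deque()  # (timestamp, event_type) pairs, oldest first

    def ingest(self, event_type, timestamp=None):
        now = timestamp if timestamp is not None else time.time()
        self.events.append((now, event_type))
        # Evict events that have fallen out of the sliding window.
        while self.events and now - self.events[0][0] > self.window_seconds:
            self.events.popleft()
        return self.decide(event_type)

    def decide(self, event_type):
        # Decision rule: N occurrences of the same event type
        # inside the window triggers an alert.
        count = sum(1 for _, t in self.events if t == event_type)
        return "ALERT" if count >= self.threshold else "OK"

# Three hypothetical "card_declined" events within one minute trip the rule.
engine = EventDecisionEngine(window_seconds=60, threshold=3)
engine.ingest("card_declined", timestamp=100.0)
engine.ingest("card_declined", timestamp=110.0)
print(engine.ingest("card_declined", timestamp=120.0))  # third in window -> ALERT
```

The design point the post argues for is that this loop must run at very low latency over very high event volumes, which is where the underlying hardware and its integration with the middleware come into play.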

Comments:

Interesting post - thanks for sharing your point of view.

I think the pattern detection piece needs to include predictive analytics not just BAM/CEP. The value of historical data, often transactional data, in pattern detection and prediction is often very high. The past behavior of customers, as recorded in your customer transactions, can be highly predictive of their future behavior. Too many BAM/CEP environments mistake real-time current visibility for true predictive capabilities.

James

Posted by guest on December 15, 2011 at 11:46 AM PST #

About

A business centric perspective on Private Cloud, Data-center Modernization and EAI.

Author:
Sanjeev Sharma
Twitter: @sanjeevio
