Friday Jan 24, 2014

Big Data for Little Decisions, Day to Day Operations

At the BIWA Summit 2014 (Business Intelligence, Warehousing, and Analytics), James Taylor, CEO and Principal Consultant of Decision Management Solutions, provided a provocative keynote on how to make Big Data practical, actionable, and operational. 

As companies invest in Big Data infrastructure, they are looking for ways to show a return on that data. Using business analytics to put this data to work improving decision-making is central to success. But which decisions should be the focus, and how will you show improvement?

James discussed the importance of improving day-to-day operational decisions because they have the largest scale and impact: they drive front-line customer interactions across multiple channels, on the web, in email, on mobile, in social, in call centers, in stores, and in field sales and service. Strategic and tactical decisions are made less frequently and more interactively, and each carries more risk than a front-line operational decision, which is where Big Data's velocity, volume, and variety are most relevant.

Key to powering more proactive decisions with Big Data is reducing decision latency, the time from an event to action, with a better blend of human and machine decision making. Low decision latency requires automating front-line systems to be active participants, whereas merely making people more analytical does not have as much impact on day-to-day operations.

Begin with the outcome in mind, then leverage Big Data in Decision Management Systems to test, learn, and adapt in production for quicker returns and better responses. Think in probabilities: for new customer demand in marketing and sales, for reducing uncertainty, risk, and fraud in operations, and for improving the experience in service and support. The opportunity to democratize Big Data is in Little Decisions.



Wednesday May 30, 2012

A conversation with world experts in Customer Experience Management in Rome, Italy - Wed, June 20, 2012

It is my pleasure to share the registration link below for your chance to meet active members of the Oracle Real-Time Decisions Customer Advisory Board.

Join us to hear how leading brands across the world have achieved tremendous return on investment through their Oracle Real-Time Decisions deployments and do not miss this unique opportunity to ask them specific questions directly during our customer roundtable.

Please share this information with anyone interested in real-time decision management and cross-channel predictive process optimization: http://www.oracle.com/goto/RealTimeDecisions

Thursday Mar 08, 2012

The Era of the Decision Graph

Gone are the days when “electronic billboards” for targeted merchandizing programs were leading edge.

Over the course of the last few years we have observed a dramatic qualitative shift in how companies apply Analytical Decision Management techniques to drive Customer Experience Optimization programs. It used to be the case that marketers were happy when they were allocated a dedicated piece of real estate on their company's web site (or contact center, or any interaction channel for that matter) that they could use at their discretion for the purpose of one-off targeting programs. What companies now want is granular control over the whole user experience, so that the various elements composing this cross-channel dialog can be targeted and relevant. Such a shift requires a new approach to analytics, based on understanding how the various elements of the user interaction relate to one another.

Let me introduce the concept of Decision Graph in support of this idea.

To move from electronic billboard / product spotlight optimization to customer experience optimization, analytics must shift away from focusing on “the right offer for the right customer”. The focus of the Decision Graph is to identify “the right user experience for the right customer”. This change of focus has a critical impact on your analytics requirements, as a one-dimensional targeting approach that matches customers with offers won't address the need to optimize multiple dimensions at once.

This is where the Decision Graph comes in. Let's consider the following graph, which depicts the relationships between the various facets of the user experience to be optimized in the context of a Marketing Optimization use case.

Now imagine that, for every offer presentation on any interaction channel, your analytical engine can record and identify the characteristics of the customer interactions that are associated with success (say, a click or an offer acceptance) across all those dimensions.

Let’s take an example.

  • You see a nice picture of bear cubs on a forest background, with a punchy banner stating “please give us back your share of the 20,000 tons of annual account statements” as a call to action to sign up for electronic bill payment, in the “recommended for you” section of the login page of your financial services web site and … you decide to click on the “one click wildlife donation” link.
  • Our Decision Graph can then record the fact that your customer profile is positively associated with “positive responses” to marketing messages in the following context: Channel (Web), Offer / Product (Electronic Bill Payment), Creative (The Bear Cub image), Tags (Environmental, Wildlife, Donation, Provocative), Slot Type (Image), Slot (Recommended for you), Placement (Login Page). As predictive models are attached to the Decision Graph, this single business event updates 10 predictive models that marketers can now use for reporting and decision management purposes.

You can now generalize the idea and imagine that this graph collects information about all marketing events across all channels. You end up with an analytical system that lets all the actors of customer experience optimization discover the relationships between the different facets of user interactions.
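
As a thought experiment, here is a minimal sketch of how such a graph might record a single response event across every facet of the experience. The class, facet names, and counter-based stand-ins for predictive models are illustrative assumptions, not the RTD implementation.

    from collections import defaultdict

    class DecisionGraph:
        def __init__(self):
            # one positive/total counter per (facet, value) node, standing in for
            # the predictive model attached to that node of the graph
            self.nodes = defaultdict(lambda: {"positive": 0, "total": 0})

        def record_event(self, facets, positive):
            """facets: dict of facet name -> value or list of values."""
            for facet, values in facets.items():
                if not isinstance(values, (list, tuple)):
                    values = [values]
                for value in values:
                    node = self.nodes[(facet, value)]
                    node["total"] += 1
                    node["positive"] += int(positive)

        def response_rate(self, facet, value):
            node = self.nodes[(facet, value)]
            return node["positive"] / node["total"] if node["total"] else None

    graph = DecisionGraph()
    # the bear cub click from the example updates 10 nodes: 6 facets plus 4 tags
    graph.record_event({
        "Channel": "Web",
        "Offer / Product": "Electronic Bill Payment",
        "Creative": "Bear Cub image",
        "Tags": ["Environmental", "Wildlife", "Donation", "Provocative"],
        "Slot Type": "Image",
        "Slot": "Recommended for you",
        "Placement": "Login Page",
    }, positive=True)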

With the Decision Graph:

  • Marketing stakeholders will learn about customer segments that are receptive to eco-centric marketing messages and which customers in the right context will step out of their standard routine (why they came to web site in the first place) to subscribe to specific causes.
  • Web user experience stakeholders will learn which types of marketing messages are appropriate, and for whom, at the start of, at the end of, or throughout a logged-in web session.
  • Content owners can focus their digital agencies on the most effective creative themes, as they will be able to correlate response rates with the associated tags.
  • And the company as a whole will have learned who is receptive to eco-centric marketing messages when they are displayed in a given context of a secured dialog, from which it will be in a position to dynamically tailor user experiences across channels based on such empirical evidence.

Now contrast this with a system that would only record the fact that you've subscribed to the Electronic Bill Payment option as part of the “Go Green” Campaign, and you will get a sense of the power of the Decision Graph. The bottom line is that companies need analytical systems that operate at multiple levels of the Decision Graph if they want to delight their customers with relevant customer experiences.

My next post will be on how the Oracle RTD Decision Manager product enables you to create and configure such graphs and to automatically identify the predictive drivers of response across the whole spectrum of the user experience.

Friday Mar 02, 2012

Announcing RTD 3.0.0.1.11

It is our pleasure to let you know that Oracle just released a new version of the RTD platform addressing some important scalability requirements for high end deployments.

The driver for this new version, released as a patch on top of the RTD 3.0.0.1 platform, was the need to address increasing volumes of “learnings” generated by ever-increasing volumes of “decisions”. This stems from the fact that several of our high-end customers now have production environments with multiple cross-channel Inline Services supporting multiple “decisions”, which make use of multiple predictive models using hundreds of data attributes to select from potentially thousands of “choices”. Addressing those high-end business requirements required more RTD Learning Server capacity than RTD 3.0.0.1 provided.

To address those needs, Oracle re-architected its RTD Learning Server engine to enable some level of parallelization. This new architecture relies on multi-threaded model updates and asynchronous learning-record read and delete operations. This change provides a 150% improvement in learning record processing rates, which enables RTD to process more than 58M incremental learning records per day with a deployment configuration consisting of 3 concurrently active Inline Services, each with 900 choices, 200 data attributes, and 4 choice event predictive/learning models. This was achieved on a machine with 4 cores and 6 GB of RAM allocated to the RTD Learning Server.
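
For readers curious about what this kind of parallelism can look like in principle, here is a small, hypothetical sketch: a reader feeds learning records into a queue and several worker threads update shared model statistics concurrently. The queue, record layout, and thread count are assumptions for illustration; this is not RTD's actual engine.

    import queue
    import threading
    from collections import defaultdict

    records = queue.Queue(maxsize=10000)
    model_stats = defaultdict(int)          # (model, attribute, value) -> count
    stats_lock = threading.Lock()           # per-model locks would reduce contention further

    def reader(source):
        # stands in for asynchronously reading (and later deleting) learning records
        for record in source:
            records.put(record)
        records.put(None)                   # sentinel to stop the workers

    def worker():
        while True:
            record = records.get()
            if record is None:
                records.put(None)           # pass the sentinel on to the other workers
                break
            with stats_lock:
                model_stats[(record["model"], record["attribute"], record["value"])] += 1

    source = [{"model": "m1", "attribute": "age_band", "value": "30-40"}] * 1000
    workers = [threading.Thread(target=worker) for _ in range(4)]
    for w in workers:
        w.start()
    reader(source)
    for w in workers:
        w.join()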

This new version of RTD is an important release for companies setting up Big Data Analysis & Decision platforms in support of real-time and batch targeted customer experience enterprise deployments.

For complete details on this patch please refer to http://www.oracle.com/technetwork/middleware/real-time-decisions/psu11-1532856.html

Tuesday Jan 31, 2012

Introducing Nicolas Bonnet

Nicolas is the Director of Product Management for RTD. He has recently joined us as an author for the RTD blog.

As we pick up the pace on sharing information and ideas on this blog, readers please feel free to participate through comments and ideas for blog entries.

Tuesday Jan 24, 2012

RTD, Big Data and Map Reduce algorithms

I was recently asked to compare and contrast the way that RTD processes its learning with Map Reduce Big Data systems like Hadoop. The question is very relevant, and in reality you could implement RTD learning using Map Reduce for many or most cases without significant difference in functionality. Nevertheless, there are a few interesting differences.

RTD learning is incremental. As the data to be learned is generated by the decision nodes, the learning server learns on this data, incrementally updating its models. A Map Reduce implementation would instead accumulate all the data and periodically use several nodes to learn in parallel (Map) on portions of the data, accumulating that knowledge into a single view of the truth (Reduce). This implementation would be non-incremental, with the full data processed each batch to produce a brand new model.
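
To make the contrast concrete, here is an illustrative sketch (not RTD or Hadoop code) in which the "model" is just an additive table of counts per attribute, value, and outcome. The incremental path folds each record in as it arrives; the Map Reduce path rebuilds the model from stored shards on every batch.

    from collections import Counter

    def incremental_update(model, record):
        # RTD-style: fold a new learning record into the live model as it arrives
        for attr, value in record["attributes"].items():
            model[(attr, value, record["outcome"])] += 1
        return model

    def map_phase(shard):
        # Map Reduce-style: each mapper summarizes its shard of stored records from scratch
        partial = Counter()
        for record in shard:
            incremental_update(partial, record)
        return partial

    def reduce_phase(partials):
        # the reducer merges the partial summaries into a brand new model
        model = Counter()
        for partial in partials:
            model.update(partial)
        return model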

If enough nodes are used together with fast storage, and when the volume of data is not extremely high, then it is possible that the latency in producing a model can be managed and new models produced in a matter of minutes. The computing requirements to keep these models fresh would, however, be much higher than the current requirements for a typical RTD implementation. For example, a single 2-core node dedicated to learning can cope with a volume of more than 20 million records per day. If the time window used is a quarter, then the total number of records for a model is about 3 billion, considering that each model learns over 2 time windows. As fast as Map Reduce may be, reprocessing three billion records, with a few hundred attributes each, for two overlapping model time windows would probably take quite a long time.

Is there an alternative? Is it possible to use the Map Reduce principles to process data incrementally, similarly to what RTD does?

I believe it is. The batch-oriented Map Reduce principles can be modified and extended to work on streams of data instead of stored shards of data. For example, there could be several Map nodes that continuously accumulate the learning from a number of records each, in parallel, and pass the accumulated learning to a consolidating Reduce node that incrementally maintains the models. The number of records the Map nodes accumulate before summarizing will affect the latency with which new data affects the models, providing interesting configuration options to optimize for throughput or latency. Ultimately the tradeoff could be made dynamically, with the Map nodes processing larger windows when they see more load.

An additional consideration is the cost of consolidating the Map-produced summaries: it needs to be small relative to the resources needed to do the accumulation in the first place, as we do not want the Reduce phase to become a dominant bottleneck. Additionally, it would be possible to have layers of accumulation in which several summaries are consolidated.

The fact that RTD models are naturally additive facilitates the distribution of work among the Map nodes.
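
Here is a hypothetical sketch of that streaming variant, under the assumption that the model statistics are additive counts: map workers accumulate micro-batches and emit partial summaries, and a reducer folds them into the live model. The batch size constant is the throughput/latency knob mentioned above.

    from collections import Counter

    BATCH_SIZE = 1000   # larger batches favor throughput, smaller batches favor latency

    def map_worker(stream, emit):
        # accumulate an additive partial summary over a micro-batch of records
        partial, count = Counter(), 0
        for record in stream:
            for attr, value in record["attributes"].items():
                partial[(attr, value, record["outcome"])] += 1
            count += 1
            if count >= BATCH_SIZE:
                emit(partial)                   # ship the summary downstream
                partial, count = Counter(), 0
        if partial:
            emit(partial)

    def reduce_worker(live_model, incoming_partials):
        # because the statistics are additive, consolidation is a simple merge
        for partial in incoming_partials:
            live_model.update(partial)
        return live_model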

In spite of this long discussion of theoretical uses for Map Reduce in the RTD context, it is important to note that, with parallel learning and suitable configurations, RTD learning as it stands today can cope with very high volumes of data.

Friday Jan 06, 2012

Introducing Lukas Vermeer

I have the pleasure to introduce a new contributor to this blog, Lukas Vermeer. He has been an RTD practitioner for a long time and I'm very happy he will be contributing from his field experience to this blog.

Michel

Tuesday Jan 03, 2012

Multiple Presentations - Part 3

Continuing the series analyzing the effect of multiple presentations of the same content on the likelihood of a click.

Exponential Decay

The curve of the likelihood of a click after the peak can be approximated by an exponential decay function. For example, the likelihood of a click on the 19th presentation may be 93% of the likelihood of the 18th presentation. We will model the beginning of the curve as a constant, ignoring for now the variation, in particular for the first presentation. The result looks like this.

Graph showing that an exponential curve matches more or less the empirically measured curve for likelihood against presentation number.

This is a very simple model that will work well in many situations. It is also technically simple, as the only state it requires is the number of times the content has been presented to each specific visitor.
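
A minimal sketch of this model, with assumed values for the base likelihood, plateau length, and decay factor, could look like this:

    BASE_LIKELIHOOD = 0.01   # assumed click likelihood during the plateau
    PLATEAU = 5              # assumed number of presentations before decay starts
    DECAY = 0.93             # each later presentation is 93% as likely as the previous one

    def likelihood(presentation_number):
        # constant at the start, exponential decay afterwards; the only per-visitor
        # state needed is the count of presentations of this content so far
        if presentation_number <= PLATEAU:
            return BASE_LIKELIHOOD
        return BASE_LIKELIHOOD * DECAY ** (presentation_number - PLATEAU)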

I believe this model will work when:

  • The number of presentations of the same content tends to be high; higher than 5 on average.
  • The time between presentations is not long; within the same session or at most a few hours in between presentations.
  • The content is presented only in a specific channel and location.

The model will not work when:

  • The number of presentations for each content is small, but bigger than 1. In these cases the difference between the first presentation and subsequent ones cannot be ignored.
  • The time between presentations tends to be longer; a few days in between presentations, for example. This time is long enough for the effect of previous presentations to wane, but not long enough for it to disappear completely.
  • The content is presented in different channels or locations. For example, the visitor may be exposed to the same message in an email as well as in the main banner of the home page, in different locations within the pages, or on pages with significantly different surrounding content.
  • The same message is presented with similar content, but perhaps not the same image dimensions and text.

In order to cope with these shortcomings we need to use a model that will take into account the time between presentations and the impact each presentation has. The impact is affected by factors like the channel, location and visual characteristics of the content.

Monday Dec 12, 2011

Multiple Presentations - Part 2

In the first part of this series we explored the problem of predicting likelihood with multiple presentations of the same content. In this entry we will explore a few more options.

Modeling Options (continued)

Presentation Cap

With presentation caps, a person decides on a threshold: the maximum number of times a specific piece of content is to be presented to any given person. For example, "this offer should not be presented more than 5 times to the same customer."

This scheme avoids the problem of wasting presentations in the long tail, but like any arbitrary threshold, it is not optimal. If the number is too large, there will still be wasted presentations; if it is too small, the effort will quit too early, leaving potential customers "unimpressed."

An additional problem with this scheme stems from the fact that the situation of the customer may change. That is, maybe this offer was not relevant a month ago, but now, with changed circumstances, it may be relevant. A way to solve this problem is to set an expiration date for the cap.

In summary, for each choice two numbers are given: the maximum number of times to present the same choice to the same person, and the number of days after which the count is reset.
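
A small illustrative implementation of this cap-plus-reset scheme might look like the following; the storage and default values are assumptions, not a description of any product feature.

    from datetime import datetime, timedelta

    CAP = 5                  # maximum presentations of a choice to the same person
    RESET_AFTER_DAYS = 30    # the presentation count is reset after this many days

    # per (customer, choice): {"count": int, "first_shown": datetime}
    history = {}

    def _expired(entry, now):
        return now - entry["first_shown"] > timedelta(days=RESET_AFTER_DAYS)

    def eligible(customer_id, choice_id, now):
        entry = history.get((customer_id, choice_id))
        return entry is None or _expired(entry, now) or entry["count"] < CAP

    def record_presentation(customer_id, choice_id, now):
        entry = history.get((customer_id, choice_id))
        if entry is None or _expired(entry, now):
            history[(customer_id, choice_id)] = {"count": 1, "first_shown": now}
        else:
            entry["count"] += 1

    # usage: if eligible(cust, offer, datetime.utcnow()): present the offer and record it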

Graph of clicks against number of presentations. A horizontal purple line represents the estimated likelihood until the cap is reached; the area between the original curve and the purple line is shaded yellow to indicate overestimation of the likelihood, and the area after the cap is shaded red to indicate underestimation.

It is also clear that if the cap is too low then the untapped potential - the red area - grows tremendously. Conversely, if the cap is too large then the wasted presentations - the yellow area - become more and more significant. Using a presentation cap is, in effect, trying to fit a square peg into a round hole.

A further problem with this approach is one of modeling. Should you train the model with each presentation or only when the cap is reached?

In the next installment we will explore a possible better approach.

Wednesday Nov 30, 2011

Predicting Likelihood of Click with Multiple Presentations

When using predictive models to predict the likelihood that an ad or a banner will be clicked, it is common to ignore the fact that the same content may have been presented to the same visitor in the past. While the error may be small if visitors do not often see repeated content, it may be very significant for sites where visitors come back repeatedly.

This is a well recognized problem that usually gets handled with presentation thresholds: for example, do not present the same content more than 6 times.

Observations and measurements of visitor behavior provide evidence that something better is needed.

Observations

For a specific visitor, during a single session, for a banner in a not too prominent space, the second presentation of the same content is more likely to be clicked on than the first presentation. The difference can be 30% to 100% higher likelihood for the second presentation when compared to the first.

That is, for example, if the first presentation has an average click-through rate (CTR) of 1%, the second presentation may have an average CTR of between 1.3% and 2%.

After the second presentation the CTR stays more or less the same for a few more presentations. The number of presentations in this plateau seems to vary by the location of the content in the page and by the visual attraction of the content.

After these few presentations the CTR starts decaying with a curve that is very well approximated by an exponential decay. For example, the 13th presentation may have 90% the likelihood of the 12th, and the 14th has 90% the likelihood of the 13th. The decay constant seems also to depend on the visibility of the content.

Chart representing click likelihood as a function of the presentation number. We can see that the first presentation has less likelihood than the second. Then it plateaus and after the sixth presentation it starts an exponential decay.

Modeling Options

Now that we know the empirical data, we can propose modeling techniques that will correctly predict the likelihood of a click.

Use presentation number as an input to the predictive model

Probably the most straightforward approach is to add the presentation number as an input to the predictive model. While this is certainly a simple solution, it carries with it several problems, among them:

  1. If the model learns on each case, repeated non-clicks for the same content will disproportionately reinforce the model's belief about the non-clicker. That is, the weight of a person who does not click through 200 presentations of an offer may be the same as that of 100 other people who, on average, click on the second presentation.
  2. The effect of the presentation number is not a customer characteristic or a piece of contextual data about the interaction with the customer; it is contextual data about the content presented.
  3. Models tend to underestimate the effect of the presentation number.

For these reasons it is not advisable to use this approach when the average number of presentations of the same content to the same person is above 3, or when the presentation number can become very large, in the tens or hundreds.

Use presentation number as a partitioning attribute to the predictive model

In this approach we essentially build a separate predictive model for each presentation number. This approach overcomes all of the problems of the previous one; nevertheless, it can be applied only when the volume of data is large enough for these very specific sub-models to converge.
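
As an illustration, partitioning can be as simple as keeping one model per presentation number, with a catch-all partition for very large numbers. The stand-in model below is just a click counter; the real model type and the partition bound are assumptions.

    from collections import defaultdict

    class ClickCounter:
        """Stand-in for a per-partition predictive model."""
        def __init__(self):
            self.clicks, self.views = 0, 0
        def learn(self, clicked):
            self.views += 1
            self.clicks += int(clicked)
        def predict(self):
            return self.clicks / self.views if self.views else None

    MAX_PARTITION = 10                      # presentations beyond this share one model
    models = defaultdict(ClickCounter)      # one model per partition

    def partition(presentation_number):
        return min(presentation_number, MAX_PARTITION)

    def learn(presentation_number, clicked):
        models[partition(presentation_number)].learn(clicked)

    def predict(presentation_number):
        return models[partition(presentation_number)].predict()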

In the next couple of entries we will explore other solutions and a proposed modeling framework.

Tuesday Nov 29, 2011

Customer retention - why most companies have it wrong

At least in the US market it is quite common for service companies to offer an initially discounted price to new customers. While this may attract new customers and poach customers from competitors, my argument is that it is a bad strategy for the company. It gives customers an incentive to change companies and a disincentive to stay. From the point of view of the customer, after 6 months of being a customer, the company rewards their loyalty by raising the price.

A better strategy would be to reward customers for staying with the company. For example, by lowering the price by 5% every year (a compound discount, so it never reaches zero). This is a very rational thing for the company to do. Acquiring new customers and setting up their service is expensive, and new customers also tend to use more of the common resources, like customer service channels. It is probably true for most companies that the cost of providing service to a customer of 10 years is lower than the cost of providing the same service in the first year of a customer's tenure. It is only logical to pass these savings on to the customer.
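
As a quick illustration of the compounding, the price falls every year but never reaches zero:

    base_price = 100.0
    for year in range(1, 11):
        print(year, round(base_price * 0.95 ** year, 2))
    # year 1: 95.0, year 5: about 77.38, year 10: about 59.87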

From the customer point of view, the competition would have to offer something very attractive, whether in terms of price or service, in order for the customer to switch.

Such a policy would give an advantage to the first mover, but would probably force the competitors to follow suit. Overall, I would expect that this would reduce the mobility in the market, increase loyalty, increase the investment of companies in loyal customers and ultimately, increase competition for providing a better service.

Competitors may even try to break the scheme by offering to port customers' tenure, but that would not work well because it would disenchant existing customers and would be costly, assuming it is costlier to serve a customer through installation and the first year.

What do you think? Is this better than using "save offers" to retain flip-floppers?

Analyst Report on RTD

An interesting analyst report on RTD has been published by MWD; a reference and description can be found in this blog entry.

Thursday Nov 17, 2011

Short Season, Long Models - Dealing with Seasonality

Accounting for seasonality presents a challenge for the accurate prediction of events. Examples of seasonality include: 

  • Boxed cosmetics sets are more popular during Christmas. They sell at other times of the year, but they rise higher than other products during the holiday season.
  • Interest in a promotion rises around the time its advertising airs on TV.
  • Interest in the Sports section of a newspaper rises when there is a big football match.

There are several ways of dealing with seasonality in predictions.

Time Windows

If the length of the model time windows is short enough relative to the seasonality effect, then the models will see only seasonal data, and therefore will be accurate in their predictions. For example, a model with a weekly time window may be quick enough to adapt during the holiday season.

In order for time windows to be useful in dealing with seasonality it is necessary that:

  1. The time window is significantly shorter than the seasonal changes
  2. There is enough volume of data in the short time windows to produce an accurate model

An additional issue to consider is that sometimes the season may have an abrupt end, for example the day after Christmas.

Input Data

If available, it is possible to include the seasonality effect in the input data for the model. For example the customer record may include a list of all the promotions advertised in the area of residence.

A model with these inputs will have to learn the effect of the input. It is possible to learn the effect specific to each promotion (and, incidentally, learn about inter-promotion cross-feeding) by leaving the list of ads as it is; or it is possible to learn the general effect by having a flag that indicates whether the promotion is being advertised. (A small sketch of these two options appears after the list below.)

For inputs to properly represent the effect in the model it is necessary that:

  1. The model sees enough events with the input present. For example, by virtue of the model lifetime (or time window) being long enough to see several “seasons” or by having enough volume for the model to learn seasonality quickly.
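
As a simple illustration of the two options just described, seasonality can enter the model as ordinary inputs on each event record; the field names here are assumptions, not an RTD feature definition.

    def build_features(customer, active_promotions):
        """active_promotions: dict of region -> list of promotions advertised there."""
        features = {
            "age_band": customer["age_band"],
            "region": customer["region"],
        }
        promos = active_promotions.get(customer["region"], [])
        # option 1: one input per promotion, so the model can learn promotion-specific
        # effects (and inter-promotion cross-feeding)
        for promo in promos:
            features["promo_" + promo] = 1
        # option 2: a single flag capturing the general "a promotion is running" effect
        features["promotion_advertised"] = int(bool(promos))
        return features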

Proportional Frequency

If we create a model that ignores seasonality, it is possible to use that model to predict how a specific person's likelihood differs from the average. If there is a divergence from the average, we can transfer that divergence proportionally to the observed frequency at the time of the prediction.

Definitions:

Ft = trailing average frequency of the event at time “t”. The average is taken over a suitable period to achieve a statistically significant estimate.

F = average frequency as seen by the model.

L = likelihood predicted by the model for a specific person.

Lt = predicted likelihood proportionally scaled for time “t”.

If the model is good at predicting deviation from average, and this holds over the interesting range of seasons, then we can estimate Lt as:

Lt = L * (Ft / F)

Considering that:

L = (L – F) + F

Substituting we get:

Lt = [(L – F) + F] * (Ft / F)

Which simplifies to:

(i)  Lt = (L – F) * (Ft / F) + Ft

This latest expression can be understood as “the adjusted likelihood at time t is the average likelihood at time t plus the effect from the model, which is calculated as the difference from the average times the ratio of frequencies”.

The formula above assumes a linear translation of the proportion. It is possible to generalize the formula using a factor which we will call “a” as follows:

(ii)  Lt = (L – F) * (Ft / F) * a + Ft

It is also possible to use a formula that does not scale the difference, like:

(iii)  Lt = (L – F) * a + Ft

While these formulas seem reasonable, they should be taken as hypotheses to be proven with empirical data. A theoretical analysis provides the following insights:

  1. The Cumulative Gains Chart (lift) should stay the same, as at any given time the order of the likelihood for different customers is preserved
  2. If F is equal to Ft then the formula reverts to “L”
  3. If (Ft = 0) then Lt in (i) and (ii) is 0
  4. It is possible for Lt to be above 1.

If it is desirable to avoid going over 1 for relatively high base frequencies, it is possible to use a relative interpretation of the multiplicative factor.

For example, if we say that Y is twice as likely as X, then we can interpret this sentence as:

  • If X is 3%, then Y is 6%
  • If X is 11%, then Y is 22%
  • If X is 70%, then Y is 85% - in this case we interpret “twice as likely” as “half as likely to not happen”

Applying this reasoning to (i), for example, we would get:

If (L < F) or (Ft < 1 / ((L / F) + 1))

Then  Lt = L * (Ft / F)

Else  Lt = 1 – (F / L) + (Ft * F / L)
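
Putting this together, a small sketch of formula (i) with the capped, “relative” branch above might look like this; as noted, it is a hypothesis to validate with empirical data.

    def adjusted_likelihood(L, F, Ft):
        """L: model likelihood for the person; F: average frequency seen by the model;
        Ft: trailing average frequency at time t."""
        if L < F or Ft < 1.0 / ((L / F) + 1.0):
            return L * (Ft / F)                  # simple proportional scaling, as in (i)
        # interpret "r times as likely" as "1/r times as likely not to happen"
        return 1.0 - (F / L) + (Ft * F / L)

    # example: L = 0.70, F = 0.35, Ft = 0.50; the person is 2x the average and Ft is above
    # the crossover, so the capped branch returns 0.75 where plain scaling would return 1.0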

 

Tuesday Aug 23, 2011

Don't do Product Recommendations!

While it is attractive to talk about "product recommendations" as one single functionality, in reality it should be seen as a family of different functions for different situations and purposes.

The following example provides the motivation for the differentiation between the different types of product recommendations.

A couple enters a supermarket and their smartphone connects to the store's computer. You have the opportunity to give them an offer. You want to offer them a 10% discount on one product; which product should it be? You have the following information:

  • Models predict a likelihood of 10% to purchase Fat Free Milk
  • Models predict a likelihood of 2% to purchase Gruyère cheese

Which of these two products would you recommend?

What if in addition, you knew:

  • Average margin for milk is $0.50
  • Average margin for cheese is $1

Would you recommend the milk because it is more likely to be purchased? Or the cheese because it has a higher margin? Or the milk because it has a higher expected margin (likelihood times margin)?

And what if, in addition, you knew:

  • Average likelihood to purchase Fat Free Milk is 15% across all customers
  • Average likelihood to purchase Gruyère cheese is 0.1% across all customers

I believe that the better of the two products, for this specific situation, is the cheese. The reasons lie behind the numbers and the goal of the recommendation.

From the statement of the situation, it is reasonable to infer that the goal of the offer is mainly to increase the basket, with the additional benefits of having happy and loyal customers. While at first sight the 10% associated with the milk is the higher number, it has two problems. First, it is high, and it is quite possible that the customer would have thought of buying the milk anyway. Second, it is lower than the average. Put in words, the customer is "more than 30% less likely to buy fat free milk than the average customer."

The cheese should be the winner because "the customer is 20 times more likely to buy Gruyère cheese than the average customer."
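
As a small worked comparison of the criteria discussed here, using the numbers from the example above (the code is illustrative, not a prescribed selection algorithm):

    candidates = {
        "Fat Free Milk":  {"likelihood": 0.10, "avg_likelihood": 0.15,  "margin": 0.50},
        "Gruyere cheese": {"likelihood": 0.02, "avg_likelihood": 0.001, "margin": 1.00},
    }

    for name, c in candidates.items():
        expected_margin = c["likelihood"] * c["margin"]
        lift = c["likelihood"] / c["avg_likelihood"]
        print(name, "expected margin:", round(expected_margin, 3), "lift:", round(lift, 1))

    # milk:   expected margin 0.05, lift about 0.7 (below the average customer)
    # cheese: expected margin 0.02, lift 20.0 (far above the average customer)
    # ranking by lift, rather than raw likelihood or expected margin, picks the cheese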


The variables that come into play for the selection of the best product are:

  1. Context of the recommendation. Stage in the process, purpose.
  2. Selection universe. The set of products from which the recommendation can be selected.
  3. Selection criteria

In future posts we will explore how each of these variables affects the way Product Recommendations should work.

Monday Aug 22, 2011

Performance Goals for Human Decisions

I've often asked myself whether we, humans, make decisions in a similar way to what RTD does. Do we balance different KPIs? Do we evaluate the different options and choose the one that maximizes our performance goals?

To answer this question, one would have to ask what your performance goals are and how much you value each one of them. It would seem logical that our decisions would be made in such a rational way that they are totally driven by the evaluation of each alternative and the selection of the best one.

Following this logic, one could surmise that if we were able to discover the performance goals that are relevant for a specific person, and the weights for each one, we could be very good at predicting human behavior. Instead of using the Inductive prediction models like the ones we have today in RTD, we could use Deductive models that mimic the logic of the person to arrive at the predicted behavior.

Fortunately, as learned by modern economists and brilliantly put by Dan Ariely in his book Predictably Irrational, human decisions are typically not the result of rational optimization, but heavily influenced by emotions and instinct.

This is one of the reasons that rule systems perform so poorly in trying to predict human behavior. A rule system would try to detect the reason behind the behavior. Empirical, inductive models work much better because they do not try to discover the pattern behind a behavior, but the common characteristics of people; and while we cannot rationally explain many of our behaviors, we do see a lot of commonality. While we are each a unique individual, it is possible to predict our behavior by generalizing from what is observed about people similar to us.

I was recently on a Southwest Airlines flight. As usual, travelers had optimized their seat selection according to what was available and convenient, mostly preferring seats toward the front of the plane, windows, and aisles. Can we predict whether a person will prefer a window or an aisle? Absolutely: just look at the history of seats that person has chosen in the past. While you may claim that such a model is obvious, it is a good model based on generalization from past experience. I still cannot answer the question of what motivates a given person to prefer a window seat, but I can predict with great accuracy which one they will prefer on a specific flight.

Once everyone had selected their seat there were about 20 middle seats left in the plane. Just before the door closed, a mother with a child entered the plane in a hurry. She evaluated the situation, and for her the Performance Goal of sitting beside her child was the most important. Since none of the "eligible" choices were good, she tried to create a new choice by asking the flight attendants for help.

As the attendant was starting to make an announcement asking for someone to give up their aisle or window seat, I saw the situation and immediately offered my aisle seat. Then, of course, I tried to figure out why I decided to do that. Why didn't others? Was it purely emotional, or was there a Performance Goal mechanism involved?

Sure, there are benefits to giving up your seat in a situation like this. For example, I got preferential treatment and free premium drinks from the flight attendants for the rest of the flight, but I did not know about those benefits in advance. Are there other hidden KPIs?

I would like to hear from you. What are the Performance Goals that motivate people to action? Is there a moral framework within which decisions do follow KPI optimization?

About

Issues related to Oracle Real-Time Decisions (RTD). Entries include implementation tips, technology descriptions and items of general interest to the RTD community.
