Tuesday Jan 26, 2010

Sun's Recommendations Carousel: Results and Learnings

In September, 2009, we released version 2.0 of the Sun Machine Learning Engine (SMILE), Sun's own advertising-oriented predictive analytics system. The primary customer-facing manifestation of v2.0 is the yellow-bordered SMILE Recommendations "Carousel," which you see on many sun.com web pages:

SMILE Recommendations Carousel, small

As it's been more than four months since its release, I'll present some of the measured results. Overall, we've been very pleased with the value delivered by the carousel. (There have been some negative reactions, too, but I think that's to be expected when introducing highly-visible new advertising into the mix. More on that later...)

Bottom line, the carousel has exceeded our expectations and far outperformed the small ads we displayed using SMILE v1. Ads (also referred to as "recommendations") in the carousel were clicked by our visitors as often as one million times per week, yielding an average click-through rate of over 3.6% (and the rate was much higher on some sections of the site than on others, depending on the audience). If you're familiar with online display advertising, you'll know that's a pretty phenomenal CTR, and we attribute it to the effectiveness of the SMILE analytics and recommendation engine, a strong inventory of content and offers, and highly visible placement.

Clicking on an ad, though, is just the start, so we also measure what happens next, our so-called "key success metrics." These include:

  • Downloads Initiated
    Many ads point to either the Sun Download Center (encouraging visitors to get our free software) or specific software downloads, and SMILE drives on average 14,000-16,000 downloads per week. This is very beneficial, as getting our software into customers' hands is often their first real engagement with Sun.
  • Offers Obtained
    We provide many white papers that require simple registration or login to obtain, and the carousel often advertises and links to these offers (here's an example offer). They provide valuable contacts and leads to our Sales team, and SMILE drives on average 6,000-7,000 of these new contacts per week.
  • TeleWeb Success
    Here we measure how many visitors initiate a contact using the "TeleWeb widget" (pictured at right) after clicking a SMILE recommendation. Each such contact has a significant potential sales value (that we've measured accurately over time), so it's significant that SMILE delivers 100 or so contacts each week.
  • Sun Startup Essentials (SSE) Applications Submitted
    The SSE program is a win-win for Sun and startup companies, providing great value to each over the long run. SSE created a number of SMILE ads, and they perform very well, driving up to 25% of the weekly applications received into the program.

We measure many other variables and events, and in fact, thanks to recent enhancements, can now do closed-loop reporting with some of our Tele-Sales teams. This takes reporting one step further, providing the actual potential dollar value driven by SMILE. Here's how it works: When contacting a lead generated through a SMILE click, sales reps enter into their CRM system an estimate as to the potential value of the lead. This doesn't include all leads generated, as they go to different teams, and not all of them have enabled closed-loop reporting.  There is also a time lag between lead generation, dissemination, contact, and possible assignment of marketing pipeline value into the system. Even with those caveats, SMILE generated over $2,500,000 in potential leads value in December, 2009, alone! This is a great example of our push towards measurable, "deterministic" marketing and how we can realistically start to calculate ROI on this project.

Our real-world experience also comes with customer feedback, and we see room for improvement in the following areas:

  • First, do a better job of surfacing and explaining how the carousel and the recommendations work. The info has always been there but was "buried" on the All Recommendations Page. We would like to add an "About Recommendations" link to the bottom border of the carousel so that curious visitors, and those concerned about privacy, can easily learn more about the program, how it works, and how we handle privacy-related matters. The link would go to the bottom of the All Recommendations page where we've recently added a new section on recommendations, privacy, and program FAQs.
  • When a visitor closes the carousel, it stays closed for the duration of the browser session. But for users who visit often, this proved inadequate. We would like to implement a different solution that keeps the carousel closed longer for users who don't wish to view it.
  • Due to an initial basic mapping system between images, products, and ads, we sometimes show duplicate images and/or lack image variety. We've been adding new images to the system to address this.
  • There are a few more simple enhancements that will also increase variety in the carousel, such as not showing slight ad variations for the same product or offer at the same time, and eliminating ads that link to the page the user is already on.
  • Lastly, on the back-end, we continue to refine and enhance the recommendation engine and methodologies with a goal of always increasing the relevancy of the ads to our visitors.

Will these enhancements see the light of day? Some decisions are pending the closure of the Oracle acquisition, so time will tell.

Regardless of what happens, though, I hope this information helps convey the strong results the program has produced, the success of our predictive technology, how it can be further improved, and the promise such systems hold for the future.

Thursday Oct 15, 2009

Survey Says: "Two-Thirds of Americans Object to Online Tracking"

Thanks to my colleague Paul Strupp for bringing a recent New York Times article to my attention: "Two-Thirds of Americans Object to Online Tracking." This is the conclusion of a recent survey conducted by professors at the University of Pennsylvania and the University of California, Berkeley. As I'm currently the Project Manager for an online marketing system that leverages online tracking, this was obviously of interest to me and our team. The title of this post does a good job of summarizing the general negative attitude of those surveyed, and I won't rehash what's in the article, trusting you'll give it a read if interested. After reading the article as well as the full study, here are some of my thoughts.

My first reaction is similar to this quote in the article from Stuart P. Ingis, a partner at the law firm Venable who represents the industry trade groups' self-regulation coalition: '"Just because many Americans are not in favor of something does not mean it should be banned," he said, citing negative feelings about taxes.' 

I think the tax analogy might be a tad extreme, but you could survey Americans and find a zillion things they don't like but that no one has any intention of banning. Ask me how I like the fact that TV ads are now inserted in the shows themselves (instead of just in between the action, where you can mute or fast-forward through them), and I could give a pretty good rant about how much I dislike that practice (OK, it's a personal pet peeve). But it's part of the price we pay for free TV, and I don't see any calls to make it illegal. I acknowledge that personal privacy and TV commercials aren't necessarily of equal weight in the "big scheme of things," but hopefully you get the idea, so I'll leave it at that.

Bottom line, businesses derive value from these systems, and at Sun, we see measurable positive results from our use of predictive analytics, many of which rely upon online behavior data. (The results from our recent release of Sun Machine Learning Engine v2.0 are even more positive, but I'll cover that in a subsequent post.) While we do aggregate data from Sun-owned domains, we don't aggregate from non-Sun domains like many ad networks do, and users in the study objected somewhat less when the tracking stayed within a single company.

We also have direct feedback from Sun customers who participated in a recent usability study about our project, and their attitude was quite different. They said they had an expectation that a large sophisticated enterprise web site would track their online visit, and if we used the data to provide them recommendations that were accurate and helpful, they had no issue with it. (The nature of our business was important too -- they said they'd feel differently if we were their financial institution, for example.) They noted how large our site is and that it can be tricky to navigate -- if we can help them find valued information more quickly, they were actually quite supportive. Granted, this was a very small sample and not scientific like the survey, but I just wanted to point out that some in Sun's audience have a different attitude.

That said, I'm as paranoid about my personal information and privacy as anyone -- I always disallow third-party cookies, opt out of advertising networks' cookies (as best I can), shred everything, etc. So I totally get it. Further, I agree the onus is on the businesses using online tracking to do a much better job of assuaging concerns and communicating. I suggest three best practices:

First, transparency is critical, and we must be up-front about what we do. Sun has an extensive Privacy Policy that I feel is understandable, honest, and forthcoming. We went a step further as well to add specifics about the online tracking we've implemented recently, which you can see on the new "All Recommendations" page:

Privacy statement

It's tricky to explain this adequately in as few words as possible (on the assumption that users typically don't want to read that much). Hopefully this short blurb communicates, at a high level, how we've implemented our recommendations in relation to tracking.

Second, users should own their own data and be given control over it. I can't say we've totally got this right yet, though we do allow a general cookie opt-out which will prevent us from tracking anonymous online behavior. I think technology has a ways to go here, as making your "average" user manage their cookies at this level isn't a great solution. There's room for improvement and advancement in this whole area of customer data management and empowering customers to manage their data painlessly, though I think Sun is doing pretty well in comparison to many other companies.

Lastly, like any change you want folks to accept, you must answer "What's In It For Me?" It's a negotiation -- we're asking users to let us track their online behavior in order to make recommendations that we believe will benefit them (and us of course), potentially in a number of ways:

  • Fast, (usually) free access to relevant information, including informative white papers and no-cost software downloads the user might not otherwise locate
  • More direct access to potentially valuable promotions, specials, and offers
  • Aggregation of all the info into a single, simple user interface

To explain the benefits explicitly would require yet more words, and that's a bit of a conundrum for the web experience. Hopefully the experience itself conveys the benefits adequately (and customers can always read my blog to get the full scoop ;) .

Again, relating to this personally, I know that by disabling my ad network cookies, I might miss some targeted ads that might resonate with me, but that's my choice. On the other hand, I wouldn't want to deny Netflix the opportunity to recommend movies to me, so I explicitly tell them what I like and don't like and don't worry that they know about every movie I watch. Similarly, I don't try to turn off product recommendations on Amazon that I often find helpful. This is a trade-off that works for me. I'm hopeful that as we get further down this path, we can do a better job all over the Internet of transparency, data management, providing customer data control, and communicating (and delivering) benefits. If we do, maybe someday in the not too distant future, only "One-Third of Americans" will object!

Friday Aug 28, 2009

Predictive Analytics: Measuring the Results

I wrote an overview earlier this month about Predictive Analytics (PA), what it is and what sorts of benefits it offers. I've also written about the Sun Machine Learning Engine (SMILE) Project, putting PA to work on Sun's web sites. Today marks the end of SMILE Phase 1, as we are in the midst of releasing SMILE v2.0. You'll see the results on www.sun.com over the course of the next month, as we gradually enable more and more of the site with our new "Recommendations for you" carousel. Here's what you'll be seeing soon!

SMILE Recommendations Carousel

So I thought today would be a good time to touch on the results from SMILE 1.0. The initial release was simply about serving small text-only ads in the right hand column on many sun.com sites. Here's an example of a SMILE-served ad (outlined in red):


To evaluate results, we calculated standard metrics, such as impressions (number of times the ad was displayed), clicks (number of times the "call to action" link in the ad was clicked), and CTR (click-through rate -- number of clicks divided by number of impressions, as a percentage). I'm not able to share the detailed CTRs at this time, but suffice it to say they're low. That's not unexpected -- these are small ads, easily overlooked or ignored, and you shouldn't expect a lot of clicks on this type of banner ad. 
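The standard metrics above reduce to simple arithmetic. As a minimal sketch (the counts below are purely illustrative, not actual SMILE figures):

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks divided by impressions, as a percentage."""
    if impressions == 0:
        return 0.0
    return clicks / impressions * 100

# Illustrative counts only -- not actual SMILE figures
impressions, clicks = 50_000, 1_800
print(f"CTR: {ctr(clicks, impressions):.2f}%")  # prints "CTR: 3.60%"
```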

However, we also calculated Uplift, and that's where we looked to measure the power of our analytics. We did not have recommended ads for all customers, just return visitors with anonymous cookies that we recognized from earlier visits. Thus, we served many default ads, as well as many recommended ads. By comparing the CTRs of both, we can explicitly measure the influence of our predictive system, measured as "uplift." Here's a simple example:

  • On a web page, we show 100 SMILE-recommended ads that get 15 clicks, for a 15% CTR.
  • On the same page, we show 100 default ads (same size, same location, just not personally targeted), and they get 10 clicks, for a 10% CTR.
  • The SMILE Uplift in this case is 50% ((15 - 10) / 10 * 100).
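The uplift arithmetic in the example above can be sketched as follows, using the numbers straight from the bullets (15% CTR for recommended ads vs. 10% for default ads):

```python
def uplift(recommended_ctr: float, default_ctr: float) -> float:
    """Percentage uplift of recommended-ad CTR over default-ad CTR."""
    return (recommended_ctr - default_ctr) / default_ctr * 100

# 100 recommended ads with 15 clicks -> 15% CTR
# 100 default ads with 10 clicks -> 10% CTR
print(uplift(15.0, 10.0))  # prints 50.0
```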

We carefully tracked SMILE Uplift for the last five months, and we saw an average uplift of 58.3%. As we serve millions of ad impressions, that translates into thousands of additional clicks generated by our PA system. The ads often point to downloads or white paper offers that customers sign in to get, and thus we collect thousands more contacts and what they're interested in, which we can then (hopefully) turn into qualified leads and ultimately new customers. So we can see a definite ROI for this effort. And keep in mind this was "version 1" of the analytics, which we're continuously refining, enhancing, and developing -- we expect ongoing improvements in future results.

Actual weekly Uplift gyrated pretty wildly -- here's a summary chart:

SMILE Uplift chart

You can see general improvement over time as we improved the algorithms, steadying for the most part in the 40-80% range. In the last week, we released SMILE ads on the Sun Download Center, which had (as you can see) an interesting impact on Uplift! SDLC gets a huge volume of visitors, and most users are there to download and nothing else. We also found a large proportion of users there for whom we did not have recommendations (either because they were new or they'd deleted their Sun cookies). The result was a pretty big dip in CTR for the default ads, while we held steady on the recommended ads, thus the skyrocketing Uplift score the last week.

With the release of SMILE 2.0, we're completely changing how we do our measurements (it's a long story), so we'll be tweaking our weekly measurement system and reporting. We'll have new functionality and new measuring capabilities, and I'm looking forward to seeing the results from our newest release.

As I hope these numbers portray, we've demonstrated solid benefit to our emerging PA technology. It's a great start, but there's still a lot of upside potential remaining -- we're optimistic of delivering even more dramatic results in the future. 


Thursday Aug 06, 2009

If You Predict It, You Own It

As I've written about previously, I'm currently the Project Manager for the Sun Machine Learning Engine (SMILE) project, based on predictive analytics (PA) technology we're developing in-house. While I have a lot of experience building and managing complex web systems such as this, I haven't worked with PA technology before. I set out to learn more about it, for two reasons:

  1. Since I'm the PM for this project, it's generally a good idea to know what I'm doing and talking about!
  2. ROI is very important, both for this project and for ensuring the ongoing application of PA technology in general at Sun. This matters, of course, to Sun's management, and as you might imagine, it can't hurt to convey these benefits to our soon-to-be new owners as well. 

So, I set out to learn more about PA, what it is, and what benefits it offers. In this post, I want to share and consolidate some of my findings -- hopefully this will be helpful to others who are considering or starting similar projects.

Now, before I get much further, credit where credit is due. The title of this post, "If You Predict It, You Own It", is a tag line I like, taken directly from Eric Siegel's Prediction Impact site. I recommend this site for an intro to the subject, as it offers many helpful articles as well as resources, such as the Predictive Analytics World conferences and training programs. 

So what is PA?

That will give you a good, quick intro to PA. What about the results? What's out there we can leverage to help sell such projects within our organizations? I did some of my own research into this and was also fortunate to have assistance from Sun's Digital Libraries & Research staff in locating a few additional publications. Here are some representative quotes/stats/images that make strong "sound bites" in support of PA!

Optimizing Customer Retention Programs
by Suresh Vittal with Christine Spivey Overby and Emily Bowen, Forrester
October, 2008

  • "Marketers have long relied on analytical techniques to identify and reduce customer churn. For instance, segmentation models help marketers to better profile customers and understand behavior, while cross-sell and upsell modeling deepens relationships and creates barriers to exit."
  • "Marketers who target all types of respondents, not just the positives, risk wasting valuable resources on indifferent customers or at worst even triggering churn. This is especially critical in this climate of pressure upon marketing spend."
  • "Telenor found that by only targeting persuadables, it was able to reduce overall churn by 1.8%. A more telling statistic: These improvements were driven by only targeting 60% of the potential churners. The benefits of targeting smaller groups is clear — cost savings achieved from fewer contacts by telemarketing and lowering of customer fatigue through selective contacts." 
  • "The combination of increased retention rates and lower cost means Telenor will realize an 11-fold increase in uplift campaign ROI when compared with existing programs."
Turning Customer Interactions into Money
Peppers & Rogers Group
©2008 Carlson Marketing Worldwide.
  • "While the Internet and new technologies aren’t crystal balls, the sheer wealth of information that can be gleaned about today’s customers—and then applied toward anticipated future behaviors—is staggering. Failure to take this information into account is like leaving money on the table, or worse. You could simply hand it over to your competitors....Today’s smart companies use data, and the insight gained from it, to predict customer behavior."
  • "In one example, American Airlines used predictive analytics to better understand the relationship between various customer segments and differential flight patterns. They achieved sky-high ROI results of nearly 1,200 percent in a period of two months."
  • "IDC report studied dozens of companies and hundreds of predictive analytics projects. It found that the median ROI for the projects that incorporated predictive technologies was 145 percent, compared with a median ROI of 89 percent for those projects that employed only traditional analytics."
  • Nice summary chart from this article:
The ROI Cycle

"Mob Marketing" Webinar and Presentation
Suresh Vittal, Principal Analyst, Forrester Research
Jack Jia, CEO, Baynote
December, 2008

  • "Relevant and personalized interactions are critical for enhancing customer experience."
  • Baynote quoted the following benefits for their recommendation technology: 
    • 40% Lead Lift
    • 20% Net Revenue Lift (40% profit lift)
    • 400% Engagement Lift
    • 1000% Search Lift 

A vibrant level of commercial activity also lends credence to the power and value of PA, and here's info on some PA providers:

And finally, just last week IBM bought perhaps the "granddaddy" of enterprise PA providers, SPSS, for $1.2 billion in cash -- a very serious endorsement of the power and value of PA! 

As noted, we are taking a DIY approach here, and you might be wondering about our results so far. I'll let you digest this info first, then follow up soon with a post on how SMILE is performing...



I helped design, build, and manage download systems at Sun for many years. Recently I've focused on web eMarketing systems. Occasionally, I write about other interests, such as holography and jazz guitar. Follow me on Twitter: http://twitter.com/garyzel

