Cameron, Meet Larry

Last Friday, Oracle announced plans to acquire Tangosol, one of the three leading distributed data cache companies (the other two being GigaSpaces and GemStone). Oracle is now preaching about XTP, eXtreme Transaction Processing, which is their name for what Gartner calls E-OLTP, Extreme Online Transaction Processing.

E-OLTP is a special case of a compute grid. In a compute grid, work is matched to workers in an attempt to maximize resource utilization. The work-worker matching is done by some kind of scheduler that is essentially trying to solve the classic bin-packing problem. The bin-packing problem is NP-complete, which means there is no known algorithm that solves it optimally in a scalable way. Translation: as the number of things to schedule and the number of places to schedule them goes up, the time required to do the scheduling optimally gets rapidly out of hand. The Grid Engine scheduler is very highly tuned to be as scalable as possible, but even it can be overwhelmed by a large enough workload.
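In practice, schedulers sidestep the NP-completeness by using fast heuristics rather than exact solutions. As a hypothetical illustration (not Grid Engine's actual algorithm), here is the classic first-fit decreasing heuristic in Java, packing jobs with given resource demands onto the fewest hosts of a fixed capacity:

```java
import java.util.*;

// First-fit decreasing: a common fast heuristic for bin packing.
// Sort jobs largest-first, then drop each into the first host with room.
// It is not optimal (optimal packing is NP-complete), but it runs quickly.
public class FirstFitDecreasing {
    // jobs: resource demand of each job; capacity: capacity of each host.
    // Returns the list of hosts, each host being the jobs placed on it.
    static List<List<Integer>> schedule(int[] jobs, int capacity) {
        Integer[] sorted = Arrays.stream(jobs).boxed()
                .sorted(Comparator.reverseOrder()).toArray(Integer[]::new);
        List<List<Integer>> hosts = new ArrayList<>();
        List<Integer> free = new ArrayList<>();   // remaining capacity per host
        for (int job : sorted) {
            int i = 0;
            while (i < hosts.size() && free.get(i) < job) i++;   // first fit
            if (i == hosts.size()) {                             // open a new host
                hosts.add(new ArrayList<>());
                free.add(capacity);
            }
            hosts.get(i).add(job);
            free.set(i, free.get(i) - job);
        }
        return hosts;
    }

    public static void main(String[] args) {
        List<List<Integer>> hosts = schedule(new int[]{4, 8, 1, 4, 2, 1}, 10);
        System.out.println(hosts.size() + " hosts: " + hosts);
        // → 2 hosts: [[8, 2], [4, 4, 1, 1]]
    }
}
```

Even this cheap heuristic has to touch every job and, in the worst case, every host per job, which is exactly the cost that becomes painful when the jobs are tiny and arrive by the thousands.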

In some segments, such as the financial sector, the workload is very large. Moreover, the size of individual pieces of work is very small, making the scheduling overhead that much more significant. This is the land of E-OLTP. Because each job is so small, in the end it really doesn't make much difference where you schedule it. In a couple hundred milliseconds, it'll be done anyway. It turns out that in such an environment it's better to forgo the scheduling part altogether and just feed whatever work comes in to whatever worker can handle it.

Enter the distributed data cache. Instead of the traditional compute grid architecture where a centralized scheduler decides where work should go, the distributed data cache fills the role of a schedulerless work queue. As work comes in, it gets put in the queue. As workers finish what they're doing, they go to the queue and grab the next piece of work. The distributed data cache makes sure that none of the data is lost and that none of the workers step on each other's toes.
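Here is a minimal single-JVM sketch of that pattern, with a `java.util.concurrent.BlockingQueue` standing in for the distributed data cache (which, in a real deployment, would additionally span machines and survive node failures). No scheduler decides placement; idle workers simply pull the next job themselves:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Schedulerless work queue: producers enqueue jobs, idle workers pull them.
// A distributed data cache plays this queue's role across a whole grid.
public class WorkQueueSketch {
    static final int POISON = -1;   // sentinel telling workers to stop

    static int run(int workers, int jobs) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        AtomicInteger done = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                try {
                    int job;
                    // take() blocks until work is available -- no scheduler,
                    // whoever is free first gets the next job
                    while ((job = queue.take()) != POISON) {
                        done.incrementAndGet();      // "process" the job here
                    }
                    queue.put(POISON);               // pass sentinel to next worker
                } catch (InterruptedException ignored) { }
            });
        }
        for (int j = 0; j < jobs; j++) queue.put(j);
        queue.put(POISON);                           // one sentinel cascades to all
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed " + run(4, 100) + " jobs");
        // → processed 100 jobs
    }
}
```

The distributed cache's contribution over this toy version is exactly the hard part: keeping the queue's contents available and consistent even when the machine holding them dies.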

There are a variety of ways to achieve a distributed data cache. GigaSpaces does it through a JavaSpaces implementation. Tangosol does it through a special set of Collections classes. The interesting thing about that solution is that the developer doesn't really need to know that he's working with a distributed data cache. To him or her, it's just another Collections object to store data in. Tangosol transparently takes care of all of the complex issues of maintaining data availability, persistence, and coherency in a data grid.
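To illustrate that transparency, here is a sketch with a local `ConcurrentHashMap` standing in for the distributed cache. The point is that code written against `java.util.Map` neither knows nor cares whether the backing store is a local map or a cache partitioned across a grid; in a real Tangosol deployment the map would simply come from the cache factory instead of a constructor:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Application logic coded against java.util.Map. Whether the Map is a
// local ConcurrentHashMap (as here) or a distributed, replicated cache
// is invisible to this code -- that's the transparency being described.
public class TransparentCache {
    // Add a trade amount to an account's running position.
    static void recordTrade(Map<String, Double> positions,
                            String account, double amount) {
        positions.merge(account, amount, Double::sum);
    }

    public static void main(String[] args) {
        Map<String, Double> positions = new ConcurrentHashMap<>(); // local stand-in
        recordTrade(positions, "ACCT-1", 250.0);
        recordTrade(positions, "ACCT-1", -100.0);
        System.out.println(positions.get("ACCT-1")); // → 150.0
    }
}
```

Swapping the `ConcurrentHashMap` for the grid-backed implementation changes one line, which is the whole appeal of the Collections-based approach.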

I spoke with Cameron Purdy, the president of Tangosol and one of the founders, at SuperComputing last year. He seemed like a really nice guy, and he really believed in the Tangosol technology. He told me that when it comes down to hard numbers, Tangosol beats the pants off GigaSpaces and GemStone. Of course, I wouldn't have expected him to say anything different. I wish him the best of luck with his new best friend, Larry.


Thanks for the mention, but the link in your post to GigaSpaces is broken. BTW, obviously we -- and the customers who evaluated us versus Tangosol and selected GigaSpaces -- would beg to differ with Cameron's comment.

Geva Perry


My Blog

Posted by Geva Perry on March 28, 2007 at 08:04 AM PDT #

Know this. The day your business needs grid computing, you are already dead.

Posted by Hakuin on March 28, 2007 at 04:13 PM PDT #

[Trackback] In a comment to a previous post , someone said that "The day your business needs grid computing, you are already dead." That's such a wonderfully absurd statement, I just had to bite. In fact, I'd say the person got it exactly backwards. ...

Posted by DanT's Grid Blog on March 29, 2007 at 03:58 AM PDT #

Hi Daniel,

Thanks for the interesting read. I would expand on your explanation a bit, since the area of "grid" that we focus on is almost entirely data-centric. So our "jobs" are actually "related to" specific data. For example, asking "accounts" or "positions" to calculate their own "risk", instead of creating a "risk calculation job" that goes and gathers various "account" and "position" data. In this manner, we can largely localize the calculations, transactions and analysis to where the data are being managed within the grid.

As far as "beating the pants off of" various competitors, I have no doubt that your memory is correct, and I would dare say that the gap has only widened. On the other hand, this is still a very competitive space, and I am glad to compete with the likes of these companies, because they give me a great reason to get out of bed every morning.

We're nowhere near "done" yet, and besides, I have a CEO title to work to regain now ;-).


Posted by Cameron Purdy on March 31, 2007 at 01:56 AM PDT #
