Tuesday Sep 07, 2010

Here Comes Oracle OpenWorld!

It's hard to believe that Oracle OpenWorld is almost here.  This will be my first year attending as an Oracle employee -- last year I was still at Sun. Preparations are in full swing, and we'll be doing a lot.  The Enterprise Manager team has literally dozens of sessions and demo pods, but I wanted to call your attention to some of the things we'll be doing for Oracle Enterprise Manager Ops Center and Oracle Grid Engine.

First for Grid Engine, now formally called Oracle Grid Engine, we have the following activities:

S316977: Scalable Enterprise Data Processing for the Cloud with Oracle Grid Engine
Dan Templeton (Oracle), Tom White (Cloudera)
Thursday 23-Sep-10 12:00-13:00 Moscone South Rm 310

S317230: Who's Using Your Grid? What's on Your Grid? How to Get More
Dan Templeton, Dave Teszler, Zeynep Koch
Tuesday 21-Sep-10 17:00-18:00 Moscone South Rm 305

Also, if you're a developer attending JavaOne, you can get hands-on with Grid Engine in the following labs:

S314413: Extracting Real Value from Your Data with Apache Hadoop
Dan Templeton (Oracle), Aaron Kimball (Cloudera), Michal Bachorik (Oracle)
Wednesday 22-Sep-10 12:30-14:30 Hilton San Francisco Plaza B

S314233: Scale Your Java Service into the Cloud
Dan Templeton, Michal Bachorik
The time and date for this session are still to be announced (watch this space).

Now, if you're interested in Ops Center, we have lots going on too.

First, we have two "pods" in the demo grounds area where we'll be showing Ops Center. These are both in the "west" part of Moscone Center, and you'll be able to talk live with engineers from the team there and get your questions answered in detail.

  • Booth W-107  Enterprise Manager Ops Center
  • Booth W-094  Hardware, Software...Managed 

Also, we have a bunch of sessions you can attend where we'll be covering Ops Center.  Here are some good ones:

S316976 : Mission Accomplished: Virtualization powered by Oracle Enterprise Manager
Sudip Datta and others
Monday, September 20, 2010 5:00pm  Moscone South/Rm 305

S316975 :  Oracle Enterprise Manager Ops Center for OS and Hardware Management
Steve Wilson, Mike Barrett
Tuesday, September 21, 2010 5:00pm  Moscone South/Rm 270

S317552: Managing Sun SPARC Servers with Oracle Enterprise Manager Ops Center
Gary Combs, Mike Barrett
Thursday, September 23, 2010 10:30AM Moscone South/Rm 252 

Also, we have one hands-on lab where you'll be able to walk through some of the main Ops Center features with a live instructor.

S318957 The Future of Datacenter Management: A View of an "Apps to Disk" Solution
Steve Stelting, Mike Barrett
Monday, September 20, 2010 2:00pm  Marriott Marquis/Nob Hill AB

Now, the most exciting part of Oracle OpenWorld is that we'll be showing the new, forthcoming version of Ops Center for the first time.  See below for a sneak peek.

I'm looking forward to seeing you at Oracle OpenWorld! 

Wednesday Jan 13, 2010

Grid Engine: The World's First Cloud-aware Distributed Resource Manager

Sun has just released a new version of Grid Engine.  Grid Engine is a market leading product in the Distributed Resource Management space, but this new release really brings the product to the next level.  Specifically, it brings it up into the cloud!

So, what's so exciting about this release?  There are a number of things, but I'll focus on two.  First, dynamic resource reallocation, including the ability to use on-demand resources from Amazon EC2.  Second, deep integration with Apache Hadoop -- one of the most popular workloads in the cloud today.

A new feature in Grid Engine allows you to manage resources across logical clusters (or even clouds).  This could be two collections of systems inside a corporation, or can include non-local cloud resources (such as EC2).  Why would you want to do this?  Let's look at a scenario.

Many auto companies use Grid Engine to coordinate the resources on the Grid/Cluster/Cloud they use for mechanical design and simulation.  Users across the company submit jobs (e.g., a crash simulation) and Grid Engine queues them and dispatches them based on priority and policy.  However, what happens when your submissions start to outpace the ability of your systems to keep up?  In the traditional model, you'd have to buy new hardware and add it to your Grid/Cluster/Cloud.  With this Grid Engine release, you can now configure rules that allow you to "cloud burst" these workloads out to another cloud.  With Amazon EC2 specifically, you pre-configure a set of AMIs on EC2 that have your application software and register them with Grid Engine.  You also give Grid Engine the credentials to manage your EC2 account.  Then, based on your policy, Grid Engine will:

  • Fire up new EC2 instances on demand (using your supplied AMIs)
  • Automatically set up a secure VPN network tunnel between your network and your EC2 instances
  • Join them to the Grid Engine cluster
  • Dispatch work to them
  • Take them back down once demand has subsided
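The burst decision itself boils down to simple arithmetic over the job backlog. Here's a minimal sketch of that policy logic in Python -- the function name, parameters, and the "pending jobs vs. idle slots" rule are all illustrative assumptions on my part, not Grid Engine's actual configuration interface:

```python
# Illustrative sketch of a cloud-burst sizing policy: launch EC2 instances
# only when pending jobs exceed idle local slots, capped by a budget.
# All names here are hypothetical, not part of the real Grid Engine API.

def instances_to_burst(pending_jobs, idle_slots, slots_per_instance, max_instances):
    """Return how many cloud instances to launch to absorb the backlog."""
    # Backlog is whatever the local cluster can't cover right now.
    backlog = max(pending_jobs - idle_slots, 0)
    if backlog == 0:
        return 0  # local capacity suffices; no burst needed
    # Round up: a partial instance's worth of backlog still needs a whole instance.
    needed = -(-backlog // slots_per_instance)
    # Never exceed the configured spending cap.
    return min(needed, max_instances)


# Example: 25 pending jobs, 5 idle local slots, 8 slots per instance,
# at most 10 burst instances -> 20 jobs of backlog -> 3 instances.
print(instances_to_burst(25, 5, 8, 10))
```

Once the count is known, the remaining steps from the list above (launch the AMIs, bring up the VPN tunnel, join the hosts, and later tear them down) are carried out by Grid Engine against your EC2 credentials.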

It's a great example of on-demand resource management, and it has the potential to save customers real money in avoiding over-provisioning their internal clouds.

The next thing that's really exciting is Grid Engine's new integration with Hadoop.  Hadoop is a popular open-source implementation of Map-Reduce.  Map-Reduce is the fundamental building block that powers the internal clouds at Yahoo and Google, and it's commonly used as a way to enable applications that can process huge collections of data.

While Hadoop has seen a large amount of deployment in the web space (at companies like Facebook and others), it's only starting to see adoption in the Enterprise.  This new Grid Engine release can help change that.  Grid Engine is now a key ingredient in making Hadoop enterprise-ready.  At a technical level, Hadoop applications can now be submitted to Grid Engine, just like any other kind of parallel computation job.  This means you can now more easily share a single set of physical resources between Hadoop and other traditional applications (financial risk modeling, crash simulations, weather prediction, batch processing -- you name it).  That means reduced cost to the customer.  Beyond that, Grid Engine now has a deep understanding of Hadoop's distributed file system (HDFS), which means that Grid Engine can send work to the right part of the cluster (where the data lives locally) to make it ultra-efficient -- even when sharing.  And lastly, Grid Engine has a mature usage accounting and billing feature (ARCo) built in.  That means you can now track and (internally) charge back for Hadoop jobs -- giving IT a real way to interact with the business.
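To make the data-locality idea concrete, here's a small Python sketch of how a scheduler might pick a host for a task when it knows which hosts hold replicas of the task's HDFS blocks. This is a simplified illustration under my own assumptions, not Grid Engine's actual HDFS integration:

```python
# Sketch of locality-aware dispatch: given a map from HDFS block IDs to the
# hosts holding replicas, prefer the candidate host with the most blocks local.
# Function and variable names are illustrative, not from Grid Engine.

def pick_host(task_blocks, block_locations, candidate_hosts):
    """Return the candidate host holding the most of the task's blocks locally."""
    def local_block_count(host):
        # Count how many of this task's blocks have a replica on `host`.
        return sum(1 for blk in task_blocks if host in block_locations.get(blk, ()))
    return max(candidate_hosts, key=local_block_count)


# Example: hostB holds replicas of both blocks the task needs, so it wins.
locations = {
    "blk1": {"hostA", "hostB"},
    "blk2": {"hostB"},
    "blk3": {"hostC"},
}
print(pick_host(["blk1", "blk2"], locations, ["hostA", "hostB", "hostC"]))
```

Dispatching to the host where the data already lives avoids shipping gigabytes over the network, which is exactly the efficiency win described above -- even when the cluster is shared with non-Hadoop workloads.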

There's a lot more to this release, and you can read all about it over at Dan Templeton's blog, so I won't try to go into all the details.  Suffice it to say that I'm really excited about this release.  Grid Engine is positioned to be an increasingly important part of the infrastructure for Cloud Computing going forward.


Thoughts on cloud computing, virtualization and data center management from Steve Wilson, Oracle engineering VP.

