Tuesday Jan 18, 2011

Full Speed Ahead

Last week I had the opportunity to do a webcast with Moe Fardoost, our marketing director, on the future direction for the Oracle Grid Engine product. If you're curious about where Grid Engine is headed, take a look. For the very lazy among you, the summary is that we're focused on three major themes: core infrastructure and feature improvements, tighter integrations with other Oracle products, and a richer cloud feature set.

Thursday Dec 23, 2010

Oracle Grid Engine: Changes for a Bright Future at Oracle

For the past decade, Oracle Grid Engine has been helping thousands of customers marshal the enterprise technical computing processes at the heart of bringing their products to market. Many customers have achieved outstanding results with it through higher data center utilization and improved performance. The latest release of the product provides best-in-class resource management capabilities, including Hadoop integration, topology-aware scheduling, and on-demand connectivity to the cloud.

Oracle Grid Engine has a rich history, from helping BMW Oracle Racing prepare for the America’s Cup to helping isolate and identify the genes associated with obesity; from analyzing and predicting the world's financial markets to producing the digital effects for the popular Harry Potter series of films. Since 2001, the Grid Engine open source project has made Oracle Grid Engine functionality available for free to open source users. The Grid Engine open source community has grown from a handful of users in 2001 into the strong, self-sustaining community that it is now.

Today, we are entering a new chapter in Oracle Grid Engine’s life. Oracle has been working with key members of the open source community to pass the torch for maintaining the open source code base to the Open Grid Scheduler project hosted on SourceForge. This transition will allow the Oracle Grid Engine engineering team to focus its efforts more directly on enhancing the product. In the coming days, we will take definitive steps to roll out this transition. To ensure ongoing communication with the open source community, we will provide the following services:

  • Upon the decommissioning of the current open source site on December 31st, 2010, we will begin to transition the information on the open source project to Oracle Technology Network’s home page for Oracle Grid Engine. This site will ultimately contain the resources currently available on the open source site, as well as a wealth of additional product resources.
  • The Oracle Grid Engine engineering team will be available to answer questions and provide guidance regarding the open source project and Oracle Grid Engine via the online product forum.
  • The Open Grid Scheduler project will carry on the tradition of the Grid Engine open source project. While the Open Grid Scheduler project will remain independent of the Oracle Grid Engine product, it will have the support of the Oracle team, including making available artifacts from the original Grid Engine open source project.

Oracle is committed to enhancing Oracle Grid Engine as a commercial product and has an exciting road map planned. In addition to developing new features and functionality to continue to improve the customer experience, we also plan to release game-changing integrations with several other Oracle products, including Oracle Enterprise Manager and Oracle Coherence. Also, as Oracle's cloud strategy unfolds, we expect that the Oracle Grid Engine product's role in the overall strategy will continue to grow. To discuss our general plans for the product, we would like to invite you to join us for a live webcast on Oracle Grid Engine’s new road map. Click here to register.

Next Steps:

Thank you to everyone in the community for their support over the last decade and their continued support going forward!

Wednesday Oct 06, 2010

SWWM Seeks SWISV

I've said it before: being adopted into the Oracle family has been a great thing for the Oracle Grid Engine product. One of the many reasons is that we get to take advantage of the amazing partner program that Oracle has, the Oracle Partner Network.

Over the years, a number of companies have built products that include, build on, or use either the Grid Engine product or the Grid Engine open source project. While we were Sun, there really was little that we could offer these companies in terms of useful partnership opportunities. Now that we're Oracle, there are actually several very active, very interesting programs available for partners. If your company is working with Grid Engine, and you'd like to investigate a closer relationship with Oracle, there's never been a better time!

Here's just a quick overview of some of the programs Oracle has to offer:

  • Oracle Validated Integration -- I love this program. It's a way to have Oracle certify and swear to the fact that your product is validated on Grid Engine and that the combination works as designed. It gives your customers an extra boost of confidence in your product, and it gets your product listed on the OVI partner solutions page. (Note that the program information says it's only for a limited set of Oracle products. Since Grid Engine is now under the Oracle Enterprise Manager product family, we do indeed qualify.)
  • Application-Specific Full Use & Embedded licensing -- We now have the ability to negotiate OEM contracts to include or embed Grid Engine in your product. It was possible before, but now it's actually a normal thing to do. There's even a standard program and process for it, including some very nice discounts. You can find out more about the program on page 54 of the Software Investment Guide.
  • Oracle Partner Network -- The OPN is your one-stop shop for hitching your wagon to the Oracle engine. With multiple levels and a huge number of benefits, the OPN is a great way to develop a closer relationship with Oracle.
  • OPN Specialization for Cloud computing and SaaS -- OPN has this concept of partner specializations. It's a way for you to distinguish yourself by demonstrating your deeper knowledge in specific areas. There's now a specialization for the cloud and SaaS.

If any of these programs sound interesting, you know where to find me. You can also send a Tweet or DM to my partner partner, Susan Wu, susanwu88 on Twitter.

(Don't worry. I'll get back to blogging geeky things again soon.)

Hadoop Lab Now Available

I was really surprised at the turn-out for JavaOne this year. Judging by the packed halls and empty goodie carts, I think the conference organizers were a little surprised as well. Excellent! Well done.

As you may have noticed, I always seem to have my fingers in the JavaOne hands-on labs pie. This year my contribution was to bring Cloudera into the fold to run a Hadoop lab. Needless to say, that generated a lot of interest. Well before the conference, the slot we had for the lab was booked solid. Taking that as a sign, I had the conference organizers give us a second slot on Monday for the lab. That slot was also booked solid before the conference even began. Unfortunately, however, that Monday lab slot ended up getting canceled for <INSERT OFFICIAL REASON HERE>. As a concession to the folks who didn't get to attend because of the cancellation, I got the conference organizers to give me permission to have Cloudera host the lab materials from their site before it's available from the official Oracle JavaOne site.

You can find the semi-official JavaOne Hands-on Lab S314413: Extracting Real Value from Your Data With Apache Hadoop here under the training section of the Cloudera site. The file is not yet linked from anywhere but here, but they're working on it.

The zip file contains a lab workbook. At the back of the workbook, you will find an appendix that describes how to set up your own lab environment. The lab was written for Solaris 11 Express and NetBeans, but the OS and IDE really play little role in the lab. If you refuse to see the light and accept Solaris as the one true OS, you can still do the lab on some other OS with some other IDE (but it won't be as satisfying).

The lab did run in its originally assigned slot at JavaOne, and it was really successful. Turnout was good and the comments were great! I've already incorporated lots of great feedback from that session into the lab materials that Cloudera is now hosting, but I'm always happy to hear any additional comments and/or feedback. Happy coding!

Tuesday Sep 21, 2010

A Quick Update From the Experts at Oracle OpenWorld

Just wanted to point out this interview that came out yesterday. The summary is: really, honestly, really, Grid Engine is alive and well and has a bright future in front of it. The rumors of Grid Engine's death have been greatly exaggerated.

Wednesday Sep 15, 2010

Grid Engine at Oracle Open World

In case any of you will be visiting Oracle Open World next week, be sure to come check out my sessions. I have two OpenWorld sessions and one JavaOne hands-on lab. (The lab isn't actually directly related to Grid Engine, but there's a tie-in via our Hadoop support.)

S316977: Scalable Enterprise Data Processing for the Cloud with Oracle Grid Engine
Dan Templeton (Oracle), Tom White (Cloudera)
Thursday 23-Sep-10 12:00-13:00 Moscone South Rm 310
S317230: Who's Using Your Grid? What's on Your Grid? How to Get More
Dan Templeton, Dave Teszler, Zeynep Koch
Tuesday 21-Sep-10 17:00-18:00 Moscone South Rm 305
S314413: Extracting Real Value from Your Data with Apache Hadoop
Dan Templeton (Oracle), Sarah Sproehnle (Cloudera), Michal Bachorik (Oracle)
Wednesday 22-Sep-10 12:30-14:30 Hilton San Francisco Plaza B

Melinda McDade's talk will also have some Grid Engine content:

S318115: High-Performance Computing for the Oil and Gas Industry
Dan Hough, Melinda McDade
Wednesday, 22-Sep-10 10:00-11:00 InterContinental San Francisco Telegraph Hill

Wednesday Feb 03, 2010

Self Control

Good day, and welcome to week four of my continuing attempt to cover all the features added in the latest release (6.2u5) of Sun Grid Engine. This week we'll talk about array task throttling.

Sun Grid Engine supports four classes of jobs. Interactive jobs are the equivalent of doing an rsh/rlogin/ssh to a node in the cluster, except that the connection is managed by Sun Grid Engine. Batch jobs are your traditional "go run this somewhere" type of job. They represent a single instance of an executable. Parallel jobs consist of multiple processes working in collaboration. All of the processes need to be scheduled and running at the same time in order for the job to run. Parametric or array jobs are like what you see in Apache Hadoop, where multiple copies of the same executable are run across multiple nodes against different parts of the data set. The important characteristic that distinguishes array jobs from parallel jobs is that the tasks of an array job are completely independent from each other and hence do not need to all be running together.
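
To make the four classes concrete, here is roughly what submitting one of each looks like (a sketch, not from a real cluster; the script names are placeholders, and the -pe example assumes a parallel environment named mpi has been configured):

% qrsh hostname                  # interactive job: run a command in a managed session
% qsub run_model.sh              # batch job: run this script somewhere in the cluster
% qsub -pe mpi 16 run_mpi.sh     # parallel job: all 16 slots scheduled at the same time
% qsub -t 1-1000 run_chunk.sh    # array job: 1,000 independent tasks of the same script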

The way that Sun Grid Engine processes array jobs is particularly efficient. In fact, a common trick to improve cluster throughput is to bundle many batch jobs together to be submitted as a single array job. Because array jobs are so efficient, users use lots of them, sometimes with huge task counts. There is no explicit limit on the number of tasks that an array job can contain. Hundreds of thousands of tasks in a single array job are not uncommon.

There is a problem, however. From the Sun Grid Engine scheduler's perspective, all of the tasks of an array job are equal. That means that if the highest priority job waiting to execute is an array job, then all of that job's tasks are higher priority than any other job (or task) waiting to run. If that job has a million tasks, then the cluster is going to have to process all million of those tasks before anything else will be executed. Now, the policies do come into play here, and if a higher priority job is submitted or if the array job loses priority through some policy (like the fair share policy), then it and its remaining tasks will fall back in the execution order. Nonetheless, this approach makes it possible for a user to unintentionally execute a denial of service attack on the cluster.

For quite some time there has been an option that an administrator can configure to set a limit on the maximum number of tasks that can be simultaneously executed from a single array job (max_aj_instances in sge_conf(5)). That solves the problem, but only in a very general and somewhat suboptimal way. As with any such global setting, the administrator has to make a trade-off between having a limit that works well for the majority and having a limit that doesn't unduly restrict certain users. (The default is 2000 tasks per array job.) Well, it turns out that given the opportunity, most users will willingly set such a limit themselves, both to avoid being bonked on the head by the administrator for abusing the cluster and out of self interest, such as allowing several of their array jobs to share cluster time rather than being processed sequentially. So, with 6.2u5, we've given users exactly that ability.
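
For reference, the global limit lives in the cluster configuration, so an administrator can inspect or change it with qconf (a sketch; the value shown is the documented default):

% qconf -sconf | grep max_aj_instances
max_aj_instances             2000
% qconf -mconf     # opens the global configuration in an editor to change the limit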

Let's look at an example:

% qsub -t 1-100000 myjob.sh

will submit an array job that will run the myjob.sh script one hundred thousand times. Each time it runs, an environment variable ($SGE_TASK_ID) will be set to tell that instance which task number it is. The myjob.sh script must be able to translate that task ID into a pointer to its portion of the data set. In a cluster with default settings, up to 2000 of the tasks of this job will be allowed to be running at a time. If the cluster only has 2000 slots, that could be a bad thing.
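
For illustration, a minimal myjob.sh might look like the following, assuming the data set has been pre-split into numbered chunk files (the paths and the process command are hypothetical):

#!/bin/sh
#$ -S /bin/sh
# Sun Grid Engine sets $SGE_TASK_ID for each task of an array job.
INPUT=/data/chunks/chunk.$SGE_TASK_ID
OUTPUT=/data/results/result.$SGE_TASK_ID
./process "$INPUT" > "$OUTPUT"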

% qsub -t 1-100000 -tc 20 myjob.sh

submits the same job, except that it places a limit of 20 on the number of tasks allowed to be running simultaneously. In our fictitious 2000-slot cluster, that's quite a neighborly thing to do. If you try to set the limit above the global limit set by the administrator, the global limit prevails.

While this feature is pretty simple, it can make a large difference in job throughput for some clusters. I know one customer in particular that went way out of their way to implement this feature themselves using clever configuration tricks. The massive headache of hacking together a solution was worth it to them to be able to set per-job task limits.

Thursday Jan 28, 2010

Better Preemption

Continuing with the new feature theme, this week we're talking about slotwise subordination (AKA slotwise preemption). Preemption is the notion that a higher priority job can bump a lower priority job out of the way so it can execute. Pretty simple notion. Some workload managers have an implicit concept of preemption. Sun Grid Engine does not. We have what I like to call "after-market preemption". The net result is the same. In a workload manager with "built-in" preemption, like Platform LSF, the scheduler temporarily relaxes the slot count limit on a node and then resolves the oversubscription by bumping the lowest job on the totem pole until the number of jobs is back under the slot count limit. In Sun Grid Engine, the same thing happens, except that instead of the scheduler temporarily relaxing the slot count limits, you as the administrator configure the host with more slots than you need, plus a set of rules that create an artificial lower limit on the job count, enforced by bumping the lowest priority jobs. It nets out to the same thing. With Sun Grid Engine you have a little more control over the process, but you pay for it with some added complexity.

That set of rules that defines the artificial limit is called subordination. By subordinating one queue to another, you tell the master that jobs running in the subordinated queue are lower priority and should be preempted when necessary. Specifically, all jobs in the subordinated queue are suspended when a threshold number of jobs (usually 1) are scheduled into the queue to which it is subordinated.

Queue subordination in Sun Grid Engine was implemented long ago, when single-socket, single-core machines still roamed the Earth. Back in those days, there was generally only one job running per host, so the queuewise subordination scheme worked out just fine. Now that we're in the era of multi-core machines, suspending the entire subordinate queue tends to be a bad idea. Enter slotwise preemption. In a nutshell, slotwise preemption lets you set a specific limit on the number of jobs allowed to be running on a host, regardless of how many queues and slots there are. If too many jobs land on the host, jobs in the lowest ranking queue(s) will be suspended until the number of running jobs is under the limit.

(Note that slotwise subordination deals only with the running job count. If you want to limit the active job count (running + suspended), you can do that by making the slots complex a host-level resource and setting it to the desired limit.)
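
As a sketch of that workaround, you would edit the execution host's configuration and set the slots complex there (node01 and the limit of 3 are placeholders):

% qconf -me node01              # opens the execution host configuration in an editor
complex_values        slots=3   # caps running + suspended jobs on node01 at 3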

Let's look at some examples from the queue_conf(5) man page:

Assume we have a cluster of dual-core machines and two queues that span all the machines, A.q and B.q, each with two slots:

% qconf -sq A.q | grep subordinate_list
subordinate_list      slots=2(B.q:0:sr)
% qconf -sq B.q | grep subordinate_list
subordinate_list      NONE

This configuration says that there are four slots available on each host (2 in each queue), but that only 2 jobs may be running on any host at any given time. If more than 2 jobs end up on a node, it will result in the excess jobs being suspended. Because B.q is subordinated to A.q, the excess jobs will always come from B.q.

Let's talk about the difference between queue-wise and slot-wise suspension for this example. With queue-wise suspension, you'd have two choices: either a single job in A.q would suspend all jobs in B.q, or two jobs in A.q would suspend all jobs in B.q. The choice is either undersubscribing (with one running job in A.q and two suspended jobs in B.q) or oversubscribing (with one running job in A.q and two running jobs in B.q). With slot-wise suspension, a job running in A.q will only suspend a job running in B.q if there are more than two running jobs on the host. We will therefore never oversubscribe, and we'll never undersubscribe as long as there's a job available to run.

Let's look at a more complex example:

% qconf -sq A.q | grep subordinate_list
subordinate_list      slots=2(B.q:1:sr,C.q:2:lr)
% qconf -sq B.q | grep subordinate_list
subordinate_list      NONE
% qconf -sq C.q | grep subordinate_list
subordinate_list      NONE

We've added a third queue, and we now have a very simple tree. Both B.q and C.q are subordinated to A.q, but there are still only 2 slots available for running jobs. If a host is scheduled with more than two running jobs, jobs will be suspended until we get down to two, just like before. What's different is that there's now a pecking order for the subordinated queues. Because B.q has a lower sequence number (1) than C.q (2), it is higher priority. That means we'll suspend jobs from C.q first, before suspending jobs from B.q. What's also different is how we pick the job to suspend. In B.q in both examples, the action is listed as "sr", which means to suspend the shortest running job. In C.q in this example, the action is "lr", which means to suspend the longest running job.

One more example:

% qconf -sq A.q | grep subordinate_list
subordinate_list      slots=3(B.q:0:sr)
% qconf -sq B.q | grep subordinate_list
subordinate_list      slots=2(C.q:0:sr)
% qconf -sq C.q | grep subordinate_list
subordinate_list      NONE

Now we have a tree with more than two levels: C.q is subordinated to B.q, which is subordinated to A.q. Between B.q and C.q, up to two jobs are allowed to be running, with B.q's jobs taking priority. Among A.q, B.q, and C.q, up to three jobs are allowed to be running, with A.q's jobs taking priority over B.q's jobs, and B.q's jobs taking priority over C.q's jobs. Now look carefully. Where did I specify that C.q should be subordinated to A.q? I didn't. It's implicit. Whenever you have a multi-level subordination tree, a node has its entire subtree subordinated to it, whether that's explicitly specified or not, with priority between nodes handled according to depth in the tree and priority within levels handled according to sequence numbers. Because of this implicit subordination, it never makes sense to have a higher slot limit lower down in the tree. The higher-level lower slot limit will always take precedence.

Hopefully slotwise subordination now makes sense, and you can see why it's important. Basically it brings Sun Grid Engine's preemption capabilities up to date with modern hardware, making it more efficient and more useful.

There is, however, one notable caveat I have to point out. With queue-wise suspension, when a subordinated queue has its jobs suspended, the queue itself is also suspended, preventing any other jobs from landing in that queue. That's not the case with slotwise subordination. It's possible for the scheduler to place a job into a subordinated queue where that job will immediately be suspended. Imagine in our first example above that A.q has two running jobs in it while B.q is empty. B.q remains a valid scheduling target, and any job that lands there will immediately be suspended because it violates the slotwise limit. The workaround is to use job load adjustments to make sure that hosts with running jobs are appropriately unattractive scheduling targets. Not a show-stopper, but definitely important to be aware of. We will address the issue in the next couple of releases.
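
The load adjustment knobs live in the scheduler configuration (a sketch showing the documented defaults; the idea is to tune them so a host that just received a job looks busy long enough for subsequent scheduling runs to steer around it):

% qconf -msconf                  # opens the scheduler configuration in an editor
job_load_adjustments          np_load_avg=0.50
load_adjustment_decay_time    0:7:30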
