Wednesday Jan 06, 2010

Welcome Sun Grid Engine 6.2 update 5

The Sun Grid Engine 6.2 update 5 release is now available. Don't let the unassuming version number fool you; there's quite a few interesting features packed into this release. Let's talk about them, shall we?

Integration with Apache Hadoop

SGE 6.2u5 gets to claim the title of first workload manager with direct support for Apache Hadoop applications. What does that mean? First, it means that you can submit Hadoop applications to an SGE cluster just like you would any other parallel job. The cluster will take care of setting up the Hadoop jobtracker and tasktrackers for you. Second, it means that the SGE scheduler knows about the HDFS data locality such that it can route Hadoop jobs to nodes where the jobs' data already lives. The net result is that you can now realistically consolidate your Hadoop cluster into your SGE cluster, saving you time, money, and lots of headaches. See the docs for more info. [Also see my next post.]

Topology-aware Scheduling

Many applications benefit greatly by being tied to specific CPU sockets and/or cores. For example, some cache-hungry applications will execute in half the time if run on four cores on different sockets versus running on four cores in the same socket. With SGE 6.2u5, we've added the ability to specify these topology preferences when submitting your jobs. Whenever possible, the scheduler will honor the topology preferences when assigning jobs to nodes. For topology-sensitive applications and clusters with lots of Nehalem boxes, SGE 6.2u5 can speed up application execution considerably. See the docs for more info. [Also see my follow-up post.]
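As a sketch of what that looks like at submission time, assuming the new -binding switch (the job script name and the amounts are illustrative; see the 6.2u5 qsub docs for the authoritative forms):

```shell
# Bind the job to four successive cores (keeps the job compact on one chip):
qsub -binding linear:4

# Or take four cores with a stride of two, spreading the job out so its
# processes don't all compete for the same cache:
qsub -binding striding:4:2
```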

Slotwise Subordination

The SGE preemption model is what I call "after-market preemption," meaning that it's not an inherent aspect of every cluster. You have to take preemption (AKA subordination) into account when designing your cluster layout. Prior to SGE 6.2u5, the preemption model was rather coarse-grained. SGE could only suspend an entire queue instance at a time, meaning that one high-priority job might be suspending two or four or sixteen or more lower-priority jobs. With SGE 6.2u5, we're introducing finer-grained preemption. Now, rather than declaring that Queue A is subordinated to Queue B, you can say that between Queues A and B there shouldn't be more than 4 jobs running, and given a conflict, Queue B wins. This new finer-grained preemption model means that you can now use subordination without paying for it with utilization. See the docs for more info. [Also see my follow-up post.]

User-controlled Array Task Throttling

One of the unique things about Sun Grid Engine is that it handles array jobs extremely efficiently. In many cases users will consolidate individual batch jobs into array jobs to take advantage of that fact. The downside is that all tasks within an array job are considered equal with regard to scheduling policies. If an array job is the highest priority job in the system, all of its tasks are also higher priority than any other jobs. If that array job has ten thousand tasks (something not uncommon or really even all that stressful for SGE), then all ten thousand tasks will be run before any other jobs (unless another job later becomes higher priority), at least by default. An administrator can configure a global limit on the number of tasks from a single array job that are allowed to execute at a time. That's better than nothing, but global policies always leave something to be desired.

With SGE 6.2u5, we've introduced the ability for a user to apply self-imposed limits to his individual array jobs. Why would a user voluntarily set limits? In most cases it turns out that users want to do the right thing and will gladly do so given the chance. Self-imposed limits help the cluster run more smoothly, meaning that everyone gets what they want faster, and no one gets bonked on the head by the administrator. Additionally, if a user has more than one large array job pending, setting self-imposed limits allows them all to make progress instead of completing them serially. For more than one customer I know about, this feature alone will be reason enough to upgrade. [See my follow-up post for more info.]
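For example, using the new -tc submission switch (the script name here is made up for illustration), a user can submit a huge array job but voluntarily cap how many of its tasks run at once:

```shell
# 10,000 tasks, but never more than 50 of them running at the same time
qsub -t 1-10000 -tc 50
```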

Extended SGE Inspect

SGE Inspect, the new UI introduced in SGE 6.2u3, was previously only a monitoring tool. With SGE 6.2u5, we've added the ability to manage parallel environments. Going forward we will continue adding management functionality. See the docs for more info.

Improved Cloud Connectivity

With SGE 6.2u3, we added the ability through the Service Domain Manager component to automatically provision additional cluster nodes from Amazon EC2 during peak periods. With SGE 6.2u5, we've expanded that functionality a bit and made it easier to use. See the docs for more info.

Improved Power Management

Same story as the cloud connectivity, really. We introduced the ability to automatically power down idle or underused nodes in SGE 6.2u3 through the Service Domain Manager component. With SGE 6.2u5, we've fleshed it out a bit more and made it easier to use.


Over the next couple of weeks I'll try to write some posts about these features individually. If you're already Grid Engine savvy, go grab a copy and get started. If you need more info, try starting with the beginner's guide.

Monday Nov 30, 2009

Sun Grid Engine for Dummies

I've recently been asked for a really introductory doc on Sun Grid Engine, and I was dismayed to realize that there really isn't anything like that out there. Even the Beginner's Guide I wrote has some fairly high expectations of the reader's experience level. So, this post will be my attempt at a truly introductory introduction to Sun Grid Engine.

Let's Begin at the Beginning

Servers tend to be used for one of two purposes: running services or processing workloads. Services tend to be long-running and don't tend to move around much. Workloads, however, such as running calculations, are usually done in a more "on demand" fashion. When a user needs something, he tells the server, and the server does it. When it's done, it's done. For the most part it doesn't matter on which particular machine the calculations are run. All that matters is that the user can get the results. This kind of work is often called batch, offline, or non-interactive work. Sometimes batch work is called a job. Typical jobs include processing of accounting files, rendering images or movies, running simulations, processing input data, modeling chemical or mechanical interactions, and data mining. Many organizations have hundreds, thousands, or even tens of thousands of machines devoted to running jobs.

Now, the interesting thing about jobs is that (for the most part) if you can run one job on one machine, you can run 10 jobs on 10 machines or 100 jobs on 100 machines. In fact, with today's multi-core chips, it's often the case that you can run 4, 8, or even 16 jobs on a single machine. Obviously, the more jobs you can run in parallel, the faster you can get your work done. If one job takes 10 minutes on one machine, 100 jobs still only take ten minutes when run on 100 machines. That's much better than 1000 minutes to run those 100 jobs on a single machine. But there's a problem. It's easy for one person to run one job on one machine. It's still pretty easy to run 10 jobs on 10 machines. Running 1600 jobs on 100 machines is a tremendous amount of work. Now imagine that you have 1000 machines and 100 users all trying to run 1600 jobs each. Chaos and unhappiness would ensue.

To solve the problem of organizing a large number of jobs on a set of machines, distributed resource managers (DRMs) were created. (A DRM is also sometimes called a workload manager. I will stick with the term DRM.) The role of a DRM is to take a list of jobs to be executed and distribute them across the available machines. The DRM makes life easier for the users because they don't have to track all their jobs themselves, and it makes life easier for the administrators because they don't have to manage users' use of the machines directly. It's also better for the organization in general because a DRM will usually do a much better job of keeping the machines busy than users would on their own, resulting in much higher utilization of the machines. Higher utilization effectively means more compute power from the same set of machines, which makes everyone happy.

Here's a bit more terminology, just to make sure we're all on the same page. A cluster is a group of machines cooperating to do some work. A DRM and the machines it manages compose a cluster. A cluster is also often called a grid. There has historically been some debate about what exactly a grid is, but for most purposes grid can be used interchangeably with cluster. Cloud computing is a hot topic that builds on concepts from grid/cluster computing. One of the defining characteristics of a cloud is the ability to "pay as you go." Sun Grid Engine offers an accounting module that can track and report on fine grained usage of the system. Beyond that, Sun Grid Engine now offers deep integration to other technologies commonly being used in the cloud, such as Apache Hadoop.

How Does It Work?

A Sun Grid Engine cluster is composed of execution machines, a master machine, and zero or more shadow master machines. The execution machines all run copies of the Sun Grid Engine execution daemon. The master machine runs the Sun Grid Engine qmaster daemon. The shadow master machines run the Sun Grid Engine shadow daemon. In the event that the master machine fails, the shadow daemon on one of the shadow master machines will become the new master machine. The qmaster daemon is the heart of the cluster, and without it no jobs can be submitted or scheduled. The execution daemons are the workhorses of the cluster. Whenever a job is run, it's run by one of the execution daemons.

To submit a job to the cluster, a user uses one of the submission commands, such as qsub. Jobs can also be submitted from the graphical user interface, qmon, but the command-line tools are far more commonly used. In the job submission command, the user includes all of the important information about the job, like what it should actually run, what kind of execution machine it needs, how much memory it will consume, how long it will run, etc. All of that information is then used by the qmaster to schedule and manage the job as it goes from pending to running to finished. For example, a qsub submission might look like: qsub -wd /home/dant/blast -i /home/dant/seq.tbl -l mem_free=4G ddbdb. This job searches for DNA sequences from the input file /home/dant/seq.tbl in the ddbdb sequence database. It requests that it be run in the /home/dant/blast directory, that the /home/dant/seq.tbl file be piped to the job's standard input, and that it run on a machine that has at least 4GB of free memory.

Once a job has been submitted, it enters the pending state. On the next scheduling run, the qmaster will rank the job in importance versus the other pending jobs. The relative importance of a job is largely determined by the configured scheduling policies. Once the jobs have been ranked by importance, the most important jobs will be scheduled to available job slots. A slot is the capacity to run a job. Generally, the number of slots on an execution machine is set to equal the number of CPU cores the machine has; each core can run one job and hence represents one slot. Every available slot is filled with a pending job, if one is available. If a job requires a resource or a slot on a certain type of machine that isn't currently available, that job will be skipped over during that scheduling run.

Once the job has been scheduled to an execution machine, it is sent to the execution daemon on that machine to be run. The execution daemon executes the command specified by the job, and the job enters the running state. Once the job is running, it is allowed to continue running until it completes, fails, is terminated, or is requeued (in which case we start over again). Along the way the job may be suspended, resumed, and/or checkpointed any number of times. (Sun Grid Engine does not handle checkpointing itself. Instead, Sun Grid Engine will trigger whatever checkpointing mechanism is available to a job, if any is available.)

After a job has completed or failed, the execution daemon cleans up after it and notifies the qmaster. The qmaster records the job's information in the accounting logs and drops the job from its list of active jobs. If the submission client was synchronous, the qmaster will notify the client that the job ended. Information about completed jobs is available through the qacct command-line tool or the Accounting and Reporting Console's web interface.
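For example, once a finished job's ID is known (1234 here is a made-up job ID), its accounting record can be pulled back up with qacct:

```shell
# Print the accounting record (wallclock time, memory usage, exit status,
# and so on) for finished job 1234
qacct -j 1234
```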

In addition to traditional style batch jobs, as in the BLAST example above, Sun Grid Engine can also manage interactive jobs, parallel jobs, and array jobs. An interactive job is like logging into a remote machine, except that Sun Grid Engine decides to which machine to connect the user. While the user is logged in, Sun Grid Engine is monitoring what the user is doing for the accounting logs. A parallel job is a distributed job that runs across multiple nodes. Typically a parallel job relies on a parallel environment, like MPI, to manage its inter-process communication. An array job is similar to a parallel job except that its processes don't communicate; they're all independent. Rendering an image is a classic array job example. The main difference between a parallel job and an array job is that a parallel job needs to have all of its processes running at the same time, whereas an array job doesn't; it could be run serially and would still work just fine.
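To make the array job idea concrete, here's a minimal sketch of a task script (the script name is hypothetical, and the actual rendering command is omitted). Each task finds its own index in the SGE_TASK_ID environment variable that Sun Grid Engine sets for it:

```shell
#!/bin/sh
# -- one task of an array job, submitted with, e.g.:
#   qsub -t 1-100
# SGE sets SGE_TASK_ID to this task's index; fall back to 1 outside SGE.
FRAME=${SGE_TASK_ID:-1}
echo "rendering frame $FRAME"
```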

What's So Special About Sun Grid Engine?

If any old DRM (and there are quite a few out there) solves the problem, why should you be particularly interested in Sun Grid Engine? Well, there are a few reasons. My top reasons (in no particular order) why Sun Grid Engine is so great are:

  • Scalability — Sun Grid Engine is a highly scalable DRM system. We have customers running clusters with thousands of machines, tens of thousands of CPU cores, and/or processing tens of millions of jobs per month.
  • Flexibility — Sun Grid Engine makes it possible to customize the system to exactly fit your needs.
  • Advanced scheduler — Sun Grid Engine does more than just spread jobs evenly around a group of machines. The Sun Grid Engine qmaster supports a variety of policies to fine-tune how jobs are distributed to the machines. Using the scheduling policies, you can configure Sun Grid Engine to make its scheduling decisions match your organization's business rules.
  • Reliability — Something that I hear regularly from customers is that Sun Grid Engine just works and that it keeps working. After the initial configuration, Sun Grid Engine takes very little effort to maintain.

The Sun Grid Engine software has a long list of features that make it a powerful, flexible, scalable, and ultimately useful DRM system. With both open source and supported product options, Sun Grid Engine offers a very low barrier to entry and enterprise class functionality and support.

Typical Use Cases

One of the easiest ways to understand Sun Grid Engine is to see it in action. To that end, let's look at some typical use cases.

  • Mentor Graphics, a leading EDA software vendor, uses the Sun Grid Engine software to manage its regression tests. To test their software, they submit the tests as thousands of jobs to be run on the cluster. Sun Grid Engine makes sure that every machine is busy running tests. When a machine completes a test run, Sun Grid Engine assigns it another, until all of the tests are completed.

    In addition to using Sun Grid Engine to manage the physical machines, they also use Sun Grid Engine to manage their software licenses. When a test needs a software license to run, that need is reflected in the job submission. Sun Grid Engine makes sure that no more licenses are used than are available.

    This customer has a diverse set of machines, including Solaris, Linux, and Windows. In a single cluster they process over 25 million jobs per month. That's roughly 10 jobs per second, 24/7. (In reality, their workload is bursty. At some times they may see more than 100 jobs per second, and at other times they may see less than 1.)

  • Complete Genomics is using Grid Engine to manage the computations needed to do sequencing of the human genome. Their sequencing instruments are like self-contained robotic laboratories and require a tremendous amount of computing power and storage. Using Grid Engine as the driver for their computations, this customer intends to transform the way disease is studied, diagnosed and treated by enabling cost-effective comparisons of genomes from thousands of individuals. They currently have a moderate sized cluster, with a couple hundred machines, but they intend to grow that cluster by more than an order of magnitude.

  • Rising Sun Pictures uses Grid Engine to orchestrate its video rendering process to create digital effects for blockbuster films. Each step in the rendering process is a job with a task for every frame. Sun Grid Engine's workflow management abilities make sure that the rendering steps are performed in order for every frame as efficiently as possible.

  • A leading mobile phone manufacturer runs a Sun Grid Engine cluster to manage their product simulations. For example, they run drop test simulations with new phone designs using the Sun Grid Engine cluster to improve the reliability of their phones. They also run simulations of new electronics designs through the Sun Grid Engine cluster.

  • D.E. Shaw is using Sun Grid Engine to manage their financial calculations, including risk determination and market prediction. This company's core business runs through their Sun Grid Engine cluster, so it has to just work. The IT team managing the cluster offers their users a 99% availability SLA.

    Also, this company uses many custom-developed financial applications. The configurability of the Sun Grid Engine software has allowed them to integrate their applications into the cluster with little or no modifications.

  • Another Wall Street financial firm is using a Sun Grid Engine cluster to replace their home-grown workload manager. Their workload manager is written in Perl and was sufficient for a time. They have, however, now outgrown it and need a more scalable and robust solution. Unfortunately, all of their in-house applications are written to use their home-grown workload manager. Fortunately, Sun Grid Engine offers a standardized API called DRMAA that is available in Perl (as well as C, Python, Ruby, and the Java™ platform). Through the Perl binding of DRMAA, this customer was able to slide the Sun Grid Engine software underneath their home-grown workload manager. The net result is that the applications did not need to be modified to let the Sun Grid Engine cluster take over managing their jobs.

  • The Texas Advanced Computing Center at the University of Texas is #9 on the November 2009 Top500 list and uses Sun Grid Engine to manage their 63,000-core cluster. With a single master managing roughly 4000 machines and over 3000 users working on over 1000 projects spread throughout 48 of the 50 US states, the TACC cluster weighs in as the largest (known) Sun Grid Engine cluster in production. Even though the cluster offers a tremendous amount of compute power to the users of the Teragrid research network (579 TeraFLOPS to be exact), the users and Sun Grid Engine master manage to keep the machines in the cluster at 99% utilization.

    The TACC cluster is used by researchers around the country to run simulations and calculations for a variety of fields of study. One noteworthy group of users has run a 60,000-core parallel job on the Sun Grid Engine cluster to do real-time face recognition in streaming video feeds.

Atypical Use Cases

One of the best ways to show Sun Grid Engine's flexibility is to take a look at some unusual use cases. These are by no means exhaustive, but they should serve to give you an idea of what can be done with the Sun Grid Engine software.

  • A large automotive manufacturer uses their Sun Grid Engine cluster in an interesting way. In addition to using it to process traditional batch jobs, they also use it to manage services. Service instances are submitted to the cluster as jobs. When additional service instances are needed, more jobs are submitted. When too many are running for the current workload, some of the service instances are stopped. The Sun Grid Engine cluster makes sure that the service instances are assigned to the most appropriate machines at the time.

  • One of the more interesting configuration techniques for Sun Grid Engine is called a transfer queue. A transfer queue is a queue that, instead of processing jobs itself, actually forwards the jobs on to another service, such as another Sun Grid Engine cluster or some other service. Because the Sun Grid Engine software allows you to configure how every aspect of a job's life cycle is managed, the behavior around starting, stopping, suspending, and resuming a job can be altered arbitrarily, such as by sending jobs off to another service to process. More information about transfer queues can be found on the open source web site.

  • A Sun Grid Engine cluster is great for traditional batch and parallel applications, but how can one use it with an application server cluster? There are actually two answers, and both have been prototyped as proofs of concept.

    The first approach is to submit the application server instances as jobs to the Sun Grid Engine cluster. The Sun Grid Engine cluster can be configured to handle updating the load balancer automatically as part of the process of starting the application server instance. The Sun Grid Engine cluster can also be configured to monitor the application server cluster for key performance indicators (KPIs), and it can even respond to changes in the KPIs by starting additional or stopping extra application server instances.

    The second approach is to use the Sun Grid Engine cluster to do work on behalf of the application server cluster. If the applications being hosted by the application servers need to execute longer-running calculations, those calculations can be sent to the Sun Grid Engine cluster, reducing the load on the application servers. Because of the overhead associated with submitting, scheduling, and launching a job, this technique is best applied to workloads that take at least several seconds to run. This technique is also applicable beyond just application servers, such as with SunRay Virtual Desktop Infrastructure.

  • A research group at a Canadian university uses Sun Grid Engine in conjunction with Cobbler to do automated machine profile management. Cobbler allows a machine to be rapidly reprovisioned to a pre-configured profile. By integrating Cobbler into their Sun Grid Engine cluster, they are able to have Sun Grid Engine reprovision machines on demand to meet the needs of pending jobs. If a pending job needs a machine profile that isn't currently available, Sun Grid Engine will pick one of the available machines and use Cobbler to reprovision it into the desired profile.

    A similar effect can be achieved through virtual machines. Because Sun Grid Engine allows jobs' life cycles to be flexibly managed, a queue could be configured that starts all jobs in virtual machines. Aside from always having the right OS profile available, jobs started in virtual machines are easy to checkpoint and migrate.

  • With the 6.2 update 5 release of the Sun Grid Engine software, Sun Grid Engine can manage Apache Hadoop workloads. In order to do that effectively, the qmaster must be aware of data locality in the Hadoop HDFS. The same principle can be applied to other data repository types such that the Sun Grid Engine cluster can direct jobs (or even data disguised as a job) to the machine that is closest (in network terms) to the appropriate repository.

  • One of the strong points of the Sun Grid Engine software is the flexible resource model. In a typical cluster, jobs are scheduled against things like CPU availability, memory availability, system load, license availability, etc. Because the Sun Grid Engine resource model is so flexible, however, any number of custom scheduling and resource management schemes are possible. For example, network bandwidth could be modeled as a resource. When a job requests a given bandwidth, it would only be scheduled on machines that can provide that bandwidth. The cluster could even be configured such that if a job lands on a resource that provides higher bandwidth than the job requires, the bandwidth could be limited to the requested value (such as through the Solaris Resource Manager).
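As a sketch of that bandwidth idea, the resource would be declared as a consumable complex and then requested at submission time. The resource name, units, and values below are all invented for illustration:

```shell
# In the complex configuration (edited with qconf -mc), the new resource
# would be a row along these lines:
#   name       shortcut  type  relop  requestable  consumable  default  urgency
#   bandwidth  bw        INT   <=     YES          YES         0        0

# A job that needs 100 (say, Mbit/s) then requests it like any other resource:
qsub -l bandwidth=100
```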

Further Reading

For more information about Sun Grid Engine, here are some useful links:

Beta Testing the Sun Grid Engine Hadoop Integration

In case you haven't heard yet, the upcoming release of Sun Grid Engine will include an integration with Apache Hadoop that will allow Map/Reduce jobs to be submitted to a Sun Grid Engine cluster while minding HDFS data locality. The 6.2u5 release will be out by the end of the year, but it's currently in the beta testing phase. And that's where you come in.

I'm looking for some volunteers to test the integration. To that end, this blog post will provide instructions for how to get the beta code checked out and built. The Hadoop integration is actually only loosely dependent on the Sun Grid Engine software itself. While it's planned to be part of u5, the integration should be usable with a cluster as old as 6.2u2, although I would really recommend at least 6.2u4.

In a nutshell, the integration consists of two components. The first is the hadoop parallel environment that allows Map/Reduce jobs to be started as parallel jobs in a Sun Grid Engine cluster. The second is the integration with HDFS, called Herd, that makes the Sun Grid Engine scheduler aware of the locations of the HDFS data blocks. Herd has two parts. One part is a load sensor that runs on every execution machine and reports the HDFS blocks on that machine. The other part is a JSV that translates HDFS data paths included in the job submission into a list of HDFS blocks needed by the job.

How to check out the source code

  1. Make sure you have a functional CVS client.
  2. cvs -d login
  3. cvs -d checkout gridengine/source

Technically, the above will only check out the source directory, but for the Hadoop integration, that's all you need. The Hadoop integration lives in three places. First, the scripts live in source/dist/hadoop. Second, the Herd code lives at source/libs/herd. Third, the JSV Java language binding upon which the Herd code depends lives at source/libs/jjsv.

How to build the source code

  1. Make sure you're using at least Ant 1.6.3 and the Java Standard Edition 6 platform.
  2. Copy the source/ file to
  3. Edit the file to include the corrects paths for the Java Standard Edition 6 platform and junit 3.8.
  4. Change to the gridengine/source directory.
  5. ant jjsv
  6. ant herd

After the above steps, you will find herd.jar at source/CLASSES/herd/herd.jar and JSV.jar at source/CLASSES/jjsv/JSV.jar.

How to install the integration

  1. Copy herd.jar and JSV.jar to the $SGE_ROOT/lib directory.
  2. Copy the source/dist/hadoop directory to somewhere accessible by all the execution nodes.

How to configure the integration

  1. Get HDFS up and running on your cluster. The most useful configuration will be to have every execution host be a data node, and to only have execution hosts as data nodes. Also, because of the way Hadoop does authentication and authorization, you'll need to make sure that either HDFS has security disabled or that root and the SGE admin user are in the HDFS super user group.
  2. Copy your Hadoop configuration directory to <hadoop>/conf, where <hadoop> is the directory that you copied in step 2 of How to install the integration.
  3. Delete the <hadoop>/conf/mapred.xml, <hadoop>/conf/masters, and <hadoop>/conf/slaves files.
  4. Edit the <hadoop>/ file to contain the paths to the Java platform, the Hadoop install directory, and the Hadoop configuration directory you just created (<hadoop>/conf).
  5. Change into the <hadoop> directory.
  6. ./ -i
  7. Add the hadoop parallel environment to one or more of your queues.

The script will install the hadoop parallel environment and the complexes needed by Herd. It will also start the Herd load sensor on all the execution hosts. At this point, you should be ready to go. Wait for a couple of minutes to give all of the execution hosts a chance to start running the load sensor and reporting values. You can run qhost -F hdfs_primary_rack to check that the load sensor is functioning correctly. Every execution host should report an hdfs_primary_rack value. If one or more machines have not reported a value within about five minutes, see the troubleshooting section below.

Using the integration

To submit a job that uses the hadoop parallel environment, use -pe hadoop <n>, where <n> is the number of nodes. The hadoop parallel environment uses an allocation rule that guarantees that no more than one task tracker per job will run on a single host. To tell the scheduler what data the job needs, request the hdfs_input resource with a value of the HDFS path to the job's data. The data path must be an absolute path.

Here's an example. Say I want to use the grep example to find occurrences of the word 'Sun' in a series of documents. First, I'd copy those documents into HDFS to /user/dant/sungrep (e.g. bin/hadoop fs -copyFromLocal ~/Documents/\* /user/dant/sungrep). I would then submit the job with echo `pwd`/bin/hadoop --config \\$TMPDIR/conf jar `pwd`/hadoop-0.20.1-examples.jar grep sungrep output Sun | qsub -pe hadoop 16 -l hdfs_input=/user/dant/sungrep -jsv <hadoop>/

Let's look at that in a little more detail. First, we're echoing the Hadoop command and piping it to qsub. Why? Well, when the integration runs, it creates a conf directory in the job's temp directory that is properly set up for the assigned hosts. Until the job runs, though, we don't know where the temp directory is. We get its path from the $TMPDIR variable once the job starts. We therefore need to wrap the Hadoop command in a script. We could either write a script that contains the command, or we could let qsub write one for us by piping the command to qsub's stdin. Note that we used --config \\$TMPDIR/conf in the command. The backslash is important because it prevents the shell on the submission host from interpreting the $TMPDIR variable.

Next, the qsub command uses -pe hadoop 16 to request 16 nodes. When this job is run, a job tracker will be started on the "master" host, and a task tracker will be started on each of the 16 assigned nodes. The master host is the host where the parallel job's master task is started. After the job tracker and task trackers are running, the grep job itself will be started, launched from the master host. The hadoop PE is a tight integration with an allocation rule of "1". In order to run a Hadoop job on top of SGE, you must use the PE, even if it's only a single-node job.

The qsub command also uses -l hdfs_input=/user/dant/sungrep -jsv <hadoop>/ The -l resource request tells SGE what data will be used by the job. It must be specified as an absolute path. The -jsv switch actually translates the resource request for hdfs_input into requests for specific racks and blocks. Without the -jsv switch, the job would never run because no node offers the hdfs_input resource. (No node offers it because it doesn't really exist. It's just a placeholder for the JSV to replace with rack and block requests. In programming terms, it's a reference injection point.) The resource request and JSV can be left out of the qsub command. If they're left out, the scheduler will not take the HDFS data locality into consideration when scheduling the job.

You can also use the Hadoop integration to set up the job tracker and task trackers and then submit jobs to them directly. Instead of echoing the Hadoop command to qsub, echo sleep 300000 instead. That will cause the job tracker and task trackers to be set up, but instead of running a job, it will just sleep for a long time. You can then run qstat -j <jobid> | grep context to show the job's context. One of the context variables will be the URL for the job tracker. Using that URL, you can set up a Hadoop configuration to talk to the job tracker so that you can submit jobs to it from the command line.
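Extracting the job tracker URL from the job context might look like the following. The qstat output is canned here (there is no cluster to query), and the hadoop_jobtracker variable name is a guess at what the integration publishes; check your own qstat -j output for the actual name.

```shell
# Canned stand-in for: qstat -j <jobid> | grep context
qstat_output='context:   hadoop_jobtracker=http://node07:9001/,foo=bar'

# Pull the URL out of the comma-separated context list.
url=$(echo "$qstat_output" | grep context \
      | sed 's/.*hadoop_jobtracker=\([^,]*\).*/\1/')
echo "$url"    # prints: http://node07:9001/
```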

It is also highly recommended that you couple the Hadoop integration with exclusive host access. The Hadoop task trackers all assume that they have exclusive access to their nodes. If you don't use exclusive host access with the Hadoop integration, you'll end up oversubscribing the nodes.
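Exclusive host access is configured through a consumable complex with the EXCL relational operator. The entry below follows the common naming convention, but it is a sketch; check your own complex configuration.

```
# Added via "qconf -mc"; jobs then request it with "qsub -l excl=true ..."
#name       shortcut  type  relop  requestable  consumable  default  urgency
exclusive   excl      BOOL  EXCL   YES          YES         0        1000
```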


Hopefully everything will work perfectly the first time. If for some reason it doesn't, here are some tips to help diagnose the problem:

The execds aren't reporting any hdfs resources, i.e. qhost -F | grep hdfs shows nothing.
Sometimes it takes several minutes for the nodes to start reporting the hdfs resources. If after several minutes there's still nothing, pick an execution host and check whether the load sensor is running: run jps -l and look for com.sun.grid.herd.HerdJsv. Note that it might be running as root or as the SGE admin user, and that jps may only show you your own processes. If the load sensor isn't running, look for log files in /tmp. They will be called sge_hadoop_loadsensor.out and sge_hadoop_<n>.log. The .out file is the output from starting the load sensor. The .log files are the logging output from the load sensor: one is the log file from the load sensor framework, and the other is the log file from the Herd load sensor. (You can control the logging verbosity from the logging configuration file in the <hadoop> directory.)

The most common problem is that the load sensor is started as root on most platforms (for a reason I don't yet understand), but HDFS usually is not. With HDFS, the user who started it is the super user, and only the super user can query the kind of information that the load sensor needs. As stated in the configuration section, you must either disable HDFS security or set a super user group that contains root (and probably the SGE admin user). The next most common problems are that the path to Hadoop or to the Java platform is not correct, or that the conf directory contains bad configuration information. You can test the load sensor manually by changing into the <hadoop> directory and running it by hand. If it works, it will "hang". Press enter, and it should spit out the hdfs resource values for that host. Type QUIT and press enter to exit the load sensor.
The job tracker and/or task trackers aren't starting.
The first place to look is the PE output and error files; the output from starting the job tracker should be found there. The next place to look is the log files, which are written wherever the Hadoop configuration says to put them. Make sure that, wherever that is, all users have access to it from all the nodes. Inability to write the log file is a common reason why the job tracker and/or task trackers won't start. In addition to the usual Hadoop log files, the integration also writes a hadoop-<adminuser>-sge-<hostname>.log file, which contains the output from starting the task trackers from the master host. Another common reason for the job tracker and/or task trackers not to start is that the path to the Java platform isn't configured correctly.

Monday Aug 10, 2009

Another Undocumented Feature

In reading the comments for Issue 409, I came across another undocumented feature I hadn't seen before. Apparently, if you pass a variable to your job through qsub or qrsh with the -v switch, and that variable's name starts with SGE_COMPLEX_, the SGE_COMPLEX_ prefix will be stripped off, the remainder will be treated as the name of a requested resource (complex), and that resource's requested value will be placed in the job's environment.

An example will make this easier to explain. If your job is able to run on multiple architectures, but you always select the architecture when you submit, you could add "-v SGE_COMPLEX_arch" to the qsub submission parameters, and the job's environment would then contain the value of arch that was requested via the -l arch=... resource request. In action, it would look like:

% qrsh -l arch=sol-amd64 -v SGE_COMPLEX_arch echo \\$SGE_COMPLEX_arch

Nice, but why is it useful? Well, maybe your script is capable of operating in multiple environments, but it needs to know about how it was submitted. For example, maybe the script changes your application's startup parameters based on the memory limits. The script could use this feature to get the memory limits from the submission parameters and act accordingly. Of course, it could also get the memory limits from ulimit(1), so maybe not the best example. Licenses may be a better example. The OS is blissfully unaware of license assignments. The only way for your script to find out about how many licenses were requested for it would be to use this feature (or do some clever digging with qstat).
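A job script using the memory-limit idea might branch like the sketch below. Since there's no cluster here, the injected variable is faked with a default; in a real job submitted with "-l h_vmem=2G -v SGE_COMPLEX_h_vmem" (and on a version where the feature still works), SGE would set it for you.

```shell
# Fake the injection so the branching logic can run anywhere;
# on a cluster, SGE would place the requested h_vmem value here.
SGE_COMPLEX_h_vmem=${SGE_COMPLEX_h_vmem:-2G}

# Size the (hypothetical) application heap from the requested limit.
case "$SGE_COMPLEX_h_vmem" in
  [0-9]G|[0-9][0-9]G) heap="-Xmx${SGE_COMPLEX_h_vmem}" ;;
  *)                  heap="-Xmx512M" ;;
esac
echo "launching with $heap"    # prints: launching with -Xmx2G
```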

You might have noticed by now that you could get the same effect by just passing in the requested complex value as an environment variable, e.g. "qsub -l arch=sol-amd64 -v arch=sol-amd64 ..." The difference between using the SGE_COMPLEX_ feature and using an environment variable explicitly is that with the SGE_COMPLEX_ feature you don't have to know what the requested value was, i.e. you can add it to an sge_request(5) file or write it into your script. And now we come to the real value. If you have a job that needs to know about its submission parameters, you can embed submission directives to add the needed complexes' values to the environment. Pretty handy. Whenever you can, it's a good idea to make your jobs and scripts as self-contained as possible.

Update: It would appear that this feature no longer works in 6.2. The last version I was able to verify it in was 6.1u2. Not such a big deal, though, because with 6.2 we introduced JSVs, which let you do the same thing and a tremendous amount more.

Thursday Jul 30, 2009

Sun HPC Software Workshop '09 -- Early Bird's Almost Over!

Just wanted to remind everyone that the early bird registration for the Sun HPC Software Workshop '09, Sept 7-10 in Regensburg, Germany, ends tomorrow (31 July 2009). It's your last chance to sign up at the discounted rate. After tomorrow, you will still be able to register, but the cost of registration will be higher.

In a nutshell, the Sun HPC Software Workshop '09 is a combination of our annual Grid Engine Workshop, a European edition of the popular Lustre Users Group meeting, and a conference on developing applications and services for HPC and cloud environments. The Workshop lasts three days, with a presentation track representing each of these topics. On the day before the main Workshop starts, we're also holding deeper technology seminars: a Lustre Deep Dive, a Grid Engine admin training, and a class on parallel application development taught by Ruud van der Pas. The Workshop and the preceding seminars are an excellent opportunity to learn more about these technologies and connect with the product engineers, partners, and other community members.

There is an open Call for Presentations for the Workshop, but it also closes tomorrow. If you're interested in proposing a talk for the Workshop (and getting a discounted registration fee if it's accepted), send a title, duration, and brief summary to the email address listed on the Agenda page. But, hurry. We'll be making our final decisions and notifying the speakers soon.

I look forward to seeing you there!

Tuesday Jul 21, 2009

Lies, Damned Lies, & DRMs

Some of our competitors seem to be very fond of spreading the rumor that the Sun Grid Engine product team has been laid off and/or that the product has been discontinued. It would appear that since they can't claim to have a better, more scalable, or more cost-effective product, they're willing to go with lying through their teeth to make the sale. Since I keep getting asked this question, I figured it would be worthwhile to post an official response.

To plagiarize Mark Twain, the rumors of our death have been greatly exaggerated. We're still here and going strong. The team is now roughly four times the size it was when I joined six years ago. It spans six offices in five countries on three continents. The product has a road map that reaches out past 2012 (which is as far as we're willing to speculate). We have a massive (if not leading) share in both the open source and licensed DRM system markets, and we're not planning to go away any time soon.

Of course, with the deal with Larry pending, nothing is certain. The only comment I can make there is "no comment." That said, for now at least, it's business as usual. We're still writing code, preparing releases, doing trainings, holding our annual Workshop, etc. Look for the next update this quarter. Look for the next release next year. And look for a whole lot more good stuff coming from our team over the next several updates and releases. With the features that have been added in the 6.2, 6.2u2 and 6.2u3 releases, Sun Grid Engine is in a great position. With what's coming up, I'd resort to lying too, if I worked for one of our competitors.

Monday Jul 20, 2009

European Students: Want a Free Laptop?

Are you a student in Europe\*? Do you want a new Toshiba laptop? Willing to write some code to get it? Good. Read on.

The OpenSolaris HPC team is currently running a programming contest for European students that was launched at ISC in Hamburg last month. The contest is to write the most performant and scalable implementation of a distributed hash table. Submissions can come from teams of up to three people. The top prize is a new Toshiba laptop for each member of the winning team.

For more information, check out the contest site. Better hurry, though, because the contest deadline is coming up quick!

\* Contest participation is limited to legal residents of a specific list of European countries. See the contest site for details.


1. DESCRIPTION OF THE CONTEST: The Sun HPC Software Student Programming Challenge ISC 2009 ("Contest") is designed to promote the use of the Sun HPC Software, Developer Edition 1.0 for OpenSolaris among students by having them compete to design and implement the most scalable and best-performing implementation of a common parallel algorithm. Prizes will be awarded to those who submit the best entries as determined by the judges in accordance with these Official Rules.

2. ELIGIBILITY: This contest is open only to teams of 1 to 3 currently-enrolled, full- or part-time, undergraduate or graduate, university or college students, who are the legal age of majority in their country, province or state of legal residence and residents of Denmark, France, Germany, Italy, Poland, Russia, Spain, Sweden, Switzerland, and the United Kingdom. Void in Puerto Rico, Quebec and where prohibited by law. Persons in any of the following categories are not eligible to participate or win the prize(s) offered: (a) Employees or agents of Sun Microsystems, their parent companies, affiliates and subsidiaries, participating advertising and promotion agencies, application development partner companies, and prize suppliers; (b) immediate family members (defined as parents, children, siblings and spouse, regardless of where they reside) and/or those living in the same household as any person in (a) above; and (c) employees of any government entity. You must also have access to the Internet and a valid email address in order to enter or win.

3. HOW TO ENTER: This contest begins at 12:01 P.M. Pacific Time (PT) Zone in the United States (e.g. San Francisco time) which is 5:01 A.M. Greenwich Mean Time (GMT) on the 29th of June 2009 and ends at 11:59 P.M. (PT) which is 4:59 A.M. (GMT) on 10th of August 2009 ("Contest Period"). IMPORTANT NOTICE TO ENTRANTS: ENTRANTS ARE RESPONSIBLE FOR DETERMINING THE CORRESPONDING TIME ZONE IN THEIR RESPECTIVE JURISDICTIONS.

4. THE SUBMISSION: Create an implementation of a fault-tolerant distributed hash table as described at The implementation must be written in C for the OpenSolaris 2009.06 operating environment using the Sun HPC ClusterTools 8.1 OpenMPI implementation and must be submitted as a Sun Studio 12 project. All Entries must include a valid and complete Sun Studio 12 project that builds without errors on an unmodified instance of the Sun HPC Software, Developer Edition 1.0 for OpenSolaris. Entries may be submitted either electronically or via mail. All Entries must be comprised of original work of the submitter(s). No participant may submit an Entry as a member of more than one team.

Electronic Entries must include a 1-3 page written summary of the implementation approach and the name(s) of the submitter(s). The electronic file must be a gzipped tar file that includes the Sun Studio 12 project directory, including all required files, and must be no larger than 5MB in size. If the electronic file is larger than 5MB in size, it must be submitted by mail in accordance with the instructions below. The electronic entry must be sent via email to and received no later than 11:59 PM (PDT) on August 10th, 2009 in the United States.

Mailed Entries must include a 1-3 page written summary of the implementation approach and the name(s) of the submitter(s), and a CD or DVD containing the project code as described above. All mailed Entries must be sent to Sun HPC Software Programming Challenge, c/o Sun Microsystems, Inc., 17 Network Circle, Menlo Park, CA 94025, MS-MPK17-207, and must be received no later than 11:59 PM (PDT) on August 10th, 2009 in the United States.

All Entries must be in English. Registration or Entries that are in any other language will not be considered. Entries that are lewd, obscene, pornographic, disparaging of the Sponsor or otherwise contain objectionable material may be disqualified in the Sponsor's sole and unfettered discretion.

5. JUDGING: All Entries will be judged by a panel of experts based on the following equally weighted judging criteria: data retrieval throughput for requests coming from a single node, data retrieval throughput for parallel requests coming from multiple nodes, ability to withstand processing node failure, and scalability with respect to number of processing nodes and number of data items. In the event of a tie, the person or team among the tied Entries with the highest score in scalability with respect to number of processing nodes and number of data items will be declared the winner. In the event that no entries are received, no prize will be awarded. Decisions of judges are final and binding. Winner will be notified by email.

6. PRIZES AND APPROXIMATE RETAIL VALUE: First prize: Toshiba OpenSolaris laptop valued at approximately $2,000. Second and third prizes: Apple iPod valued at approximately $150. Up to three Toshiba laptops and six Apple iPods may be awarded. Prize includes round-trip coach air transportation for one person from major airport nearest winner's residence and hotel accommodations for one person for four nights. Hotel accommodations at Sponsor's discretion. Certain black out dates apply. In the event the Sun HPC Software Workshop is cancelled or postponed for any reason, Sponsor reserves the right to award the remainder of the prize with no further obligation to the winner. All other expenses not specified herein are the responsibility of the winner. ALL TAXES AND ANY APPLICABLE WITHHOLDING AND REPORTING REQUIREMENTS ARE THE SOLE RESPONSIBILITY OF THE WINNER. Cash prizes will be awarded in US Dollars. All costs associated with currency exchange are the sole responsibility of the winner.

7. CONDITIONS OF PARTICIPATION. Sponsor reserves the right to substitute a prize for an item of equal or greater value in the event all or part of a prize becomes unavailable. Prizes are awarded without warranty of any kind from Sponsor, express or implied, without limitation, except where this would be contrary to federal, state, provincial, or local laws or regulations. All federal, state, provincial and local laws and regulations apply. Submission of entry into this Contest deems that entrants agree to be bound by the terms of these Official Rules and by the decisions of Sponsor, which are final and binding on all matters pertaining to this Contest. Return of any prize/prize notification may result in disqualification and selection of an alternate winner. Any potential winner who cannot be contacted within 15 days of attempted first notification will forfeit his/her prize. Potential prize winner(s) may be required to sign and return an Affidavit or Declaration of Eligibility/Liability & Publicity Release within 30 days following the date of first attempted notification. Failure to comply within this time period may result in disqualification and selection of an alternate winner. Travel companion of winner must also execute an Affidavit of Eligibility/Liability & Publicity Release prior to ticketing and must possess required travel documents (e.g. valid photo I.D.) prior to departure. Once the travel schedule has been arranged, it cannot be altered and failure of winner to follow such schedule shall not obligate Sponsor in any way to provide the winner with alternate arrangements. The intellectual and industrial property rights to the contest submission, if any, will remain with the participants, except that these terms do not supersede any other assignment or grant of rights according to any other separate agreements between participants and other parties. 
As a condition of entry, participants agree that Sun shall have the right to use, copy, modify and make available the application or code in connection with the operation, conduct, administration, and advertising and promotion of the Contest via communication to the public, including, but not limited to the right to make screenshots, animations and video clips available to the public for promotional and publicity purposes. Notwithstanding the foregoing, ownership of and all intellectual and industrial property rights in and to the application and code shall remain with the participant. Acceptance of the prize constitutes permission for, and winners consent to, Sponsor and its agencies to use a winner's name and/or likeness and entry for advertising and promotional purposes without additional compensation, unless prohibited by law. To the extent permitted by law, entrants, agree to hold Sponsor, its parent, subsidiaries, agents, directors, officers, employees, representatives and assigns harmless from any injury or damage caused or claimed to be caused by participation in the Contest and/or use or acceptance of any prize won, except to the extent that any death or personal injury is caused by the negligence of the Sponsor. Sponsor is not responsible for any typographical or other error in the printing of the offer, administration of the Contest or in the announcement of the prize. A participant may be prohibited from participating in this Contest if, in the Sponsor's sole discretion, it reasonably believes that the participant has attempted to undermine the legitimate operation of this Contest by cheating, deception, or other unfair playing practices or annoys, abuses, threatens or harasses any other participants, the Sponsor or associated agencies. In the event a winner/potential winner's employer has a policy, which prohibits the awarding of a prize to an employee, the prize will be forfeited and an alternate winner will be selected.

8. NO RECOURSE TO JUDICIAL OR OTHER PROCEDURES: To the extent permitted by law, the rights to litigate, to seek injunctive relief or to make any other recourse to judicial or any other procedure in case of disputes or claims resulting from or in connection with this contest are hereby excluded, and any participant expressly waives any and all such rights.

Participants agree that these Official Rules are governed by the laws of California, USA.

9. DATA PRIVACY: Participants agree that personal data, especially name and address, may be processed, stored and otherwise used for the purposes and within the context of the contest and any other purposes outlined in these Official Rules. The data may also be used by the Sponsor in order to check participants' identity, their postal address and telephone number, or to otherwise verify their eligibility to participate in the Contest and to receive any prize. Participants have a right to access, review, rectify or cancel any personal data held by the Sponsor by writing to Sponsor (Attention: Daniel Templeton) at the address listed below. If participant's data is not provided or is canceled participants' Entries will be ineligible.

10. WARRANTY AND INDEMNITY: Entrants certify that their entry is original and that they are the sole and exclusive owner and right holder of the submitted entry and that they have the right to submit the Entry in the Contest. Each participant agrees not to submit any Entry that (1) infringes any 3rd party proprietary, intellectual property, industrial property, personal rights or other rights, including without limitation, copyright, trademark, patent, trade secret or confidentiality obligation; or (2) otherwise violates applicable law in any countries in the world. To the maximum extent permitted by law, each participant indemnifies and agrees to keep indemnified the Sponsor its parent, subsidiaries, agents, directors, officers, employees, representatives and assigns harmless at all times from and against any liability, claims, demands, losses, damages, costs and expenses resulting from any act, default or omission of the participant and/or a breach of any warranty set forth herein. To the maximum extent permitted by law, each participant indemnifies and agrees to keep indemnified the Sponsor, its parent, subsidiaries, agents, directors, officers, employees, representatives and assigns harmless at all times from and against any liability, actions, claims, demands, losses, damages, costs and expenses for or in respect of which the Sponsor will or may become liable by reason of or related or incidental to any act, default or omission by a participant under these Official Rules including without limitation resulting from or in relation to any breach, non-observance, act or omission whether negligent or otherwise, pursuant to these official rules by a participant.

11. ELIMINATION: Any false information provided within the context of the Contest by any participant concerning identity, postal address, telephone number, ownership of right or non-compliance with these rules or the like may result in the immediate elimination of the participant from the Contest. Sponsor further reserves the right to disqualify any Entry that it believes in its sole and unfettered discretion infringes upon or violates the rights of any third party or otherwise does not comply with these official rules.

12. INTERNET: Sponsor is not responsible for electronic transmission errors resulting in omission, interruption, deletion, defect, delay in operations or transmission. Sponsor is not responsible for theft or destruction or unauthorized access to or alterations of entry materials, or for technical, network, telephone equipment, electronic, computer, hardware or software malfunctions or limitations of any kind. Sponsor is not responsible for inaccurate transmissions of or failure to receive entry information by Sponsor on account of technical problems or traffic congestion on the Internet or at any Web site or any combination thereof, except to the extent that any death or personal injury is caused by the negligence of the Sponsor. If for any reason the Internet portion of the program is not capable of running as planned, including infection by computer virus, bugs, tampering, unauthorized intervention, fraud, technical failures, or any other causes which corrupt or affect the administration, security, fairness, integrity, or proper conduct of this Contest, Sponsor reserves the right at its sole discretion to cancel, terminate, modify or suspend the Contest. Sponsor reserves the right to select winners from eligible entries received as of the termination date. Sponsor further reserves the right to disqualify any individual who tampers with the entry process. Caution: Any attempt by a contestant to deliberately damage any Web site or undermine the legitimate operation of the game is a violation of criminal and civil laws and should such an attempt be made, Sponsor reserves the right to seek damages from any such contestant to the fullest extent of the law.

13. If any provision(s) of these Official Rules are held to be invalid or unenforceable, all remaining provisions hereof will remain in full force and effect.

14. WINNER'S LIST: For winner's name, log onto on or about August 14th, available for a period of up to 60 days.

15. SPONSOR: The Sponsor of this Contest is Sun Microsystems, Inc., 4220 Network Circle, Santa Clara, CA 95054.

Sun HPC Software Workshop '09

Every year, usually in the autumn, we have a Grid Engine workshop, usually at the Grid Engine home base in Regensburg, Germany. (Last year was an exception in that we held the conference in the spring in Oakland. What were we thinking?) This year will be no exception. September 7-10 at the Best Western Premier in Regensburg, Germany, we'll be holding the next Grid Engine workshop. What is exceptional about this year, though, is that we're expanding the scope to be about all of Sun's HPC software offerings.

This year, the workshop will offer three separate tracks. One track will be essentially the Grid Engine workshop that we all know and love. The second track will be focused on Open Storage technologies, like Lustre, SAM-QFS, ZFS, etc. The last track will be about development tools and technologies for HPC and the cloud, including Sun's HPC developer tools, Hadoop, Fortress, the Sun Cloud, etc.

If you're interested in any of these technologies, especially Grid Engine and/or Lustre, this is a conference you won't want to miss. And as an added incentive, the conference falls squarely in the middle of the Regensburger Herbstdult, which is the city's autumn festival. In US terms, it's a lot like a county fair with beer tents. In general, think mini-Oktoberfest. Monday (Sept. 7th) night, we'll take a delegation of folks from the conference over to the Dult for an evening of socializing over a few liters of beer. (I have empirically proven my limit to be 2.5L in a sitting.)

The Call For Presentations for the conference is open until the end of July. If you're doing something interesting with one, some, or all of these technologies, we'd love to hear from you. We have presentation slots open that are 15, 25, and 55 minutes long. In addition, if your talk is selected for the Workshop, you will get a discounted registration fee. For details, click on the Call for Presentations tab on the Workshop site.

And as if all that wasn't tempting enough, Monday, September 7th, the first day of the Workshop, will be devoted to deep-dive seminars. These will include a full-day Grid Engine administration training, a Lustre internals deep dive, and a parallel programming class. There is an additional fee for attending the seminars, and there are a limited number of seats. If you're interested, sign up now!

I hope to see you there! (Look for updates on Twitter via the #sunhpc09 hash tag.)

Friday Jul 17, 2009


So, you may have noticed that I haven't been blogging much lately. That's partially because I'm completely swamped and partially because I've started tweeting a lot of the things that I would have normally blogged. I'm finding that for posting links or sending out tips and tricks, Twitter is lower overhead than blogging.

One of the things I've been doing on Twitter is a Grid Engine Tip of the Day. Most of the time it's something that I just answered on the mailing list. In general, though, it's intended to be little things that you might not have realized or known about.

Something I really like about Twitter is that it's more conversational. Yes, you can leave comments on my blog posts, and yes, that technically fills the same purpose, but it just seems so much more natural to ask a question on Twitter. Of course, before asking questions can be useful, I have to build a following. To that end, I've started doing something else on Twitter: when people respond to the questions I ask, I've been sending the first few "thank-you gifts", which thus far have been 4GB USB memory sticks with the OpenSolaris logo. Think of it as positive reinforcement\*.

I would love to hear your opinions, here or on Twitter, about using Twitter to convey the kind of information that I'm prone to share.

\*: I reserve the right to be completely arbitrary about when and to whom I send something and what I send them, if I send them anything. Proper positive reinforcement demands randomness.

Tuesday Mar 31, 2009

Strange Times

I just saw the news that Rackable Systems has bought SGI. Oh, how the mighty have fallen. Reminds me of when Wizards of the Coast bought TSR. I hope that's the last acquisition news we hear for a while...

By the way, I've started tweeting links like this one, rather than blogging them. If you don't want to miss out on any of the fun, follow me there. (I've also started tweeting a Grid Engine tip of the day.)

Thursday Mar 19, 2009

Rube Goldberg Gone Wild

I just can't not post this. Assuming they're not cheating by editing the film, this is easily the largest Rube Goldberg machine I've ever seen, and they're really creative about the elements they used. They do lose points for not using live animals, though.

(Anyone have a better link for this video? I'm sure it's on YouTube or Google Video somewhere, but at the moment, I can't get it to play again, so I don't have details with which to search for it.)

Podcast: New Installer in Sun Grid Engine 6.2 Update 2

I just posted a new podcast on the new installer in Sun Grid Engine 6.2u2. Check it out.

Monday Mar 16, 2009

New Installer in Sun Grid Engine 6.2 Update 2

In my previous post, I talked about the new installer that is included with Sun Grid Engine 6.2u2. Lubos, one of our core team (as opposed to Service Domain Manager or QA) engineers in Prague, has just posted a couple of videos of the new installer. The first one shows how to make sure the new installer can be used with the machines you're planning to use for your cluster. Because the new installer can install an entire cluster at once, it has to be able to contact all the machines destined for the cluster, and that's where the setup comes in. The second one actually shows off the new installer. Lubos also has some screenshots of the new installer posted.

Thursday Mar 05, 2009

Sun Grid Engine 6.2 Update 2 Is Out!

Sun Grid Engine 6.2u2 is now available. If you're not excited, you should be. First off, don't let the name fool you. 6.2u2 is not just bug fixes. It's a full feature release, and contains some great features. What features? Glad you asked.

First and foremost, job submission verifiers (JSVs). It's a feature we added specifically for TACC, but it's one that will be useful for almost everyone. In fact, I suspect that we'll discover it's the answer to some of the classic Sun Grid Engine problems. What is it? Before 6.2u2, there was no way to prevent a job from being submitted. It was (and still is) possible to choose not to schedule a job after it's been submitted, but before 6.2u2, that's all you could do. With 6.2u2 and JSV, you now have the option to insert a step between submission and acceptance. With that step, you can choose to accept or reject the job submission, but you can also choose to modify the job before accepting it, and that's where the magic comes in.

The verification step is handled through scripts or binaries. There's a new submission option, -jsv, that adds a JSV to the submission. That means you can pick up JSVs from anywhere that you can stash a submission option: most notably the global sge_request file, your user sge_request file, and the directory's sge_request file, but also DRMAA native specification, DRMAA job category, the enigmatic -@ switch, and, of course, the command line itself. The -jsv switch is cumulative, so if you have one in several of those places, several JSVs will be run for your submission. It's worth noting that all of the above listed JSV sources are controlled by the user, except the global sge_request file, and even that can be overridden with the -clear switch.
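To make that concrete, a client-side JSV is just a script that speaks a simple line-based protocol on stdin/stdout. Here's a rough sketch of a do-nothing JSV that accepts every job unchanged; the message names are from memory, so treat the shipped examples under $SGE_ROOT/util/resources/jsv as the authoritative reference. (The loop is wrapped in a function here only to make it easy to exercise.)

```shell
#!/bin/sh
# Minimal client-side JSV sketch. A JSV reads protocol lines on stdin
# and answers on stdout; this one accepts every job as-is.
jsv_loop() {
    while read cmd args; do
        case "$cmd" in
            START) echo "STARTED" ;;              # handshake reply
            PARAM|ENV) : ;;                       # a real JSV would inspect "$args" here
            BEGIN) echo "RESULT STATE ACCEPT" ;;  # or CORRECT / REJECT
            QUIT)  return 0 ;;
        esac
    done
}
```

You'd then wire it in with something like `qsub -jsv /path/to/accept.sh job.sh`, or stash the `-jsv` option in one of the sge_request files mentioned above (the script path is hypothetical).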

So far, we've only talked about the client side. JSVs can also come in on the server side. In the global host configuration, an administrator can configure a single JSV. Unlike on the client side, where every JSV is started from scratch with every job submission, on the server side the JSV is started once and queried repeatedly. The reason is that on the client side, performance isn't a big issue, but on the server side, the cost of forking and exec'ing the JSV for every job submission can have a huge impact. By keeping the JSV running, we save that cost. The big advantage of the server-side JSV is that users can't circumvent it. If you really need to enforce a policy with a JSV, the server side is the place to do it.

Now, if you're thinking fast, you might question the point of the server-side JSV when users can change everything about the job using qalter after it's submitted. Well, so did we. When you configure a server-side JSV, users are no longer allowed to modify jobs after submission unless you specifically grant the ability to do so, and even then it's limited to the job attributes that you allow them to modify.
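Both of those knobs live in the global cluster configuration. As a sketch (the script path is hypothetical, and the exact value syntax is best checked against the sge_conf man page):

```
# qconf -mconf    (edit the global configuration)
jsv_url          script:/sge/common/jsv/enforce_policy.sh
jsv_allowed_mod  ac,h
```

Here jsv_url points at the server-side JSV, and jsv_allowed_mod whitelists the job attributes users may still change with qalter once a server-side JSV is in place.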

JSV is a huge topic, and I could probably go on for days about it. Instead I'll save it for a white paper and move on.

The next big feature in 6.2u2 is the new installer. You now have the option of using the old interactive text-based installer or a new graphical installer. The graphical installer has several important advantages. First, it lets you install an entire cluster at once. It actually sits on top of the auto-installer and reuses that same functionality to install remote nodes. The graphical installer, however, will first verify that all the nodes are reachable before the installation starts, so the installation won't quietly hang on an unreachable node. It also accepts wildcarded host name and IP address ranges, which makes installing a huge cluster much simpler.

The third major feature is that we've added support for Microsoft Windows Vista (Ultimate and Enterprise) and Server 2003R2 and 2008. Both 32-bit and 64-bit versions are available. Harald (who you should encourage to start blogging!) worked really hard on ironing out the issues with the changes in the OS. We still rely on SFU for the Windows execution daemons, except that it's now called SUA.

The fourth big feature is job-level resource requests for parallel jobs. Before 6.2u2, whenever a parallel job requested a resource, SGE would implicitly multiply that resource request by the number of assigned slaves (because each slave requests the resource on the host where it runs). That makes sense with, say, memory, where requesting 4GB really means that every slave should have 4GB. It doesn't make any sense for other things, like some software licenses. Now with 6.2u2, the administrator can flag a resource as job level, meaning that it is not multiplied by the number of assigned slaves when requested by a parallel job. In most cases, a resource that shouldn't be multiplied for one job shouldn't be multiplied for any job. There may be exceptions to the rule, but I doubt there will be many. I'd love to hear your feedback, though.
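The job-level flag lives in the consumable column of the complex configuration. A sketch of what that looks like, using a hypothetical license resource (column values here are illustrative; check the complex man page for the exact format):

```
# qconf -mc    (one line from the complex configuration)
#name         shortcut  type  relop  requestable  consumable  default  urgency
app_license   al        INT   <=     YES          JOB         0        0
```

With consumable set to JOB instead of YES, a submission like `qsub -pe mpi 16 -l app_license=1` debits one license for the whole job rather than sixteen.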

The last two new features aren't so much features as improvements. Starting with 6.2u2, the 64-bit Linux binaries use the jemalloc library instead of the default Linux malloc. The performance and memory footprint impact is significant, in some cases as much as a 20% improvement. Also, starting with 6.2u2, the Linux binaries use poll() instead of select() in the commlib. Because select() is limited to a fixed number of file descriptors (FD_SETSIZE, typically 1024), on some flavors of Linux it was difficult to scale past a couple thousand hosts. With the commlib now using poll(), which has no such fixed limit, I've seen SGE scale to well over 6000 Linux nodes.

And on top of all that, there is the usual pile of bug fixes. A handful of qmaster and scheduler issues cropped up recently in 6.2 and 6.2u1, but with 6.2u2 those should all now be resolved.

I highly recommend giving 6.2u2 a try, if for no reason other than JSV. Let me know what you think!

Sunday Feb 22, 2009

I Like This Guy

At the Omniture Summit '09 last week I listened to a keynote presentation by George Colony, head honcho over at Forrester. I found his style very entertaining and his points mostly on target. I recently took a look at his blog, and I think I really like this guy. His blog is definitely worth a read. (I also love the irony that in the brave new world of social media, I, Joe Nobody, can announce with a straight face to my faceless readership that I approve of the founder of Forrester. I also approve of Peter Gabriel and Scott McNealy, by the way.)



