Friday May 29, 2009

Rur Valley Journal: What's Up at Jülich?

What's up at Jülich? The latest 200+ TeraFLOPs of Sun-supplied HPC compute power is now up and running!

The JuRoPa (Jülich Research on PetaFLOP Architectures) system at the Jülich Research Center in Jülich, Germany has just come online this week. A substantial part of the system is built with the Sun Constellation System architecture, which marries highly dense blade systems with an efficient, high-performance QDR InfiniBand fabric in an HPC cluster configuration.

We delivered 23 cabinets filled with a total of 1104 Sun Blade x6275 servers or 2208 nodes. Each of these nodes is a dual-socket Nehalem-EP system running at 2.93 GHz. The systems are connected with quad data-rate (QDR) 4X InfiniBand using a total of six of our latest 648-port QDR switches. As usual, we use 12X InfiniBand cables to route three 4X connections, thereby greatly reducing the number of cables and connectors, and increasing the reliability of the fabric. For more detail on the Nehalem-EP blades and other components used in this system, see this blog entry.
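Back-of-the-envelope check on that "200+ TeraFLOPs" figure, assuming (my assumption, not a quoted spec) four double-precision floating-point operations per core per clock for Nehalem-EP:

\[
2208\ \text{nodes} \times 8\ \text{cores/node} \times 4\ \text{flops/cycle} \times 2.93\ \text{GHz} \approx 207\ \text{TFLOPs}
\]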

I've annotated one of the official photos below. Marc Hamilton has many more photos on his blog, including some cool "underground shots" at Jülich.



Wednesday Apr 15, 2009

Tickless Clock for OpenSolaris

I've been talking a lot to people about the convergence we see happening between Enterprise and HPC IT requirements and how developments in each area can bring real benefits to the other. I should probably do an entire blog entry on specific aspects of this convergence, but for now I'd like to talk about the Tickless Clock OpenSolaris project.

Tickless kernel architectures will be familiar to HPC experts as one method for reducing application jitter on large clusters. For those not familiar with the issue, "jitter" refers to variability in the running time of application code due to underlying kernel activity, daemons, and other stray workloads. Since MPI programs typically run in alternating compute and communication phases and develop a natural synchronization as they do so, applications can be slowed down significantly when some nodes arrive late at these synchronization points. The larger the MPI job, the more likely it is that this type of noise will cause a problem. Measurements have shown surprisingly large slowdowns associated with jitter.
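To make the effect a bit more concrete, here is a minimal sketch (in C with MPI, purely illustrative and not a real benchmark) of the kind of micro-measurement often used to expose jitter: every rank repeatedly performs an identical, fixed amount of work and then synchronizes, and the spread between the fastest and slowest iterations gives a rough sense of how much noise the nodes inject.

```c
/* jitter.c -- minimal sketch of an OS-noise (jitter) probe.
 * Each iteration: do a fixed chunk of "compute", then barrier.
 * On a quiet system the per-iteration times are nearly constant;
 * kernel and daemon activity shows up as occasional slow iterations.
 * Compile (assuming an MPI install): mpicc -O2 jitter.c -o jitter
 */
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000
#define WORK  200000

int main(int argc, char **argv)
{
    int rank;
    double tmin = 1e30, tmax = 0.0;
    volatile double x = 1.0;        /* volatile keeps the work loop honest */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < ITERS; i++) {
        double t0 = MPI_Wtime();
        for (int j = 0; j < WORK; j++)      /* identical work every pass */
            x = x * 1.0000001 + 0.0000001;
        MPI_Barrier(MPI_COMM_WORLD);        /* the natural sync point */
        double t = MPI_Wtime() - t0;
        if (t < tmin) tmin = t;
        if (t > tmax) tmax = t;
    }

    if (rank == 0)
        printf("fastest iteration %.6fs, slowest %.6fs (ratio %.2fx)\n",
               tmin, tmax, tmax / tmin);

    MPI_Finalize();
    return 0;
}
```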

Jitter can be lessened by reducing the number of daemons running on a system, by turning off all non-essential kernel services, and so on. Even with these changes, however, there are other sources of jitter. One notable source is the clock interrupt used in virtually all current operating systems. This interrupt, which fires 100 times per second, is used to periodically perform housekeeping chores required by the OS and is a known contributor to jitter. It is for this reason that IBM has implemented a tickless kernel on their Blue Gene systems.

Sun is starting a Tickless Clock project in OpenSolaris to completely remove the clock interrupt and switch to an event-based architecture for OpenSolaris. While I expect this will be very useful for HPC users of OpenSolaris, HPC is not the primary motivator of this project.

As you'll hear in the video interview with Eric Saxe, Senior Staff Engineer in Sun's Kernel Engineering group, the primary reasons he is looking at Tickless Clock are power management and virtualization. For power management, it is important that when the system is idle, it really IS idle and not waking up 100 times per second to do nothing, since this wastes power and prevents the system from entering deeper power-saving states. For virtualization, since multiple OS instances may share the same physical server resources, it is important that guest OSes that are idle really do stay idle. Again, waking up 100 times per second to do nothing steals cycles from active guest OS instances, thereby reducing performance in a virtualized environment.

While I would argue that both power management and virtualization will become increasingly important to HPC users (more of that convergence thing), it is interesting to me to see that these traditionally enterprise issues are stimulating new projects that will benefit both enterprise and HPC customers in the future.

Interested in getting involved with implementing a tickless architecture for OpenSolaris? The project page is here.


Tuesday Apr 14, 2009

You Say Nehalem, I Say Nehali


Let me say this first: NEHALEM, NEHALEM, NEHALEM. You can call it the Intel Xeon Series 5500 processor if you like, but the HPC community has been whispering about Nehalem and rubbing its collective hands together in anticipation of Nehalem for what seems like years now. So, let's talk about Nehalem.

Actually, let's not. I'm guessing that between the collective might of the Intel PR machine, the traditional press, and the blogosphere you are already well-steeped in the details of this new Intel processor and why it excites the HPC community. Rather than talk about the processor per se, let's talk instead about the more typical HPC scenario: not about a single Nehalem, but rather piles of Nehali and how best to deploy them in an HPC cluster configuration.

Our HPC clustering approach is based on the Sun Constellation architecture, which we've deployed at TACC and other large-scale HPC sites around the world. These existing systems house compute nodes in the Sun Blade 6048 chassis, which holds four blade shelves, each with twelve blade systems, for a total of 48 blades per chassis. Constellation also includes matching InfiniBand infrastructure, including InfiniBand Network Express Modules (NEMs) and a range of InfiniBand switches that can be used to build petascale compute clusters. You can see several of the 82 TACC Ranger Constellation chassis in this photo, interspersed with (black) inline cooling units.

As part of our continued focus on HPC customer requirements, we've done something interesting with our new Nehalem-based Vayu blade (officially, the Sun Blade X6275 Server Module): each blade houses two separate nodes. Here is a photo of the Vayu blade:

Each of the nodes is a diskless, two-socket Nehalem system with 12 DDR3 DIMM slots per node (up to 96 GB per node) and on-board QDR InfiniBand. It's actually not quite correct to call these nodes diskless, because the blade includes two Sun Flash Module slots (one per node), each providing up to 24 GB of flash storage through a SATA interface. I am sure our HPC customers will use what amounts to an ultra-fast disk for some interesting applications.

Using Vayu blades, each Sun Constellation chassis can now support a total of 2 nodes/blade × 12 blades/shelf × 4 shelves/chassis = 96 nodes with a peak floating-point performance of about 9 TFLOPs. While a chassis can support up to 96 GB/node × 96 nodes = 9.2 TB of memory, there are some subtleties involved in optimizing and configuring memory for Nehalem systems, so I recommend reading John Nerl's blog entry for a detailed discussion of this topic.
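For the curious, the "about 9 TFLOPs" figure falls out of the usual peak arithmetic, assuming (my assumptions, so adjust for the actual SKU) 2.93 GHz parts and four double-precision flops per core per clock:

\[
96\ \text{nodes} \times 8\ \text{cores/node} \times 4\ \text{flops/cycle} \times 2.93\ \text{GHz} \approx 9\ \text{TFLOPs}
\]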

For a quick visual tour of Vayu, see the annotated photo below. The major components are: A = Nehalem 4-core processor; B = Memory DIMMs; C = Mellanox ConnectX QDR InfiniBand ASIC; D = Tylersburg I/O chipset; E = Base Management Controller (BMC) / Service Processor (one per node); F = Sun Flash Modules (SATA, 24 GB per node). The connector at the top right supports two PCIe2 interfaces, two InfiniBand interfaces, and two GbE interfaces. The GbE logic is hiding under the BMC daughter board.

To accommodate this high-density blade approach we've developed a new QDR InfiniBand NEM (officially called the Sun Blade 6048 QDR IB Switched NEM), which is shown below. This Network Express Module plugs into a blade shelf's midplane and forms the first level of InfiniBand fabric in a cluster configuration. Specifically, the two on-board 36-port Mellanox QDR switch chips act as leaf switches from which larger configurations can be built. Of the 72 total switch ports available, 24 are used for the Vayu blades and 9 from each switch chip are used to interconnect the two switches, leaving a total of 72 - 24 - 2×9 = 30 QDR links available for off-shelf connections. These links leave the NEM through 10 physical connectors, each of which carries three 4X QDR links over a single cable. As we discussed when the first version of Constellation was released, aggregating 4X links into 12X cables results in significant customer benefits related to reliability, density, and complexity. These cables can be connected to Constellation switches to form larger, tree-based fabrics, or they can be used in switchless configurations to build torus-based topologies by using the ten cables to carry X, Y, and Z traffic between shelves and across chassis, as mentioned, for example, here. The NEM also provides GbE connectivity for each of the 24 nodes in the blade shelf.

Looking back at the TACC photo, we can now double the compute density shown there with our newest Constellation-based systems using the new Vayu blade. Oh, and by the way, we can also remove those inline cooling units and pack those chassis side-by-side for another significant increase in compute density. I'll leave how we accomplish that last bit for a future blog entry.

Friday Apr 03, 2009

HPC in Second Life (and Second Life in HPC)



We held an HPC panel session yesterday in Second Life for Sun employees interested in learning more about HPC. Our speakers were Cheryl Martin, Director of HPC Marketing; Peter Bojanic, Director for Lustre; Mike Vildibill, Director of Sun's Strategic Engagement Team (SET); and myself. We covered several aspects of HPC: what it is, why it is important, and how Sun views it from a business perspective. We also talked about some of the hardware and software technologies and products that are key enablers for HPC: Constellation, Lustre, MPI, etc.

As we were all in-world at the time, I thought it would be interesting to ponder whether Second Life itself could be described as "HPC" and whether we were in fact holding the HPC meeting within an HPC application. Having viewed this excellent SL Architecture talk given by Ian (Wilkes) Linden, VP of Systems Engineering at Linden Lab, I conclude that SL is definitely an HPC application. Consider the following information taken from Ian's presentation.


As you can see, the geography of SL has been exploding in size over the last 5-6 years. As of Dec 2008 that geography is simulated using more than 15K instances of the SL simulator process, which, in addition to computing the physics of SL, also run an average of 30 million simultaneous server-side scripts to create additional aspects of the SL user experience. And look at the size of their dataset: 100 TB is very respectable from an HPC perspective. And a billion files! Many HPC sites are worrying what will happen when they get to that level of scale, while Linden Lab is already dealing with it. I was surprised they aren't using Lustre, since I assume their storage needs are exploding as well. But I digress.


The SL simulator described above would be familiar to any HPC programmer. It's a big C++ code. The problem space (the geography of SL) has been decomposed into 256m x 256m chunks that are each assigned to one instance of the simulator. Each simulator process runs on its own CPU core, and "adjacent" simulator instances exchange edge data to ensure consistency across sub-domain boundaries. And it's a high-level physics simulation. Smells like HPC to me.
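That "exchange edge data with adjacent instances" pattern is the same halo (ghost-cell) exchange found at the heart of most domain-decomposed HPC codes. Here is a minimal 1-D sketch of the idea in C with MPI (purely illustrative; this is emphatically not Linden Lab's code):

```c
/* halo.c -- 1-D halo (ghost cell) exchange sketch, C with MPI.
 * Each rank owns N interior cells plus one ghost cell on each side;
 * neighbors swap edge values so that adjacent sub-domains agree on
 * what happens at their shared boundary.
 * Compile (assuming an MPI install): mpicc -O2 halo.c -o halo
 */
#include <mpi.h>
#include <stdio.h>

#define N 256   /* interior cells per rank -- an arbitrary example size */

int main(int argc, char **argv)
{
    int rank, size;
    double cells[N + 2];          /* cells[0] and cells[N+1] are ghosts */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < N + 2; i++)
        cells[i] = -1.0;          /* mark ghosts as "unknown" */
    for (int i = 1; i <= N; i++)
        cells[i] = rank;          /* trivial interior state */

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    /* Send my first interior cell left and receive my right ghost,
       then send my last interior cell right and receive my left ghost. */
    MPI_Sendrecv(&cells[1],     1, MPI_DOUBLE, left,  0,
                 &cells[N + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&cells[N],     1, MPI_DOUBLE, right, 1,
                 &cells[0],     1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d: left ghost %.0f, right ghost %.0f\n",
           rank, cells[0], cells[N + 1]);

    MPI_Finalize();
    return 0;
}
```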


Friday Mar 20, 2009

Amazon EC2: More Reserved Than Ever

Back in September, I expressed skepticism that a purely on-demand model for cloud computing would be sufficient for businesses to seriously commit to the cloud model as a way to run their businesses. Apparently, Amazon is an avid reader of the Navel (joke!), because they recently announced a new resource model -- Reserved Instances -- that in large part addresses the issue I raised. Specifically, it is now possible to pay up-front to reserve an instance for use over some period of time. In addition, when the instance is actually used, the rate is lower than for purely on-demand instances.

This hybrid model will appeal to those customers who worry about resource availability as demand for cloud computing resources continues to grow and certainly to those who have developed a business- or mission-critical reliance on access to these remote resources.


Wednesday Mar 18, 2009

Australian Supercomputing: Who's Da BOM?

The Australian press has an article today about a new Sun supercomputer to be installed at the Australian Bureau of Meteorology (BOM.) The new 1.5 TFLOP machine, which will be ten times more powerful than their current system, is said to be the largest in the southern hemisphere. The article is here.


More Free HPC Developer Tools for Solaris and Linux

The Sun Studio team just released the latest version of our HPC developer tools with so many enhancements and additions it's hard to know where to start this blog entry. I suppose with the basics: As usual, all of the software is free. And available for both Solaris and Linux, specifically Solaris, OpenSolaris, RHEL, SuSE, and Ubuntu. Frankly, Sun would like to be your preferred provider for high-performance Fortran, C, and C++ compilers and tools. Given the performance and capabilities we deliver for HPC with Sun Studio, that seems a pretty reasonable goal to me. We think the price has been set correctly to achieve that as well. :-)

I have to admit to being confused by the naming convention for this release, but it goes something like this. The release is an EA (Early Access) version of Sun Studio 12 Update 1 -- the first major update to Sun Studio 12 since it was released in the summer of 2007. Since Sun Studio's latest and greatest bits are released every three months as part of the Express program, this release can also be called Sun Studio Express 3/09. Different names, same bits. Don't worry about it -- just focus on the fact that they make great compilers and tools. :-)

Regardless of what they call it, the release can be downloaded here. Take it for a spin and let the developers know what you think on the forum or file a request for enhancement (RFE) or a bug report here.

For the full list of new features, go here. For my personal list of favorite new features, read on.

  • Full OpenMP 3.0 compiler and tools support. For those not familiar, OpenMP is the industry standard for directives-based threaded application parallelization. Or, the answer to the question, "So how do I use all the cores and threads in my spiffy new multicore processor?" (There's a small example just after this list.)
  • ScaLAPACK 1.8 is now included in the Sun Performance Library! It works with Sun's MPI (Sun HPC ClusterTools), which is based on Open MPI 1.3. The Perflib team has also made significant performance enhancements to BLAS, LAPACK, and the FFT routines, including support for the latest Intel and AMD processors. Nice.
  • MPI performance analysis integrated into the Sun Performance Analyzer. Analyzer has been for years a kick-butt performance tool for single-process applications. It has now been extended to help MPI programmers deal with message-passing related performance problems.
  • Continued, aggressive attention paid to optimizing for the latest SPARC, Intel, and AMD processors. C, C++, and Fortran performance will all benefit from these changes.
  • A new standalone GUI debugger. Go ahead, graduate from printf() and try a real debugger. It won't bite.
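To give a flavor of what "directives-based" means, here is a tiny, generic C example (my own illustration, not taken from the Sun Studio documentation): the only change from serial code is the pragma, and the compiler and runtime take care of creating threads and dividing the work.

```c
/* omp_pi.c -- toy OpenMP example: estimate pi by numerical integration.
 * The only difference from the serial version is the #pragma line.
 * With Sun Studio, something like:  cc -xopenmp -fast omp_pi.c -o omp_pi
 * (flags as I recall them; check the compiler documentation)
 */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const long n = 100000000;
    const double h = 1.0 / (double)n;
    double sum = 0.0;

    /* Split the loop iterations across all available threads and
       combine the per-thread partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * h;
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi ~= %.12f using up to %d threads\n",
           sum * h, omp_get_max_threads());
    return 0;
}
```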

As I mentioned above, full details on these new features and many, many more are all documented on this wiki page. And, again, the bits are here.

Friday Jan 09, 2009

HPC and Virtualization: Oak Ridge Trip Report


Just before Sun's Winter Break, I attended a meeting at Oak Ridge National Laboratory in Tennessee with Stephen Scott, Geoffroy Vallee, Christian Engelmann, Thomas Naughton, and Anand Tikotekar, all of the Systems Research Team (SRT) at ORNL. Attending from Sun were Tim Marsland, Greg Lavender, Rebecca Arney, and myself. The topic was HPC and virtualization, an area the SRT has been exploring for some time and one I've been keen on as well, since it has become clear v12n has much to offer the HPC community. This is my trip report.

I arrived at Logan Airport in Boston early enough on Monday to catch an earlier flight to Dulles, narrowly avoiding the five-hour delay that eventually afflicted my original flight. The flight from Boston to Knoxville via Dulles went smoothly and I arrived without difficulty to a rainy and chilly Tennessee evening. I was thrilled to have made it through Dulles without incident since more often than not I have some kind of travel difficulty when my trips pass through IAD (more on that later.) The 25 mile drive to the Oak Ridge DoubleTree was uneventful.

Oak Ridge is still very much a Lab town from what I could see, much like Los Alamos, but certainly less isolated. Movie reviews in the Oak Ridge Observer are rated with atoms rather than stars. Stephen Scott, who leads the System Research Team (SRT) at ORNL, mentioned that the plot plan for his house is stamped "Top Secret -- Manhattan Project" because the plan shows the degree difference between "ORNL North" and "True North", an artifact of the time when period maps of the area deliberately skewed the position of Oak Ridge to lessen the chance that a map could be used to successfully bomb ORNL targets from the air during the war.

We spent all day Tuesday with Stephen and most of the System Research Team. Tim talked about what Sun is doing with xVM and our overall virtualization strategy and ended with a set of questions that we spent some time discussing. Greg then talked in detail about both Crossbow and InfiniBand, specifically with respect to aspects related to virtualization. We spent the rest of the day hearing about some of the work on resiliency and virtualization being done by the team. See the end of this blog entry for pointers to some of the SRT papers as well as other HPC/virtualization papers I have found to be interesting.

Resiliency isn't something the HPC community has traditionally cared much about. Nodes were thin and cheap. If a node crashed, restart the job, replace the node, use checkpoint-restart if you can. Move on; life on the edge is hard. But the world is changing. Nodes are getting fatter again--more cores, more memory, more IO. Big SMPs in tiny packages with totally different economics from traditional large SMPs. Suddenly there is enough persistent state on a node that people start to care how long their nodes stay up. Capabilities like Fault Management start to look really interesting, especially if you are a commercial HPC customer using HPC in production.

In addition, clusters are getting larger. Much larger, even with fatter nodes. Which means more frequent hardware failures. Bad news for MPI, the world's most brittle programming model. Certainly, some more modern programming models would be welcome, but in the meantime what can be done to keep these jobs running longer in the presence of continual hardware failures? This is one promise of virtualization. And one reason why a big lab like ORNL is looking seriously at virtualization technologies for HPC.

Live migration -- the ability to shift running OS instances from one node to another -- is particularly interesting from a resiliency perspective. Linking live migration to a capable fault management facility (see, for example, what Sun has been doing in this area) could allow jobs to avoid interruption due to an impending node failure. Research by the SRT (see the Proactive Fault Tolerance paper, below) and others has shown this is a viable approach for single-node jobs and also for increasing the survivability of MPI applications in the presence of node failures. Admittedly, the current prototype depends on Xen TCP tricks to handle MPI traffic interruption and continuation, but with sufficient work to virtualize the InfiniBand fabric, this technique could be extended to that realm as well. In addition, the use of an RDMA-enabled interconnect can itself greatly increase the speed of live migration as is demonstrated in the last paper listed in the reference section below.

We discussed other benefits of virtualization. Among them, the use of multiple virtual machines per physical node to simulate a much larger cluster for demonstrating an application's basic scaling capabilities in advance of being allowed access to a real, full-scale (and expensive) compute resource. Such pre-testing becomes very important in situations in which large user populations are vying for access to relatively scarce, large-scale, centralized research resources.

Geoffroy also spoke about "adapting systems to applications, not applications to systems" by which he meant that virtualization allows an application user to bundle their application into a virtual machine instance with any other required software, regardless of the "supported" software environment available on a site's compute resource. Being able to run applications using either old versions of operating systems or perhaps operating systems with which a site's administrative staff has no experience, does truly allow the application provider to adapt the system to their application without placing an additional administrative burden on a site's operational staff. Of course, this does push the burden of creating a correct configuration onto the application provider, but the freedom and flexibility should be welcomed by those who need it. Those who don't could presumably bundle their application into a "standard" guest OS instance. This is completely analogous to the use and customization of Amazon Machine Instances (AMIs) on the Amazon Elastic Compute Cloud (EC2) infrastructure.

Observability was another simpatico area of discussion. DTrace has taken low-cost, fine-grained observability to new heights (new depths, actually). Similarly, SRT is looking at how one might add dynamic instrumentation at the hypervisor level to offer a clearer view of where overhead is occurring within a virtualized environment to promote user understanding and also offer a debugging capability for developers.

A few final tidbits to capture before closing. Several other research efforts are looking at HPC and virtualization, among them V3VEE (University of New Mexico and Northwestern University) and XtreemOS (a bit of a different approach to virtualization for HPC and Grids). SRT is also working on a virtualized version of OSCAR called OSCAR-V.

The Dulles Vortex of Bad Travel was more successful on my way home. My flight from Knoxville was delayed with an unexplained mechanical problem that could not be fixed in Knoxville, requiring a new plane to be flown from St. Louis. I arrived very late into Dulles, about 10 minutes before my connection to Boston was due to leave from the other end of the terminal. I ran to the gate, arriving two minutes before the flight was scheduled to depart and it was already gone-- no sign of the gate agents or the plane. Spent the night at an airport hotel and flew home first thing the next morning. Dulles had struck again--this was at least the third time I've had problems like this when passing through IAD. I have colleagues that refuse to travel with me through this airport. With good reason, apparently.

Reading list:

Proactive Fault Tolerance for HPC with Xen Virtualization, Nagarajan, Mueller, Engelmann, Scott

The Impact of Paravirtualized Memory Hierarchy on Linear Algebra Computational Kernels and Software, Youseff, Seymour, You, Dongarra, Wolski

Performance Implications of Virtualizing Multicore Cluster Machines, Ranadive, Kesavan, Gavrilovska, Schwan

High Performance Virtual Machine Migration with RDMA over Modern Interconnects, Huang, Gao, Liu, Panda


Thursday Dec 18, 2008

Beta Testers Wanted: Sun Grid Engine 6.2 Update 2

A busy day for fresh HPC bits, apparently...

The Sun Grid Engine team is looking for experienced SGE users interested in taking their latest Update release for a test drive. The Update includes bug fixes, but also some new features as well. Two features in particular caught my eye: a new GUI-based installer and optimizations to support very large Linux clusters (think TACC Ranger.)

Full details are below in the official call for beta testers. The beta program will run until February 2nd, 2009. Look no further for something to do during the upcoming holiday season. :-)


Sun Grid Engine 6.2 Update 2 Beta (SGE 6.2u2beta) Program

This README contains important information about the targeted audience of this beta release, new functionality, the duration of this SGE beta program, and how to get support and provide feedback.

  1. Audience of this beta program
  2. Duration of the beta program and release date
  3. New functionality delivered with this release
  4. Installing SGE 6.2u2beta in parallel to a production cluster
  5. Beta program feedback and evaluation support
  1. Audience of this beta program

    This Beta is intended for users who already have experience with the Sun Grid Engine software or DRM (Distributed Resource Management) systems of other vendors. This beta adds new features to the SGE 6.2 software. Users new to DRM systems or users who are seeking a production ready release should use the Sun Grid Engine 6.2 Update 1 (SGE 6.2u1) release which is available from here.

    For the shipping SGE 6.2u1 release we are offering free 30-day email evaluation support.

  2. Duration of the Beta program and release date

    This beta program lasts until Monday, February 2, 2009. The final release of Sun Grid Engine 6.2 Update 2 is planned for March 2009.

  3. New functionality delivered with this release

    Sun Grid Engine 6.2 Update 2 (SGE 6.2u2) is a feature update release for SGE 6.2 which adds the following new functionality to the product:

    • a GUI-based installer that helps new users install the software more easily. It complements the existing CLI-based installation routine.
    • new support for 32-bit and 64-bit editions of Microsoft Windows Vista (Enterprise and Ultimate Edition), Windows Server 2003R2 and Windows Server 2008.
    • a client- and server-side Job Submission Verifier (JSV) allows an administrator to control, enforce, and adjust job requests, including job rejection. JSV scripts can be written in any scripting language, e.g. Unix shells, Perl, or TCL.
    • consumable resource attributes can now be requested per job. This makes resource requests for parallel jobs much easier to define, especially when using slot ranges.
    • on Linux, the use of the 'jemalloc' malloc library improves performance and reduces memory requirements.
    • the use of the poll(2) system call instead of select(2) on Linux systems improves scalability of the qmaster in extremely large clusters.
  4. Installing SGE 6.2u2 in parallel to a production cluster

    As with every SGE release, it is safe to install multiple Grid Engine clusters running multiple versions in parallel, provided all of the following settings are different:

    • directory
    • ports (environment variables) for qmaster and execution daemons
    • unique "cluster name" - from SGE 6.2 the cluster name is appended to the name of the system wide startup scripts
    • group id range ("gid_range")

    Starting with SGE 6.2, the Accounting and Reporting Console (ARCo) accepts reporting data from multiple Sun Grid Engine clusters. If you follow the installation directions for ARCo and use a unique cluster name for this beta release, there is no risk of losing or mixing reporting data from multiple SGE clusters.

  5. Beta Program Feedback and Evaluation Support

    We welcome your feedback and questions on this Beta. We ask you to restrict your questions to this Beta release only. If you need general evaluation support for the Sun Grid Engine software, please subscribe to the free evaluation support by downloading and using the shipping version of SGE 6.2 Update 1.

    The following email aliases are available:


Fresh Bits: InfiniBand Updates for Solaris 10

Fresh InfiniBand bits for Solaris 10 Update 6 have just been announced by the IB Engineering Team:

The Sun InfiniBand Team is pleased to announce the availability of the Solaris InfiniBand Updates 2.1. This comprises updates to the previously available Solaris InfiniBand Updates 2. InfiniBand Updates 2 has been removed from the current download pages. (Previous versions of InfiniBand Updates need to be carefully matched to the OS Update versions that they apply to.)

The primary deliverable of Solaris InfiniBand Updates 2.1 is a set of updates of the Solaris driver supporting HCAs based on Mellanox's 4th generation silicon, ConnectX. These updates include the fixes that have been added to the driver since its original delivery, and functionality in this driver is equivalent to what was delivered as part of OpenSolaris 2008.11. In addition, there continues to be a cxflash utility that allows Solaris users to update firmware on the ConnectX HCAs. This utility is only to be used for ConnectX HCAs.

Other updates include:

  • uDAPL InfiniBand service provider library for Solaris (compatible with Sun HPC ClusterTools MPI)
  • Tavor and Arbel/memfree drivers that are compatible with new interfaces in the uDAPL library
  • Documentation (README and man pages)
  • A renamed flash utility for Tavor-, Arbel memfull-, Arbel memfree-, and Sinai-based HCAs. Instead of "fwflash", this utility is renamed "ihflash" to avoid possible namespace conflicts with a general firmware flashing utility in Solaris

All are compatible with Solaris 10 10/08 (Solaris 10, Update 6), for both SPARC and X86.

You can download the package from the "Sun Downloads" A-Z page by visiting http://www.sun.com/download/index.jsp?tab=2 and scrolling down or searching for the link for "Solaris InfiniBand (IB) Updates 2.1" or alternatively use this link.

Please read the README before installing the updates. This contains both installation instructions and other information you will need to know before running this product.

Please note again that this Update package is for use on Solaris 10 10/08 (Solaris 10, Update 6) only. A version of the Hermon driver has also been integrated into Update 7 and will be available with that Update's release.

Congratulations to the Solaris IB Hermon project team and the extended IB team for their efforts in making this product available!


Thursday Nov 20, 2008

Random Notes from the IDC HPC Breakfast Briefing

I went to the IDC HPC Breakfast briefing yesterday morning because they are usually pretty interesting. This one felt mostly like a rehash of earlier material and was somewhat disappointing as a result. I did hear a few things I thought were worth passing on and here they are.

I made the above graph based on a table that was flashed quickly on the screen during the briefing. If N was specified, I didn't catch it. It is amazing (depressing?) to see how few ISV applications actually scale beyond 32 processors, even after all these years. I showed the graph to Dave Teszler, US Practice Manager for HPC, and he confirmed that he sees lots of commercial HPC customers who buy large clusters, but who really use them as throughput machines where the unit of throughput might be a 32-process job or smaller. In other words, just because a customer buys a 1024-node cluster and is known to use MPI, one cannot assume they are running 1024-process MPI jobs as one can with other kinds of customers like the National Labs or other large supercomputing centers.

Other notes jotted during the meeting:

  • Over the last four years HPC has shown a yearly growth rate of 19%
  • Blades are making inroads into all segments, driven largely by concerns about power, cooling, and density
  • HPC is growing partly because "live engineering" and "live science" costs continue to escalate, making simulation much more effective for delivering faster "time to solution."
  • Global competitiveness continues to drive HPC growth by offering businesses ways to differentiate through better R&D and product design using HPC techniques
  • x86 was described as being a weak architecture for HPC due to the very wide range of application requirements seen in HPC. This, along with poor delivered performance on multicore, is causing customers to buy more processors for technical computing than they would otherwise.
  • The power issue is not the same for enterprise and HPC. For enterprise, the challenge is how to reduce power consumption, whereas for HPC it is a constraint on growth.
  • Software is still seen as the #1 roadblock for HPC
  • Better management software is needed because HPC clusters are hard to set up and operate and because new buyers need "ease of everything."
  • Current economic uncertainty has delayed IDC forecasting, but IDC does see real weakness in CAE. By contrast, Oil/Gas, Climate/Weather, University, and DCC (Digital Content Creation) all still appear healthy. The outlook for Finance, Government, Bio/Life, and EDA is unknown at this point.

Bjorn to be Wild! Fun at Supercomputing '08

It's been crazy-busy here at the Sun booth at Supercomputing '08 in Austin, but we do get to have some fun as well. This is Bjorn Andersson, Director of HPC for Sun. He is Bjorn to be Wild.

This photo reminded my friend Kai of Fjorg 2008. Worth a look.



Wednesday Nov 19, 2008

Sun Supercomputing: Red Sky at Night, Sandia's Delight

Yesterday we officially announced that Sun will be supplying Sandia National Laboratories its next generation clustered supercomputer, named Red Sky. Douglas Doerfler from the Scalable Architectures Department at Sandia spoke at the Sun HPC Consortium Meeting here in Austin and gave an overview of the system to assembled customers and Sun employees. As Douglas noted, this was the world premiere Red Sky presentation.

The system is slated to replace Thunderbird and other aging cluster resources at Sandia. It is a Sun Constellation system using the Sun Blade 6000 blade architecture, but with some differences. First, the system will use a new diskless two-node Intel blade to double the density of the overall system. The initial system will deliver 160 TFLOPs peak performance in a partially populated configuration with expansion available to 300 TFLOPs.

Second, the interconnect topology is a 3D torus rather than a fat-tree. The torus will support Sandia's secure red/black switching requirement with a middle "swing" section that can be moved to either the red or black side of the machine as needed with the required air gap.

Primary software components include CentOS, Open MPI, OpenSM, and Lash for deadlock-free routing across the torus. The filesystem will be based on Lustre. oneSIS will be used for diskless cluster management, including booting over InfiniBand.


Monday Nov 17, 2008

How to Observe Performance of OpenMP Codes

A great benefit of the OpenMP standard is that it allows a programmer to specify parallelization strategies, leaving the implementation details to the compiler and its runtime system. A downside of this is that the programmer loses some understanding and visibility into what is actually happening, making it difficult to find and fix performance problems. This is precisely the issue discussed by Professor Barbara Chapman from the University of Houston during her talk at the Sun HPC Consortium Meeting here in Austin today.

Prof. Chapman briefly described the work she has been doing using the OpenUH compiler as a research base. The older POMP project had used source-level instrumentation and source-to-source translation to produce codes that allowed some access to performance information, but the approach wasn't very popular. Instead, instrumentation has now been directly implemented in the compiler and inserted much later in the compilation process. This allowed the instrumentation to be both improved and also reduced to a more selective set of probe points, greatly reducing the overhead of instrumentation.

Professor Chapman touched on a few application examples in which this selective implementation approach has resulted in significant performance improvements with little work needed to pinpoint the problem areas within the code. In one example, application performance was easily increased by between 20 and 25% over a range of problem sizes. In another case involving an untuned OpenMP code, the instrumentation quickly pointed to incorrect usage of shared arrays and initialization problems related to first-touch memory allocation.
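For readers who haven't run into the first-touch issue: on NUMA systems, a memory page is typically placed on the node of the thread that first writes it, so a serial initialization loop parks all the data next to one socket and every other thread then pays remote-memory latency. A generic sketch of the usual remedy (my own illustration, not code from the talk):

```c
/* first_touch.c -- sketch of NUMA-friendly "first touch" initialization.
 * Pages usually land on the memory node of the thread that first writes
 * them, so initialize data with the same loop and schedule that the
 * compute phase will later use.  Generic illustration only.
 */
#include <stdlib.h>

#define N 10000000L

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    if (!a || !b) return 1;

    /* A serial initialization loop here would place every page near the
       initializing thread's socket.  Instead, first-touch in parallel
       with the same static schedule as the compute loop below. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++) {
        a[i] = 0.0;
        b[i] = (double)i;
    }

    /* Compute loop: each thread now mostly touches pages it initialized,
       so most memory accesses stay local. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 2.0 * b[i];

    free(a);
    free(b);
    return 0;
}
```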

A second thrust of this research work is to take advantage of the fact that the OpenMP runtime layer is basically in charge as the application executes. Because it controls execution, it can also be used to gather runtime performance information as part of a performance monitoring system.

Both of these techniques contribute to giving programmers tools to performance-debug their codes at the semantic level at which the codes were originally written, which is critically important as more and more HPC (and other) users attempt to extract good parallel performance from existing and future multi-core chips.


Project Thebes Update from Georgetown University

The big news from Arnie Miles, Senior Systems Architect at Georgetown University, is that the Thebes Middleware Consortium has moved from concept to code with a new prototype of a service provider based on DRMAA that mediates access to an unmodified Sun Grid Engine instance from a small Java-based client app.

In addition, the Thebes Consortium has just released a first draft of an XML schema that attempts to create a language that harmonizes how jobs and resources are described in a resource-sharing environment and sits above the specific approaches taken by existing systems (Ganglia, Sun Grid Engine, PBS, Condor, LSF, etc.). The proposal will soon be submitted to OGF for consideration.
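For readers who haven't seen DRMAA before, here is a minimal sketch of what job submission through it looks like, using the DRMAA 1.0 C binding that ships with Sun Grid Engine. The Thebes prototype itself is Java-based, so treat this purely as a flavor-of-the-API illustration; the build line and error handling are simplified assumptions.

```c
/* drmaa_submit.c -- minimal sketch of job submission via the DRMAA C
 * binding (the Thebes client is Java; this just shows the DRMAA style).
 * Link against SGE's libdrmaa, e.g.:
 *   cc drmaa_submit.c -ldrmaa -o drmaa_submit   (paths are site-specific)
 */
#include <stdio.h>
#include "drmaa.h"

int main(void)
{
    char err[DRMAA_ERROR_STRING_BUFFER];
    char jobid[DRMAA_JOBNAME_BUFFER];
    drmaa_job_template_t *jt = NULL;

    /* Start a DRMAA session with the default (local) SGE cluster. */
    if (drmaa_init(NULL, err, sizeof(err)) != DRMAA_ERRNO_SUCCESS) {
        fprintf(stderr, "drmaa_init failed: %s\n", err);
        return 1;
    }

    /* Describe the job: run "/bin/sleep 60". */
    drmaa_allocate_job_template(&jt, err, sizeof(err));
    drmaa_set_attribute(jt, DRMAA_REMOTE_COMMAND, "/bin/sleep",
                        err, sizeof(err));
    const char *args[] = { "60", NULL };
    drmaa_set_vector_attribute(jt, DRMAA_V_ARGV, args, err, sizeof(err));

    /* Submit and report the job id assigned by the DRM system. */
    if (drmaa_run_job(jobid, sizeof(jobid), jt, err, sizeof(err))
            == DRMAA_ERRNO_SUCCESS)
        printf("submitted job %s\n", jobid);
    else
        fprintf(stderr, "submit failed: %s\n", err);

    drmaa_delete_job_template(jt, err, sizeof(err));
    drmaa_exit(err, sizeof(err));
    return 0;
}
```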

The next nut to crack is the definition of a resource discovery network, which is under development now. The team hopes to be able to share their work on this at ISC in Hamburg in June of next year.

