Wednesday Nov 19, 2008

Sun Supercomputing: Red Sky at Night, Sandia's Delight

Yesterday we officially announced that Sun will be supplying Sandia National Laboratories its next generation clustered supercomputer, named Red Sky. Douglas Doerfler from the Scalable Architectures Department at Sandia spoke at the Sun HPC Consortium Meeting here in Austin and gave an overview of the system to assembled customers and Sun employees. As Douglas noted, this was the world premiere Red Sky presentation.

The system is slated to replace Thunderbird and other aging cluster resources at Sandia. It is a Sun Constellation system using the Sun Blade 6000 blade architecture, but with some differences. First, the system will use a new diskless two-node Intel blade to double the density of the overall system. The initial system will deliver 160 TFLOPs peak performance in a partially populated configuration with expansion available to 300 TFLOPs.

Second, the interconnect topology is a 3D torus rather than a fat-tree. The torus will support Sandia's secure red/black switching requirement with a middle "swing" section that can be moved to either the red or black side of the machine as needed with the required air gap.

Primary software components include CentOS, Open MPI, OpenSM, and LASH for deadlock-free routing across the torus. The filesystem will be based on Lustre. oneSIS will be used for diskless cluster management, including booting over InfiniBand.


Monday Nov 17, 2008

So, What About Java for HPC?

About ten years ago the HPC community attempted to embrace Java as a viable approach for high performance computing via a forum called Java Grande. That effort ultimately failed for various reasons, one of which was the difficulty of achieving acceptable performance for interesting HPC workloads. Today at the HPC Consortium Meeting here in Austin, Professor Denis Caromel from the University of Nice made the case that Java is ready now for serious HPC use. He described the primary features of ProActive Java, a joint project of INRIA, CNRS, and the University of Nice, and provided some performance comparisons against Fortran/MPI benchmarks.

As background, Denis explained that the goal of ProActive is to enable parallel, distributed, and multi-core solutions with Java using one unified framework. Specifically, the approach should scale from a single, multi-core node to a large, enterprise-wide grid environment.

ProActive embraces three primary areas: Programming, Optimizing, and Scheduling. The programming approach is based on the use of active objects to create a dataflow-like asynchronous communication framework in which objects can be instantiated either in separate JVMs or within the same address space in the case of a multi-core node. Calls on these objects execute asynchronously on the receiver side and are represented immediately on the sender side by "future objects," which are populated when the remote computation completes. Accessing a future whose contents have not yet arrived causes a "wait by necessity," which implements the dataflow synchronization mechanism.
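To make the future/wait-by-necessity idea concrete, here is a minimal sketch in plain Java using java.util.concurrent rather than the ProActive API itself (the class and method names below are purely illustrative): the call returns immediately, the caller keeps working, and only touching the not-yet-resolved result blocks.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FutureSketch {
    // Stand-in for work an active object would perform in another JVM or thread.
    static double expensiveComputation(int n) {
        double sum = 0.0;
        for (int i = 1; i <= n; i++) sum += Math.sqrt(i);
        return sum;
    }

    public static void main(String[] args) {
        ExecutorService worker = Executors.newSingleThreadExecutor();

        // The "call" returns immediately; the result is a future that will be
        // populated asynchronously when the computation completes.
        CompletableFuture<Double> result =
            CompletableFuture.supplyAsync(() -> expensiveComputation(10_000_000), worker);

        // The caller continues with other work while the result is in flight.
        System.out.println("Call issued; doing other work...");

        // Touching the future's value before it has arrived blocks the caller:
        // the plain-Java analogue of ProActive's "wait by necessity".
        System.out.println("Result: " + result.join());

        worker.shutdown();
    }
}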

ProActive also supports a SPMD programming style with many of the same primitives found in MPI -- e.g., barriers, broadcast, reductions, scatter-gather, etc.
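For readers who think in MPI terms, a toy SPMD sketch in plain Java threads (again illustrative only, not ProActive code) shows the same flavor: each "rank" contributes a partial result to a reduction and then waits at a barrier before the combined value is reported.

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.DoubleAdder;

public class SpmdSketch {
    public static void main(String[] args) {
        final int ranks = 4;                      // number of SPMD "processes" (threads here)
        final DoubleAdder reduction = new DoubleAdder();
        // The barrier action runs once per phase, after every rank has arrived.
        final CyclicBarrier barrier = new CyclicBarrier(ranks,
                () -> System.out.println("Reduced sum: " + reduction.sum()));

        for (int rank = 0; rank < ranks; rank++) {
            final int r = rank;
            new Thread(() -> {
                double partial = (r + 1) * 10.0;  // each rank computes a partial result
                reduction.add(partial);           // contribute to the reduction
                try {
                    barrier.await();              // barrier: all ranks synchronize here
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).start();
        }
    }
}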

Results for several NAS parallel benchmarks were presented, in particular CG, MG, and EP. On CG, the ProActive version performed at essentially the same speed as the Fortran/MPI version across runs ranging from 1 to 32 processes. Fortran did better on MG, which seems to relate to issues around large memory footprints that the ProActive team is looking at in more detail. With EP, Java was faster or significantly faster in virtually all cases.

Work continues to lower messaging latency, to optimize in-node data transfers by sending pointers rather than data, and to reduce message-size overhead.

When asked how ProActive compares to X10, Denis pointed out that while X10 shares some concepts with ProActive, X10 is a new language, whereas ProActive is designed to run on standard JVMs and to enable the use of standard Java for HPC.

A full technical paper about ProActive in PDF format is available here.


A Customer's View of Sun's HPC Consortium Meeting

One of our customers, Gregg TeHennepe from Jackson Laboratory, has been blogging about his attendance at Sun's HPC Consortium meeting here in Austin. For his perspectives and for some excellent photos of the Ranger supercomputer at TACC, check out his blog, Mental Burdocks.

Sunday Nov 16, 2008

What's Been Happening with Sun's Biggest Supercomputer?

Karl Schulz, Associate Director for HPC at the Texas Advanced Computing Center, gave an update on Ranger, including current usage statistics as well as some of the interesting technical issues they've confronted since bringing the system online last year.

Karl started with a system overview, which I will skip in favor of pointing to an earlier Ranger blog entry that describes the configuration in detail. Note, however, that Ranger is now running with 2.3 GHz Barcelona processors.

As of November 2008, Ranger has more than 1500 allocated users who represent more than 400 individual research projects. Over 300K jobs have been run so far on the system, consuming a total of 220 million CPU hours.

When TACC brought their 900 TeraByte Lustre filesystem online, they wondered how long it would take to fill it. It took six months. Just six months to generate 900 TeraBytes of data. Not surprising, I guess, when you hear that users generate between 5 and 20 TeraBytes of data per day on Ranger. Now that they've turned on their file-purging policy, files reside on the filesystem for about 30 days before they are purged, which is quite good as supercomputing centers go.

Here are some of the problems Karl described.

OS jitter. For those not familiar, this phrase refers to a sometimes-significant performance degradation seen by very large MPI jobs, caused by unrelated performance perturbations on individual nodes (stray daemons, interrupts, and the like) that knock the participating processes out of step. Essentially, some nodes fall slightly behind, which slows down MPI synchronization operations, and that in turn can have a large effect on overall application performance: the worse the loss of synchronization, the longer certain MPI operations take to complete, and the larger the overall impact.
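A back-of-the-envelope model makes the amplification clear. In the toy Java sketch below (all timings and probabilities are invented for illustration), each synchronizing step completes only when the slowest node arrives, so a perturbation that is rare on any one node is paid on a large fraction of iterations once hundreds of nodes participate.

import java.util.Random;

// Toy model of OS jitter: a synchronizing operation completes only when the
// slowest node arrives, so rare per-node delays compound over many iterations.
public class JitterSketch {
    public static void main(String[] args) {
        int nodes = 512, iterations = 100_000;
        double computeUs = 50.0;     // per-iteration compute time in microseconds (assumed)
        double noiseUs = 500.0;      // cost of a stray daemon wakeup (assumed)
        double noiseProb = 0.001;    // chance a given node is hit in a given iteration (assumed)
        Random rng = new Random(42);

        double ideal = 0.0, actual = 0.0;
        for (int it = 0; it < iterations; it++) {
            double slowest = computeUs;
            for (int n = 0; n < nodes; n++) {
                double t = computeUs + (rng.nextDouble() < noiseProb ? noiseUs : 0.0);
                slowest = Math.max(slowest, t);  // everyone waits for the slowest node
            }
            ideal += computeUs;    // jitter-free time
            actual += slowest;     // time actually paid
        }
        System.out.printf("ideal: %.1f s, with jitter: %.1f s (%.0f%% slower)%n",
                ideal / 1e6, actual / 1e6, 100.0 * (actual - ideal) / ideal);
    }
}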

A user reported severe performance problems with a somewhat unusual application that performed about 100K MPI_Allreduce operations with only a small amount of intervening computation between each one. When running on 8K cores, a very large performance difference was seen between 15 processes per node and 16 processes per node, with the 16-process-per-node runs showing drastically lower performance.

As it turned out, the MPI implementation was not at fault. Instead, the issue was traced primarily to two causes: first, an IPMI daemon that was running on each node; and second, another daemon that was gathering fine-grained health-monitoring information to be fed into Sun Grid Engine. Once the IPMI daemon was disabled and some performance optimization work was done on the health daemon, the 15- and 16-process runs showed almost identical run times.

Karl also showed an example of how NUMA effects at scale can cause significant performance issues. In particular, it isn't sufficient to deal with processor affinity without also paying attention to memory affinity. Off-socket memory access can kill application performance in some cases, as in the CFD case shown during the talk.


Using SPARC and Solaris for HPC: More of this, please!

Ken Edgecombe, Executive Director of HPCVL, spoke today at the HPC Consortium Meeting in Austin about experiences with SPARC and HPC at his facility.

HPCVL has a massive amount of Sun gear, the newest of which includes a cluster of eight Sun SPARC Enterprise M9000 nodes, our largest SMP systems. Each node has 64 quad-core, dual-threaded SPARC64 processors and includes 2TB of RAM. With a total of 512 threads per node, the cluster has a peak performance of 20.5 TFLOPs. As you'd expect, these systems offer excellent performance for problems with large memory footprints or for those requiring extremely high bandwidths and low latencies between communicating processors.

In addition to their M9000 cluster, HPCVL has another new resource that consists of 78 Sun SPARC Enterprise T5140 (Maramba) nodes, each with two eight-core Niagara2+ processors (a.k.a. UltraSPARC T2plus). With eight threads per core, these systems make almost 10,000 hardware threads available to users at HPCVL.

Ken described some of the challenges of deploying the T5140 nodes in his HPC environment. The biggest issue is that researchers invariably first try running a serial job on these systems and then report that they are very disappointed with the resulting performance. No surprise, since these systems run at less than 1.5 GHz while competing processors run at over twice that rate. As Ken emphasized several times, the key educational issue is to re-orient users to thinking less about single-threaded performance and more about "getting more work done." In other words, throughput computing. For jobs that can scale to take advantage of more threads, excellent overall performance can be achieved by consuming more (slower) threads to complete the job in a competitive time. This works if one can either extract more parallelism from a single application or run multiple instances of applications to make efficient use of the threads within these CMT systems. With 128 threads per node, there is a lot of parallelism available for getting work done.
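In code terms, the shift Ken is describing is from "make one thread fast" to "keep every hardware thread fed." A minimal, hypothetical Java sketch of that throughput pattern (the workload and numbers are illustrative) simply sizes a pool to the available hardware threads and keeps it full of independent work items:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Throughput-style use of a CMT node: keep every hardware thread busy with
// independent work items rather than chasing single-thread speed.
public class ThroughputSketch {
    public static void main(String[] args) throws InterruptedException {
        int hwThreads = Runtime.getRuntime().availableProcessors(); // e.g. 128 on a T5140
        ExecutorService pool = Executors.newFixedThreadPool(hwThreads);

        int jobs = 10_000;  // many small, independent work items (assumed workload)
        for (int i = 0; i < jobs; i++) {
            pool.submit(() -> {
                // Stand-in for a modest serial task; each one is slow on its own,
                // but thousands complete per unit time across the whole node.
                double acc = 0.0;
                for (int k = 1; k <= 1_000_000; k++) acc += 1.0 / k;
                return acc;
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        System.out.println("All " + jobs + " jobs finished.");
    }
}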

As he closed, Ken reminded attendees of the 2009 High Performance Computing Symposium which will be held June 14-17 in Kingston, Ontario at HPCVL.

Saturday Nov 15, 2008

Gregg TeHennepe: Musician, Blogger, Senior Manager and Research Liaison -- Customer!

When the band at this evening's HPC Consortium dinner event invited guests to join them on stage to do some singing, I didn't think anything of it...until some time later when I heard a new voice and turned to see someone up there who sounded good and looked like he was having a good time. He was wearing a conference badge, but I couldn't see whether he was a customer or Sun employee.

Since I was taking photos, it was easy to snap a shot and then zoom in on his badge to see who it was. Imagine my surprise as the name came into focus and I saw that it was Gregg TeHennepe, one of our customers from the Jackson Laboratory, where he is a senior manager and research liaison. I was surprised because I hadn't recognized Gregg, even though I had eaten breakfast with him this morning and talked with him several times during the day about his plan to blog the HPC Consortium meeting, which so far as I know marks the first time a customer has blogged this event.

My surprise continued, however. When I googled Gregg just now to remind myself of the name of his blog, I found he is a member of Blue Northern, a five-piece band from Maine that plays traditional, original, and contemporary acoustic music. So, yeah. I guess he does sound good. :-)

Gregg's blog is Mental Burdocks. I'll save you the trip to the dictionary and tell you that a burdock is one of those plants with hook-bearing flowers that get stuck on animal coats (and other kinds of coats) for seed dispersal.


Monday Nov 03, 2008

Sun HPC Consortium: Austin, Texas

Sun always holds an HPC Consortium event for customers at both the Supercomputing and International Supercomputing conferences, and this year is no exception. We had an excellent program in Dresden at ISC08 and have a strong agenda lined up for our meeting in Austin at SC08. The Consortium will begin at 1pm on Saturday, November 15th at the Hilton Airport Austin and end at 1pm on Monday, the 17th.

As usual, the Consortium meeting will be packed with talks by Sun technical experts, by customers sharing their Sun HPC experiences and by key HPC partners. The full agenda can be seen here. This event is consistently rated very highly by customer attendees and it is not too late to register.

If you are a Sun HPC customer interested in meeting Sun technical experts and fellow customers in a collaborative, informative, and fun atmosphere, please join us in Austin for the upcoming Sun HPC Consortium meeting.
