Monday Nov 17, 2008

Dealing with Data Proliferation at Clemson

Jim Pepin, CTO of Clemson University, spoke today at the HPC Consortium Meeting about the challenges and problems created by emerging technology and social trends, as seen through the lens of a university environment.

As a preamble, Jim noted that between 1970 and now, increases in compute and storage capabilities have pretty much kept pace with each other. Networking bandwidth, however, has lagged by about two orders of magnitude. This has a variety of ramifications for local/centralized data storage decisions (or constraints).

In many ways, storage is moving closer to end-users. Examples include personal storage like iPods, phones, and local NAS boxes, as well as more research-oriented data collection efforts related to the proliferation of new sensors and instrumentation. There is data everywhere, in vast quantities, widely distributed across a typical university environment.

Particular issues of concern at Clemson include how to back up these distributed and rapidly-growing pools of storage, how to handle security, how to protect data while still being able to open networks, and how to deal with a wide diversity of systems and data-generating instruments.


So, What About Java for HPC?

About ten years ago the HPC community attempted to embrace Java as a viable approach for high performance computing via a forum called Java Grande. That effort ultimately failed for various reasons, one of which was the difficulty of achieving acceptable performance for interesting HPC workloads. Today at the HPC Consortium Meeting here in Austin, Professor Denis Caromel from the University of Nice made the case that Java is now ready for serious HPC use. He described the primary features of ProActive Java, a joint project of INRIA, CNRS, and the University of Nice, and provided some performance comparisons against Fortran/MPI benchmarks.

As background, Denis explained that the goal of ProActive is to enable parallel, distributed, and multi-core solutions with Java using one unified framework. Specifically, the approach should scale from a single, multi-core node to a large, enterprise-wide grid environment.

ProActive embraces three primary areas: Programming, Optimizing, and Scheduling. The programming approach is based on the use of active objects to create a dataflow-like asynchronous communication framework in which objects can be instantiated either in separate JVMs or within the same address space in the case of a multi-core node. Objects are instantiated asynchronously on the receiver side and represented immediately on the sender side by "future objects," which are populated asynchronously when the remote computation completes. Accessing a future object whose contents have not yet arrived causes a "wait by necessity," which implements the dataflow synchronization mechanism.
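ProActive's actual API differs (active objects are created through the ProActive runtime), but the core idea of a call that returns a future immediately and blocks only on first use can be sketched in plain Java with `CompletableFuture`. Everything below is illustrative, not ProActive code:

```java
import java.util.concurrent.CompletableFuture;

public class WaitByNecessityDemo {
    // Stand-in for an "active object" method call: the caller gets a
    // future back immediately while the computation runs elsewhere.
    static CompletableFuture<Double> asyncCompute(double x) {
        return CompletableFuture.supplyAsync(() -> {
            // simulates a remote computation on the receiver side
            return Math.sqrt(x);
        });
    }

    public static void main(String[] args) {
        CompletableFuture<Double> future = asyncCompute(16.0); // returns at once
        // ... caller continues with other useful work here ...
        // "Wait by necessity": the first actual use of the result blocks
        double result = future.join();
        System.out.println(result); // prints 4.0
    }
}
```

The dataflow flavor comes from the fact that blocking happens only at the point of use, not at the point of the call.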

ProActive also supports a SPMD programming style with many of the same primitives found in MPI -- e.g., barriers, broadcast, reductions, scatter-gather, etc.

Results for several NAS parallel benchmarks were presented, in particular CG, MG, and EP. On CG, the ProActive version performed at essentially the same speed as the Fortran/MPI version over a range of problem sizes from 1-32 processes. Fortran did better on MG, which seems related to large memory footprints; the ProActive team is investigating this in more detail. With EP, Java was faster or significantly faster in virtually all cases.

Work continues to lower messaging latency, to optimize in-node data transfers by sending pointers rather than data, and to reduce message-size overhead.

When asked how ProActive compares to X10, Denis pointed out that while X10 shares some concepts with ProActive, X10 is a new language, whereas ProActive is designed to run on standard JVMs and to enable the use of standard Java for HPC.

A full technical paper about ProActive in PDF format is available here.


A Customer's View of Sun's HPC Consortium Meeting

One of our customers, Gregg TeHennepe from Jackson Laboratory, has been blogging about his attendance at Sun's HPC Consortium meeting here in Austin. For his perspectives and for some excellent photos of the Ranger supercomputer at TACC, check out his blog, Mental Burdocks.

Sunday Nov 16, 2008

If You Doubt the Utility of GPUs for HPC, Read this

Professor Satoshi Matsuoka from the Tokyo Institute of Technology gave a really excellent talk this afternoon about using GPUs for HPC at the HPC Consortium Meeting here in Austin.

As you may know, the Tokyo Institute of Technology is the home of TSUBAME, the largest supercomputer in Asia. It is an InfiniBand cluster of 648 Sun Fire x4600 compute nodes, many with ClearSpeed accelerator cards installed.

The desire is to continue to scale TSUBAME into a petascale computing resource over time. However, power is a huge problem at the site. The machine is responsible for roughly 10% of the overall power consumption of the Institute and therefore they cannot expect their power budget to grow over time. The primary question, then, is how to add significant compute capacity to the machine while working within a constant power budget.

It was clear from their analysis that conventional CPUs would not allow them to reach their performance goals while also satisfying the no-growth power constraint. GPUs--graphics processing units like those made by nVidia--looked appealing in that they claim extremely high floating-point capabilities and deliver this performance at a much better performance/watt ratio than conventional CPUs. The question, though, is whether GPUs can be used to significantly accelerate important classes of HPC computations or whether they are perhaps too specialized for inclusion in a general-purpose compute resource like TSUBAME. Professor Matsuoka's talk focused on this question.

The talk approached the question by presenting performance speed-up results for a selection of important HPC applications or computations based on algorithmic work done by Prof. Matsuoka and other researchers at the Institute. These studies were done in part because GPU vendors do a very poor job of describing exactly what GPUs are good for and what problems are perhaps not handled well by GPUs. By assessing the capabilities over a range of problem areas, it was hoped that conclusions could be drawn about the general utility of the GPU approach for HPC.

The first problem examined was a 3D protein docking analysis that performs an all-to-all analysis of 1K proteins against 1K proteins. Based on their estimates, a single protein-protein interaction analysis requires about 200 TeraOps, while the full 1000x1000 problem requires about 200 ExaOps. In order to maximally exploit GPUs for this problem, a new 3D FFT algorithm was developed that in the end delivered excellent performance and 4x better performance/watt than IBM's BG/L system, which is itself much more efficient than a more conventional cluster approach.

In addition, other algorithmic work delivered speedups of 45X over a single conventional CPU for CFD, which is typically limited by available memory bandwidth. Likewise, a computation involving phase separation in liquids delivered a speedup of 160X over a conventional processor.

Having compared single-node CPU performance against a single-node GPU approach and found that GPUs do appear able to deliver interesting performance and performance/watt for an array of useful problem types, provided new algorithms can be created to exploit the specific capabilities of these GPUs, the next question was whether these results could be extended to multi-GPU and cluster environments.

To test this, the team worked with the RIKEN Himeno CFD benchmark, which is considered the worst memory bandwidth-limited code one will ever see. It is actually worse than any real application one would ever encounter. If this could be parallelized and used with GPUs to advantage, then other less difficult codes should also benefit from the GPU approach.

To test this, the code was parallelized to run using multiple GPUs per node and with MPI as the communication mechanism between nodes. Results showed about a 50X performance improvement over a conventional CPU cluster on a small-sized problem.

A multi-GPU parallel sparse solver was also created which showed a 25X-35X improvement over conventional CPUs. This was accomplished using double precision implemented using mixed-precision techniques.
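The talk didn't detail which mixed-precision technique was used, but a common way to get double-precision results from fast single-precision hardware is iterative refinement: solve in single precision, compute the residual in double precision, and correct. A minimal sketch for a 2x2 system (illustrative numbers and names, not their solver):

```java
public class MixedPrecisionDemo {
    // Solve the 2x2 system A x = b by iterative refinement.
    // The inner correction solve runs in single precision (the "fast",
    // GPU-style path); the residual is accumulated in double precision
    // and drives the correction, recovering double-precision accuracy.
    static double[] solve(double[][] A, double[] b) {
        double[] x = {0.0, 0.0};
        for (int iter = 0; iter < 10; iter++) {
            // residual r = b - A x, computed in double precision
            double[] r = new double[2];
            for (int i = 0; i < 2; i++)
                r[i] = b[i] - (A[i][0] * x[0] + A[i][1] * x[1]);
            // correction solve A dx = r in single precision (Cramer's rule)
            float a = (float) A[0][0], bb = (float) A[0][1];
            float c = (float) A[1][0], d  = (float) A[1][1];
            float det = a * d - bb * c;
            float dx0 = ((float) r[0] * d - bb * (float) r[1]) / det;
            float dx1 = (a * (float) r[1] - (float) r[0] * c) / det;
            x[0] += dx0;
            x[1] += dx1;
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = {{4.0, 1.0}, {1.0, 3.0}};
        double[] b = {6.0, 7.0};      // exact solution: x = (1, 2)
        double[] x = solve(A, b);
        System.out.println(x[0] + " " + x[1]);
    }
}
```

Each pass through the loop shrinks the error by roughly the single-precision rounding level, so a handful of cheap low-precision solves yields a fully double-precision answer.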

While all of these results seemed promising, could such a GPU approach be deployed at scale in a very large cluster rather than just within a single node or across a modest-sized cluster? The Institute decided to find out by teaming with nVidia and Sun to enhance TSUBAME, adding Tesla GPUs to most of its nodes.

Installing the Tesla cards into the system went very smoothly and resulted in three classes of nodes: those with both Clearspeed and Tesla installed, those with only Tesla installed, and those Opteron nodes with neither kind of accelerator installed.

Could this funky array of heterogeneous nodes be harnessed to deliver an interesting LINPACK number? It turns out that it could, with much work, in spite of limited bandwidth in the upper links of the InfiniBand fabric and limited PCI-X/PCIe bandwidth within the nodes (I believe due to the number and types of slots available in the x4600 and the number of required devices in some of the TSUBAME compute nodes).

As a result of the LINPACK work (which was deadline-limited and could have used more time), the addition of GPU capability raised TSUBAME's LINPACK number from the 67.7 TFLOPs reported in June to a new high of 77.48 TFLOPs, an impressive increase.

With the Tesla cards installed, TSUBAME can now be viewed as a 900 TFLOPs (single precision) or 170 TFLOPs (double precision) machine, one with 10K conventional cores, or 300K SIMD cores if one counts the components embedded within each installed GPU.

The conclusion is pretty clearly that GPUs can be used to significant advantage on an interesting range of HPC problem types, though it is worth noting that it also appears that significantly clever, new algorithms may also need to be developed to map these problems efficiently onto GPU compute resources.


A Pan-European Approach to High Performance Computing

Dr. Thomas Lippert, Director of the Institute for Advanced Simulation and Head of the Jülich Supercomputing Centre, spoke at the HPC Consortium Meeting today about PRACE, an important effort to establish a pan-European HPC infrastructure beyond anything available today.

To quote from the PRACE website,

"The Partnership for Advanced Computing in Europe prepares the creation of a persistent pan-European HPC service, consisting several tier-0 centres providing European researchers with access to capability computers and forming the top level of the European HPC ecosystem. PRACE is a project funded in part by the EU’s 7th Framework Programme."

Dr. Lippert explained that the reasoning behind PRACE is to improve the strategic competitiveness of researchers and of industrial development and to strengthen and revitalize HPC across Europe. The vision for how this will be accomplished includes a multi-tiered hierarchy of centers that starts at the top with a small number of tier-0 European sites, coupled with tier-1 sites at the national level, and also with tier-2 sites at the regional level.

This is a huge program, now in the planning stages. It is expected to require funding at the level of 200-400M Euros per year over 4-5 years with a similar amount allocated for operating costs.

[Dr. Lippert also discussed the next generation supercomputer to be built soon at Jülich, but I have omitted that information here to avoid sharing any inappropriate information.]


What's Been Happening with Sun's Biggest Supercomputer?

Karl Schulz – Associate Director, HPC, Texas Advanced Computing Center gave an update on Ranger, including current usage statistics as well as some of the interesting technical issues they've confronted since bringing the system online last year.

Karl started with a system overview, which I will skip in favor of pointing to an earlier Ranger blog entry that describes the configuration in detail. Note, however, that Ranger is now running with 2.3 GHz Barcelona processors.

As of November 2008, Ranger has more than 1500 allocated users who represent more than 400 individual research projects. Over 300K jobs have been run so far on the system, consuming a total of 220 million CPU hours.

When TACC brought their 900 TeraByte Lustre filesystem online, they wondered how long it would take to fill it. It took six months. Just six months to generate 900 TeraBytes of data. Not surprising, I guess, when you hear that users generate between 5 and 20 TeraBytes of data per day on Ranger. Now that they've turned on their file purging policy, files reside on the filesystem for about 30 days before they are purged, which is quite good as supercomputing centers go.

Here are some of the problems Karl described.

OS jitter. For those not familiar, this phrase refers to a sometimes-significant performance degradation seen by very large MPI jobs that is caused by a lack of natural synchronization between participating nodes due to unrelated performance perturbations on individual nodes. Essentially some nodes fall slightly behind, which slows down MPI synchronization operations, which can in turn have a large effect on overall application performance. The worse the loss of synchronization, the longer certain MPI operations take to complete, and the larger the overall application performance impact.
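A back-of-the-envelope model (all numbers assumed) shows why small per-node noise becomes severe at scale: a barrier or allreduce finishes only when the slowest rank arrives, so the chance that at least one node is perturbed on a given step grows rapidly with node count.

```java
public class JitterModel {
    // Expected time of one synchronizing step across `ranks` nodes when each
    // node independently suffers an extra `delay` with probability p per step.
    // The collective completes only when the SLOWEST rank arrives, so
    // P(step is slowed) = 1 - (1 - p)^ranks.
    static double meanStepTime(int ranks, double p, double base, double delay) {
        double pSlow = 1.0 - Math.pow(1.0 - p, ranks);
        return base + pSlow * delay;
    }

    public static void main(String[] args) {
        double base = 1.0, delay = 1.0;
        double p = 0.001; // 0.1% chance of a daemon wakeup per node per step
        // One rank rarely pays the penalty; at 8K ranks nearly every
        // step does, roughly doubling the cost of each collective.
        System.out.printf("1 rank:   %.4f%n", meanStepTime(1, p, base, delay));
        System.out.printf("8K ranks: %.4f%n", meanStepTime(8192, p, base, delay));
    }
}
```

This is exactly the regime of the 100K-allreduce application described next: tiny, rare perturbations per node, amplified into a large slowdown by the synchronization itself.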

A user reported bad performance problems with a somewhat unusual application that performed about 100K MPI_AllReduce operations with a small amount of intervening computation between each AllReduce. When running on 8K cores, a very large performance difference was seen when running 15 processes per node versus 16 processes per node. The 16-process-per-node runs showed drastically lower performance.

As it turned out, the MPI implementation was not at fault. Instead, the issue was traced primarily to two causes. First, an IPMI daemon that was running on each node. And, second, another daemon that was being used to gather fine-grained health monitoring information to be fed into Sun Grid Engine. Once the IPMI daemon was disabled and some performance optimization work was done on the health daemon, the 15- and 16-process runs showed almost identical run times.

Karl also showed an example of how NUMA effects at scale can cause significant performance issues. In particular, it isn't sufficient to deal with processor affinity without also paying attention to memory affinity. Off-socket memory access can kill application performance in some cases, as in the CFD case shown during the talk.


Attention Supercomputing Weirdos (You Know Who You Are)

When Karl Schulz, Associate Director for HPC at TACC, spoke today at the HPC Consortium Meeting, he asked everyone to do their part--within legal limits--to help Keep Austin Weird. Having been a part of the HPC community for many years, I'm pretty sure we are collectively more than up to the task. The phrase "core competency" comes to mind. :-)



Using SPARC and Solaris for HPC: More of this, please!

Ken Edgecombe – Executive Director of HPCVL spoke today at the HPC Consortium Meeting in Austin about experiences with SPARC and HPC at his facility.

HPCVL has a massive amount of Sun gear, the newest of which includes a cluster of eight Sun SPARC Enterprise M9000 nodes, our largest SMP systems. Each node has 64 quad-core, dual-threaded SPARC64 processors and includes 2TB of RAM. With a total of 512 threads per node, the cluster has a peak performance of 20.5 TFLOPs. As you'd expect, these systems offer excellent performance for problems with large memory footprints or for those requiring extremely high bandwidths and low latencies between communicating processors.

In addition to their M9000 cluster, HPCVL has another new resource that consists of 78 Sun SPARC Enterprise T5140 (Maramba) nodes, each with two eight-core Niagara2+ processors (a.k.a. UltraSPARC T2plus). With eight threads per core, these systems make almost 10,000 hardware threads available to users at HPCVL.

Ken described some of the challenges of deploying the T5140 nodes in his HPC environment. The biggest issue is that researchers invariably first try running a serial job on these systems and then report that they are very disappointed with the resulting performance. No surprise, since these systems run at less than 1.5 GHz while competing processors run at over twice that rate. As Ken emphasized several times, the key educational issue is to re-orient users to think less about single-threaded performance and more about "getting more work done." In other words, throughput computing. For jobs that can scale to take advantage of more threads, excellent overall performance can be achieved by consuming more (slower) threads to complete the job in a competitive time. This works if one can either extract more parallelism from a single application or run multiple instances of applications to make efficient use of the threads within these CMT systems. With 128 threads per node, there is a lot of parallelism available for getting work done.
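The throughput argument can be made concrete with a toy model (all numbers assumed, perfect scaling idealized): for a batch of independent jobs, aggregate thread count matters more than per-thread speed.

```java
public class ThroughputDemo {
    // Time to finish `jobs` independent single-thread jobs on a machine with
    // `threads` hardware threads, each running at relative speed `speed`.
    // Jobs run in "waves" of up to `threads` at a time (idealized scaling).
    static double batchTime(int jobs, int threads, double speed) {
        int waves = (jobs + threads - 1) / threads; // ceiling division
        return waves / speed;                       // each wave takes 1/speed
    }

    public static void main(String[] args) {
        int jobs = 1024;
        // "Fast" box: 8 cores at roughly twice the clock (relative speed 2.0).
        // CMT box: 128 slower threads (relative speed 1.0 each).
        System.out.println(batchTime(jobs, 8, 2.0));   // 64.0
        System.out.println(batchTime(jobs, 128, 1.0)); // 8.0 -- 8x faster overall
    }
}
```

Per-job latency is worse on the slow threads, but the batch finishes much sooner, which is the throughput-computing point Ken was making.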

As he closed, Ken reminded attendees of the 2009 High Performance Computing Symposium, which will be held June 14-17 in Kingston, Ontario, at HPCVL.

Saturday Nov 15, 2008

Gregg TeHennepe: Musician, Blogger, Senior Manager and Research Liaison -- Customer!

When the band at this evening's HPC Consortium dinner event invited guests to join them on stage to do some singing, I didn't think anything of it...until some time later when I heard a new voice and turned to see someone up there who sounded good and looked like he was having a good time. He was wearing a conference badge, but I couldn't see whether he was a customer or Sun employee.

Since I was taking photos, it was easy to snap a shot and then zoom in on his badge to see who it was. Imagine my surprise as the name came into focus and I saw that it was Gregg TeHennepe, one of our customers from the Jackson Laboratory, where he is a senior manager and research liaison. I was surprised because I hadn't realized it was Gregg, despite having eaten breakfast with him this morning and talked with him several times during the day about his plan to blog the HPC Consortium meeting. As far as I know, this marks the first time a customer has blogged the event.

My surprise continued, however. When I googled Gregg just now to remind myself of the name of his blog, I found he is a member of Blue Northern, a five-piece band from Maine that plays traditional, original, and contemporary acoustic music. So, yeah. I guess he does sound good. :-)

Gregg's blog is Mental Burdocks. I'll save you the trip to the dictionary and tell you that a burdock is one of those plants with hook-bearing flowers that get stuck on animal coats (and other kinds of coats) for seed dispersal.


Spur: Terascale Visualization at TACC

Kelly Gaither, Associate Director at TACC – The University of Texas at Austin, talked today about Spur, a new scalable visualization system that is directly connected to Ranger, the 500 TFLOP Sun Constellation System at the Texas Advanced Computing Center (TACC) in Austin.

Spur, a collaboration between TACC and Sun, bolts the visualization system directly into the InfiniBand fat tree used to create the 65K-core Ranger cluster. This allows compute and visualization to be used effectively together, essential for truly interactive data exploration.

Spur is actually a cluster of eight Sun compute nodes, each with four nVidia QuadroPlex GPUs installed. The overall system includes 32 GPUs, 128 cores, close to 1 TB RAM and can support up to 128 simultaneous Shared Visualization clients. Sun Grid Engine is used to schedule jobs onto Spur nodes.

Spur went into production in October and within a week was being used about 120 hours per week for interactive HPC visualization. Users who access Spur via the TeraGrid find the resource invaluable because the alternative, computing on Ranger and then transferring results back to their home site for visualization, just isn't feasible given the sheer volume of data involved. One user estimated it would take him a week to transfer the data to his local facility, whereas with Spur he is able to compute at TACC, render the visualization at TACC using Spur, and then use Sun Shared Visualization software to display the results at his local site.


A Customer View of the Sun Blade 6000 System

Prof. James Leylek – Executive Director, CU-CCMS at Clemson spoke today at the HPC Consortium Meeting in Austin about his experiences with the installation and acceptance of their new Sun Blade 6000 system. CU-CCMS installed 31 Sun Blade 6000 chassis with 10 blades per chassis, each blade with two Intel quad-core CPUs, for a total of 3440 cores, connected with DDR InfiniBand. The entire cluster has 14 TB RAM and delivers about 34.4 TFLOPs peak performance. After deciding on the system, the next major task was to specify an acceptance test for the cluster.

Initially, CU-CCMS decided that an uninterrupted 72-hour LINPACK run would be an appropriate full-system acceptance test. External advisors, however, suggested that such a long run on a system of this size would be infeasible. As it turns out, the full-scale LINPACK ran to 48 hours without any problems. And then to 72 hours. And then to 130 hours.

The entire project was completed in two weeks and two days with two local resources assigned to the acceptance process. End result? A very happy HPC customer. We like that.

A Soft Landing in Austin, Texas


I arrived last night in Austin for the Sun HPC Consortium meeting this weekend and for Supercomputing '08 next week. I joined several colleagues for a casual dinner at a local home hosted by Deirdré and her daughter Ross and attended by several of Ross' friends. It was a fun and relaxing way to ease into this trip, which year to year proves to be pretty exhausting since the Consortium runs all weekend and is followed immediately by Supercomputing, which we can count on to deliver a week of high-energy, non-stop sensory overload.

Thanks to Deirdré for the invitation and to Ross, Mo, Trishna, April, Griffin, and Terry for the entertaining evening and the extremely wide-ranging conversation (wow!) And a special thanks to Ross for hosting and for introducing me to a Sicilian pesto that's to die for! YUM. In return, I hope they enjoyed my "opening the apple" demo. :-)

Our internal training session for Sun's HPC field experts (HPC ACES) will finish in about an hour, at which point we will break for lunch and then return for the start of the HPC Consortium meeting.

Let the games begin!


Thursday Nov 13, 2008

Big News for HPC Developers: More Free Stuff

'Tis the Season. Supercomputing season, that is. Every November the HPC community--users, researchers, and vendors--attends the world's biggest conference on HPC: Supercomputing. This year SC08 is being held in Austin, Texas, to which I'll be flying in a few short hours.

As part of the seasonal rituals, vendors announce new products, showcase new technologies, and generally strut their stuff at the show, and in some cases even before it. Sun is no exception, as you will see if you visit our booth at the show and take note of two announcements we made today that should be a Big Deal to HPC developers. The first concerns MPI and the second our Sun Studio developer tools.

The first announcement extends Sun's support of Open MPI to Linux with the release of ClusterTools 8.1. This is huge news for anyone looking for a pre-built and extensively tested version of Open MPI for RHEL 4 or 5, SLES 9 or 10, OpenSolaris, or Solaris 10. Support contracts are available for a fee if you need one, but you can download the CT 8.1 bits here for free and use them to your heart's content, no strings attached.

Here are some of the major features supported in ClusterTools 8.1:

  • Support for Linux (RHEL 4&5, SLES 9&10), Solaris 10, OpenSolaris
  • Support for Sun Studio compilers on Solaris and Linux, plus the GNU/gcc toolchain on Linux
  • MPI profiling support with Sun Studio Analyzer (see SSX 11.2008), plus support for VampirTrace and MPI PERUSE
  • InfiniBand multi-rail support
  • Mellanox ConnectX Infiniband support
  • DTrace provider support on Solaris
  • Enhanced performance and scalability, including processor affinity support
  • Support for InfiniBand, GbE, 10GbE, and Myrinet interconnects
  • Plug-ins for Sun Grid Engine (SGE) and Portable Batch System (PBS)
  • Full MPI-2 standard compliance, including MPI I/O and one-sided communication

The second event was the release of Sun Studio Express 11/08, which, among other enhancements, adds complete support for the new OpenMP 3.0 specification, including tasking. If you are questing for ways to extract parallelism from your code to take advantage of multicore processors, you should be looking seriously at OpenMP. And you should do it with the Sun Studio suite, our free compilers and tools, which really kick butt on OpenMP performance. You can download everything--the compilers, the debugger, the performance analyzer (including new MPI performance analysis support), and other tools--for free from here. Solaris 10, OpenSolaris, and Linux (RHEL 5/SuSE 10/Ubuntu 8.04/CentOS 5.1) are all supported. That includes an extremely high-quality (and free) Fortran compiler, among other goodies. (Is it sad that us HPC types still get a little giddy about Fortran? What can I say...)
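OpenMP tasking itself is a C/C++/Fortran feature, but the underlying pattern, recursively spawning tasks and waiting on their results, can be sketched with Java's fork/join framework (an analogy only, not Sun Studio or OpenMP code; names here are illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class TaskingDemo extends RecursiveTask<Long> {
    // Sum data[lo, hi) by recursively spawning subtasks -- the same
    // divide-and-conquer shape that OpenMP 3.0 task directives express.
    static final int CUTOFF = 1000;
    final long[] data;
    final int lo, hi;

    TaskingDemo(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    protected Long compute() {
        if (hi - lo <= CUTOFF) {            // small enough: run serially
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        TaskingDemo left = new TaskingDemo(data, lo, mid);
        left.fork();                        // spawn a task (cf. "#pragma omp task")
        long right = new TaskingDemo(data, mid, hi).compute();
        return right + left.join();         // wait for it (cf. "taskwait")
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = new ForkJoinPool().invoke(new TaskingDemo(data, 0, data.length));
        System.out.println(sum); // prints 4999950000
    }
}
```

The appeal of tasking, in either framework, is that irregular or recursive parallelism is expressed naturally, with the runtime handling scheduling across the available cores.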

The full list of capabilities in this Express release is too long to include here, so check out this feature list or visit the wiki.


Monday Nov 03, 2008

Sun HPC Consortium: Austin, Texas

Sun always holds an HPC Consortium event for customers at the Supercomputing and International Supercomputing conferences, and this year is no exception. We had an excellent program in Dresden at ISC08 and have a strong agenda lined up for our meeting in Austin at SC08. The Consortium will begin at 1pm on Saturday, November 15th at the Hilton Austin Airport and end at 1pm on Monday, the 17th.

As usual, the Consortium meeting will be packed with talks by Sun technical experts, by customers sharing their Sun HPC experiences, and by key HPC partners. The full agenda can be seen here. This event is consistently rated very highly by customer attendees, and it is not too late to register.

If you are a Sun HPC customer interested in meeting Sun technical experts and fellow customers in a collaborative, informative, and fun atmosphere, please join us in Austin for the upcoming Sun HPC Consortium meeting.

Monday Oct 13, 2008

The Death of Clock Speed


Sun just introduced yet another chip multi-threading (CMT) SPARC system, the Sun SPARC Enterprise T5440. To me, that's a Batoka (sorry, branding police) because I can't keep the model names straight. In any case, this time we've put 256 hardware threads and up to 1/2 TeraByte of memory into a four-RU server. That works out to a thread-count of about 36 threads per vertical inch, which doesn't compete with fine Egyptian cotton, but it can beat the crap out of servers with 3X faster processor clocks. If you don't understand why, then you should take the time to grok this fact. Clock speed is dead as a good metric for assessing the value of a system. For years it has been a shorthand proxy for performance, but with multi-core and multi-threaded processors becoming ubiquitous that has all changed.

Old-brain logic would say that for a particular OpenMP problem size, a faster clock will beat a slower one. But it isn't about clock anymore--it is about parallelism, latency hiding, and efficient workload processing. Which is why the T5440 can perform anywhere from 1.5X to almost 2X better on a popular OpenMP benchmark than systems with almost twice its clock rate. Those 256 threads and 32 separate floating point units are a huge advantage in parallel benchmarks and in real-world environments in which heavy workloads are the norm, especially those that can benefit from the latency hiding offered by a CMT processor like the UltraSPARC T2 Plus. Check out BM Seer over the next few days for more specific, published benchmark results for the T5440.

Yes, sure. If you have a single-threaded application and don't need to run many instances, then a higher clock rate will help you significantly. But if that is your situation, you have a big problem, since clock rates will no longer increase the way they have in the past. You can surely see this now in commodity CPUs, with clock rates rising only slowly and multicore becoming the norm; our CMT systems show the logical progression of this trend. It's all going parallel, baby, and it is past time to start thinking about how you are going to deal with that new reality. When you do, you'll see the value of this highly threaded approach for handling real-world workloads. See my earlier post on CMT and HPC for more information, including a pointer to an interesting customer analysis of the value of CMT for HPC workloads.

A pile of engineers have been blogging about both the hardware and software aspects of the new system. Check out Allan Packer's stargate blog entry for jump coordinates into the Batoka T5440 blogosphere.


About

Josh Simons
