Monday Oct 19, 2009

Workshop on High Performance Virtualization


As discussed in several earlier posts (here and here), virtualization technology is destined to play an important role in HPC. If you have an opinion about that or are interested in learning more, consider attending or submitting a technical paper or position paper to the Workshop on High Performance Virtualization: Architecture, Systems, Software, and Analysis, which is being held in conjunction with HPCA-16, the 16th IEEE International Symposium on High-Performance Computer Architecture, and with PPoPP 2010, the 15th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. Apparently, Bangalore is the place to be in early January 2010!

Time is short: Workshop submissions are due November 10, 2009.


Thursday Aug 27, 2009

Parallel Computing: Berkeley's Summer Bootcamp

Two weeks ago the Parallel Computing Laboratory at the University of California, Berkeley ran an excellent three-day summer bootcamp on parallel computing. I was one of about 200 people who attended remotely, while another large pile of people elected to attend in person on the UCB campus. This was an excellent opportunity to listen to some very well known and talented people in the HPC community. Videos and presentation materials are available on the web, and I would recommend them to anyone interested in parallel computing or HPC. See below for details.

The bootcamp, which was called the 2009 Par Lab Boot Camp - Short Course on Parallel Programming, covered a wide array of useful topics, including introductions to many of the current and emerging HPC parallel computing models (pthreads, OpenMP, MPI, UPC, CUDA, OpenCL, etc.), hands-on labs for in-person attendees, and some nice discussions on parallelism and how to find it, with an emphasis on the motifs (patterns) of parallelism identified in The Landscape of Parallel Computing Research: A View From Berkeley. There was also a presentation on performance analysis tools and several application-level talks. It was an excellent event.
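
To give a small taste of the shared-memory material, here is a minimal OpenMP example in C of the kind discussed in the pthreads/OpenMP/TBB session. This is my own sketch, not code from the course slides:

```c
/* A minimal OpenMP example (my own sketch, not course material):
 * the parallel-for directive splits the loop across threads and the
 * reduction clause safely combines each thread's partial sum. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1.0);

    printf("partial harmonic sum = %f (max threads: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}
```

Built with an OpenMP-aware compiler (for example, cc -xopenmp with Sun Studio or gcc -fopenmp), the same source scales to however many threads the runtime provides.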

The bootcamp agenda is shown below. Session videos and PDF decks are available here.

Talk Title | Speaker(s)
Introduction and Welcome | Dave Patterson (UCB)
Introduction to Parallel Architectures | John Kubiatowicz (UCB)
Shared Memory Programming with Pthreads, OpenMP and TBB | Katherine Yelick (UCB & LBNL), Tim Mattson (Intel), Michael Wrinn (Intel)
Sources of parallelism and locality in simulation | James Demmel (UCB)
Architecting Parallel Software Using Design Patterns | Kurt Keutzer (UCB)
Data-Parallel Programming on Manycore Graphics Processors | Bryan Catanzaro (UCB)
OpenCL | Tim Mattson (Intel)
Computational Patterns of Parallel Programming | James Demmel (UCB)
Building Parallel Applications | Ras Bodik (UCB), Nelson Morgan (UCB)
Distributed Memory Programming in MPI and UPC | Katherine Yelick (UCB & LBNL)
Performance Analysis Tools | Karl Fuerlinger (UCB)
Cloud Computing | Matei Zaharia (UCB)

Monday Jul 13, 2009

Performance Facts, Performance Wisdom

I was genuinely excited to see that members of Sun's Strategic Applications Engineering team have started a group blog about performance called BestPerf. These folks are the real deal -- they are responsible for generating all of the official benchmark results published by Sun -- and they collectively have a deep background in all things related to performance. I like the blog because while they do cover specific benchmark results in detail, they also share best practices and include broader discussions about achieving high performance. There is a lot of useful material for anyone seeking a better understanding of performance.

Here are some recent entries that caught my eye.

Find out how a Sun Constellation system running SLES 10 beat IBM BlueGene/L on a NAMD Molecular Dynamics benchmark here.

See how the Solaris DTrace facility can be used to perform detailed IO analyses here.

Detailed Fluent cluster benchmark results using the Sun Fire x2270 and SLES 10? Go here.

How to use Solaris containers, processor sets, and scheduling classes to improve application performance? Go here.


Wednesday Apr 15, 2009

Tickless Clock for OpenSolaris

I've been talking a lot to people about the convergence we see happening between Enterprise and HPC IT requirements and how developments in each area can bring real benefits to the other. I should probably do an entire blog entry on specific aspects of this convergence, but for now I'd like to talk about the Tickless Clock OpenSolaris project.

Tickless kernel architectures will be familiar to HPC experts as one method for reducing application jitter on large clusters. For those not familiar with the issue, "jitter" refers to variability in the running time of application code due to underlying kernel activity, daemons, and other stray workloads. Since MPI programs typically run in alternating compute and communication phases and develop a natural synchronization as they do so, applications can be slowed down significantly when some nodes arrive late at these synchronization points. The larger the MPI job, the more likely it is that this type of noise will cause a problem. Measurements have shown surprisingly large slowdowns associated with jitter.
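
To make the compute/communicate pattern concrete, here is a minimal MPI sketch in C. This is my own illustration, not code from any real application:

```c
/*
 * Hypothetical sketch (mine, not from any real application): the
 * bulk-synchronous pattern that makes MPI codes sensitive to jitter.
 * Every rank computes, then all ranks meet at MPI_Allreduce; each step
 * takes as long as the slowest rank, so a rank delayed by a daemon or
 * kernel interrupt on its node delays the entire job.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0, global = 0.0;
    for (int step = 0; step < 100; step++) {
        /* compute phase: noise on any one node shows up as a late
         * arrival at the collective below */
        for (int i = 0; i < 1000000; i++)
            local += 1e-9 * (i % 7);

        /* communication phase: an implicit synchronization point */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
    }
    if (rank == 0)
        printf("result: %g\n", global);
    MPI_Finalize();
    return 0;
}
```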

Jitter can be lessened by reducing the number of daemons running on a system, by turning off all non-essential kernel services, and so on. Even with these changes, however, there are other sources of jitter. One notable source is the clock interrupt used in virtually all current operating systems, which fires 100 times per second to perform periodic housekeeping chores required by the OS and is a known contributor to jitter. It is for this reason that IBM implemented a tickless kernel on their Blue Gene systems.
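
One common way to get a feel for this noise from user space is a fixed-work microbenchmark that times many identical work quanta and looks at the spread. The sketch below is my own illustration of the idea, not a tool from any of the projects mentioned here:

```c
/*
 * Hypothetical sketch (my illustration, not a tool from any project
 * mentioned here): a fixed-work jitter microbenchmark. It times many
 * identical work quanta; on a perfectly quiet system every quantum
 * would take the same time, so the spread between the fastest and
 * slowest quanta is a rough measure of interference from clock ticks,
 * daemons, and other interrupts.
 */
#include <stdio.h>
#include <time.h>

#define QUANTA 10000
#define WORK   200000

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    volatile double x = 1.0;            /* volatile: keep the work loop honest */
    double min = 1e9, max = 0.0;

    for (int q = 0; q < QUANTA; q++) {
        double t0 = now_sec();
        for (int i = 0; i < WORK; i++)  /* a fixed amount of compute */
            x = x * 1.0000001 + 1e-9;
        double dt = now_sec() - t0;
        if (dt < min) min = dt;
        if (dt > max) max = dt;
    }
    printf("quantum time: min %.6f s, max %.6f s, spread %.1f%%\n",
           min, max, 100.0 * (max - min) / min);
    return 0;
}
```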

Sun is starting a Tickless Clock project in OpenSolaris to completely remove the clock interrupt and switch to an event-based architecture for OpenSolaris. While I expect this will be very useful for HPC users of OpenSolaris, HPC is not the primary motivator of this project.

As you'll hear in the video interview with Eric Saxe, Senior Staff Engineer in Sun's Kernel Engineering group, the primary reasons he is looking at Tickless Clock are power management and virtualization. For power management, it is important that when the system is idle, it really IS idle, not waking up 100 times per second to do nothing, since this wastes power and prevents the system from entering deeper power-saving states. For virtualization, since multiple OS instances may share the same physical server resources, it is important that guest OSes that are idle really do stay idle. Again, waking up 100 times per second to do nothing steals cycles from active guest OS instances, reducing performance in a virtualized environment.

While I would argue that both power management and virtualization will become increasingly important to HPC users (more of that convergence thing), it is interesting to me to see that these traditionally enterprise issues are stimulating new projects that will benefit both enterprise and HPC customers in the future.

Interested in getting involved with implementing a tickless architecture for OpenSolaris? The project page is here.


Sunday Nov 16, 2008

If You Doubt the Utility of GPUs for HPC, Read this

Professor Satoshi Matsuoka from the Tokyo Institute of Technology gave a really excellent talk this afternoon about using GPUs for HPC at the HPC Consortium Meeting here in Austin.

As you may know, the Tokyo Institute of Technology is the home of TSUBAME, the largest supercomputer in Asia. It is an InfiniBand cluster of 648 Sun Fire x4600 compute nodes, many with Clearspeed accelerator cards installed.

The desire is to continue to scale TSUBAME into a petascale computing resource over time. However, power is a huge problem at the site. The machine is responsible for roughly 10% of the overall power consumption of the Institute and therefore they cannot expect their power budget to grow over time. The primary question, then, is how to add significant compute capacity to the machine while working within a constant power budget.

It was clear from their analysis that conventional CPUs would not allow them to reach their performance goals while also satisfying the no-growth power constraint. GPUs--graphics processing units like those made by nVidia--looked appealing in that they claim extremely high floating-point capabilities and deliver this performance at a much better performance/watt ratio than conventional CPUs. The question, though, is whether GPUs can be used to significantly accelerate important classes of HPC computations or whether they are perhaps too specialized to be considered for inclusion in a general-purpose compute resource like TSUBAME. Professor Matsuoka's talk focused on this question.

The talk approached the question by presenting performance speed-up results for a selection of important HPC applications or computations based on algorithmic work done by Prof. Matsuoka and other researchers at the Institute. These studies were done in part because GPU vendors do a very poor job of describing exactly what GPUs are good for and what problems are perhaps not handled well by GPUs. By assessing the capabilities over a range of problem areas, it was hoped that conclusions could be drawn about the general utility of the GPU approach for HPC.

The first problem examined was a 3D protein docking analysis that performs an all-to-all comparison of 1K proteins against 1K proteins. Based on their estimates, a single protein-protein interaction analysis requires about 200 TeraOps, while the full 1000x1000 problem requires about 200 ExaOps. To maximally exploit GPUs for this problem, a new 3D FFT algorithm was developed that in the end delivered excellent performance and 4x better performance/watt than IBM's BG/L system, which is itself much more efficient than a more conventional cluster approach.
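
For scale, the total is simply the per-pair cost multiplied across all one million pairs:

$$10^3 \times 10^3 \ \text{pairs} \times 200 \times 10^{12} \ \text{ops/pair} = 2 \times 10^{20} \ \text{ops} = 200 \ \text{ExaOps}$$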

In addition, other algorithmic work delivered a speedup of 45X over a single conventional CPU for CFD, which is typically limited by available memory bandwidth. Likewise, a liquid phase-separation computation delivered a speedup of 160X over a conventional processor.

Having compared single-node CPU performance against a single-node GPU approach and found that GPUs do appear able to deliver interesting performance and performance/watt for an array of useful problem types (so long as new algorithms can be created to exploit the specific capabilities of these GPUs), the next question was whether these results could be extended to multi-GPU and cluster environments.

To test this, the team worked with the RIKEN Himeno CFD benchmark, which is considered one of the most severely memory-bandwidth-limited codes one will ever see; it is actually worse than any real application one is likely to encounter. If this code could be parallelized and run on GPUs to advantage, then other, less difficult codes should also benefit from the GPU approach.

The code was parallelized to use multiple GPUs per node, with MPI as the communication mechanism between nodes. Results showed roughly a 50X performance improvement over a conventional CPU cluster on a small-sized problem.
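
The decomposition follows the usual halo-exchange pattern for distributed stencil codes. The sketch below is my own, much-simplified 1D version of that pattern, not the actual 3D Himeno code: each MPI rank owns a slab plus ghost cells, swaps ghost cells with its neighbors every iteration, and then relaxes its interior; in the GPU version, the relaxation loop is what gets offloaded to the accelerators.

```c
/*
 * Hypothetical sketch (mine, not the TSUBAME code): the halo-exchange
 * pattern used when a stencil code such as Himeno is distributed with
 * MPI, shown here for a much-simplified 1D problem.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N 1024   /* local interior points per rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    double *u    = calloc(N + 2, sizeof *u);    /* [0] and [N+1] are ghosts */
    double *unew = calloc(N + 2, sizeof *unew);
    u[N / 2] = 1.0;                             /* arbitrary initial condition */

    for (int iter = 0; iter < 100; iter++) {
        /* exchange ghost cells with neighboring ranks */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left,  0,
                     &u[N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[N], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* relax the interior (this is the loop a GPU kernel would replace) */
        for (int i = 1; i <= N; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
        memcpy(&u[1], &unew[1], N * sizeof *u);
    }

    if (rank == 0)
        printf("u[N/2] after 100 iterations: %g\n", u[N / 2]);
    free(u);
    free(unew);
    MPI_Finalize();
    return 0;
}
```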

A multi-GPU parallel sparse solver was also created, which showed a 25X-35X improvement over conventional CPUs. This was accomplished by delivering double-precision results using mixed-precision techniques.
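
The mixed-precision idea is typically some form of iterative refinement: run the bulk of the arithmetic in fast single precision and recover double-precision accuracy by correcting with residuals computed in double precision. The sketch below is my own toy illustration of that general technique, not the solver described in the talk:

```c
/*
 * Toy sketch of mixed-precision iterative refinement (my illustration,
 * not the solver from the talk): the inner solve runs in single
 * precision, standing in for the fast single-precision path on a GPU,
 * while residuals and the accumulated solution are kept in double
 * precision, which recovers double-precision accuracy for reasonably
 * well-conditioned systems.
 */
#include <stdio.h>

#define N 3

/* stand-in for a fast single-precision solve of A*x = b
 * (a fixed number of relaxation sweeps on a diagonally dominant A) */
static void solve_single(float A[N][N], const float b[N], float x[N])
{
    for (int i = 0; i < N; i++) x[i] = 0.0f;
    for (int sweep = 0; sweep < 50; sweep++)
        for (int i = 0; i < N; i++) {
            float s = b[i];
            for (int j = 0; j < N; j++)
                if (j != i) s -= A[i][j] * x[j];
            x[i] = s / A[i][i];
        }
}

int main(void)
{
    const double A[N][N] = {{4, 1, 0}, {1, 4, 1}, {0, 1, 4}};
    const double b[N]    = {1, 2, 3};
    double x[N] = {0, 0, 0};

    float Af[N][N], rf[N], df[N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            Af[i][j] = (float)A[i][j];

    for (int it = 0; it < 5; it++) {
        /* residual r = b - A*x, computed in double precision */
        for (int i = 0; i < N; i++) {
            double r = b[i];
            for (int j = 0; j < N; j++)
                r -= A[i][j] * x[j];
            rf[i] = (float)r;
        }
        /* correction d solves A*d = r, computed in single precision */
        solve_single(Af, rf, df);
        for (int i = 0; i < N; i++)
            x[i] += (double)df[i];   /* accumulate the fix in double */
    }
    printf("x = %.12f %.12f %.12f\n", x[0], x[1], x[2]);
    return 0;
}
```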

While all of these results seemed promising, could such a GPU approach be deployed at scale in a very large cluster rather than just within a single node or across a modest-sized cluster? The Institute decided to find out by teaming with nVidia and Sun to enhance TSUBAME, adding Tesla GPUs to most of its nodes.

Installing the Tesla cards into the system went very smoothly and resulted in three classes of nodes: those with both Clearspeed and Tesla installed, those with only Tesla installed, and those Opteron nodes with neither kind of accelerator installed.

Could this funky array of heterogeneous nodes be harnessed to deliver an interesting LINPACK number? It turns out that it could, with much work, and in spite of limited bandwidth in the upper links of the InfiniBand fabric and limited PCI-X/PCIe bandwidth in the nodes (due, I believe, to the number and types of slots available in the x4600 and the number of required devices in some of the TSUBAME compute nodes).

As a result of the LINPACK work (which could have used more time; it was deadline-limited), the addition of GPU capability allowed TSUBAME's LINPACK number to be raised from the 67.7 TFLOPs reported in June to a new high of 77.48 TFLOPs, an impressive increase.

With the Tesla cards installed, TSUBAME can now be viewed as a 900 TFLOPs (single precision) or 170 TFLOPs (double precision) machine, one with 10K CPU cores, or 300K SIMD cores if one counts the processing elements embedded within each installed GPU.

The conclusion is pretty clearly that GPUs can be used to significant advantage on an interesting range of HPC problem types, though it is worth noting that clever new algorithms may also need to be developed to map these problems efficiently onto GPU compute resources.


Monday Oct 13, 2008

The Death of Clock Speed


Sun just introduced yet another chip multi-threading (CMT) SPARC system, the Sun SPARC Enterprise T5440. To me, that's a Batoka (sorry, branding police) because I can't keep the model names straight. In any case, this time we've put 256 hardware threads and up to 1/2 TeraByte of memory into a four-RU server. That works out to about 36 threads per vertical inch, which doesn't compete with fine Egyptian cotton, but it can beat the crap out of servers with 3X faster processor clocks. If you don't understand why, then you should take the time to grok this fact: clock speed is dead as a good metric for assessing the value of a system. For years it has been a shorthand proxy for performance, but with multi-core and multi-threaded processors becoming ubiquitous, that has all changed.

Old-brain logic would say that for a particular OpenMP problem size, a faster clock will beat a slower one. But it isn't about clock anymore--it is about parallelism and latency hiding and efficient workload processing. Which is why the T5440 can perform anywhere from 1.5X to almost 2X better on a popular OpenMP benchmark than systems with almost twice the clock rate of the T5440. Those 256 threads and 32 separate floating-point units are a huge advantage in parallel benchmarks and in real-world environments in which heavy workloads are the norm, especially those that can benefit from the latency hiding offered by a CMT processor like the UltraSPARC T2 Plus. Check out BM Seer over the next few days for more specific, published benchmark results for the T5440.

Yes, sure. If you have a single-threaded application and don't need to run a lot of instances, then a higher clock rate will help you significantly. But if that is your situation, then you have a big problem, since you will no longer see clock rates increase as they have in the past. You are surely starting to see this now, with commodity CPUs increasing clock rates only slowly and embracing multicore instead, but you can look to our CMT systems to understand the logical progression of this trend. It's all going parallel, baby, and it is past time to start thinking about how you are going to deal with that new reality. When you do, you'll see the value of this highly threaded approach for handling real-world workloads. See my earlier post on CMT and HPC for more information, including a pointer to an interesting customer analysis of the value of CMT for HPC workloads.

A pile of engineers have been blogging about both the hardware and software aspects of the new system. Check out Allan Packer's stargate blog entry for jump coordinates into the Batoka T5440 blogosphere.


Thursday Jun 12, 2008

TACC Ranger Upgrade: Another 80 TeraFLOPs, Please


As was announced yesterday, the 508 TFLOP Ranger supercomputer at TACC will be upgraded this weekend. All 15,744 2.0 GHz quad-core AMD processors will be replaced with 2.3 GHz processors, effectively adding another 80 TFLOPs to the system's peak performance rating. Ranger is currently the largest Sun Constellation System and the largest open scientific computing system in the world.

