Sun HPC Consortium, Afternoon II

This is the final set of customer talks from the Sun HPC Consortium meeting in Seattle. About 150 Sun customers and numerous Sun employees have spent long days this weekend sharing information about customer experiences and needs on the one hand and information about Sun products and product futures on the other. All of this in advance of the Supercomputing '05 conference proper, which kicks off officially tomorrow.

Phil Williams, University of Nottingham, UK

Dr. Phil Williams, an EPSRC Advanced Research Fellow at the University of Nottingham, gave a talk titled Science and Performance at Nottingham. His talk was paired with a presentation by Michael Rudgyard of Streamline Computing. I've combined my notes from these talks here.

Phil presented some background information on Nottingham, including the fact that they have campuses in Malaysia and China, in addition to their multiple locations in the UK.

In their recent, large cluster procurement, Nottingham was looking for a partner who could deliver the most equipment for their budget. In addition to delivering a large cluster, the partner also needed to be able to deliver smaller, scaled-down "clones" of this system for use at multiple locations within the University.

Phil mentioned how surprised they were when some of the vendors responded to the procurement proposal by showing up at their required technology demonstration sessions with sales people who were completely unable to answer technical questions from Nottingham personnel. [Amazing that anyone serious about being in HPC would not know that these customers are technically very savvy and should be engaged by people who can speak to them about their issues at a technical level.]

Seven vendors responded to their procurement request and six were invited to bid. The seventh, whose initial proposal exceeded the stated Nottingham budget by a factor of 2.5, was not invited to the next level. Three vendors were selected to submit best and final offers, and Sun was selected in September.

The Jupiter system, which was delivered in conjunction with Streamline Computing, consists of 1024 dual-processor Sun Fire V20z (Opteron) systems interconnected with Gigabit Ethernet. The full system, which comprised 19 full racks, was built in Sun's Linlithgow facility, as were the 16 smaller clone systems.

This system is now #109 on the TOP500 list at 3.14 TFLOPS with a 72% efficiency rating on LINPACK. It is currently Sun's #1 system on the list and is the most efficient Gigabit Ethernet installation in the TOP500.
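For readers unfamiliar with the metric, TOP500 efficiency is simply measured LINPACK performance (Rmax) divided by the machine's theoretical peak (Rpeak). A minimal sketch of the arithmetic in Python, using Jupiter's reported Rmax and an Rpeak back-calculated from the 72% figure rather than taken from the talk:

```python
# TOP500 efficiency = measured LINPACK performance (Rmax) / theoretical peak (Rpeak).
def linpack_efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    return rmax_tflops / rpeak_tflops

rmax = 3.14    # Jupiter's reported LINPACK result, in TFLOPS
rpeak = 4.36   # assumed theoretical peak (back-calculated, not a figure from the talk)

print(f"{linpack_efficiency(rmax, rpeak):.0%}")  # roughly 72%
```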

Greg S. Johnson, Texas Advanced Computing Center, USA

Our final customer speaker of the event was Greg Johnson from the Texas Advanced Computing Center which is part of the University of Texas at Austin. His talk was titled Visualization at TACC.

TACC's mission and passion is distributed visualization. In addition, they are distributed visualization partners in the TeraGrid, which provides scalable high-end visualization resources to users across the US.

Greg outlined the goals of distributed visualization. The first is providing access to high-performance visualization power well beyond what is available at a typical researcher's desktop. Second is location transparency of resources. And third is an improved end-user experience.

The challenges of distributed visualization were as follows. First, latency, both over the wide area network and in GPU readback. Second, delivering quality of service at the user interface. And, third, WAN bandwidth (1280 pixels by 1024 pixels by 12 bytes/pixel by 24 frames/sec is about 360 MB/s uncompressed; compression technology can do much better).
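To make that bandwidth arithmetic concrete, here is a quick back-of-the-envelope check in Python, using exactly the frame size, pixel depth, and frame rate quoted above:

```python
# Uncompressed data rate for streaming a 1280x1024 visualization
# at 24 frames/sec with 12 bytes per pixel, as quoted in the talk.
width, height = 1280, 1024   # pixels per frame
bytes_per_pixel = 12
frames_per_sec = 24

bytes_per_sec = width * height * bytes_per_pixel * frames_per_sec
print(f"{bytes_per_sec / 2**20:.0f} MB/s uncompressed")  # -> 360 MB/s
```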

Greg then described the overall architecture of the Sun Terascale Visualization System, which is the cornerstone of TACC's distributed visualization strategy. It was developed in a collaborative effort between TACC and Sun.

The major components of the system are a Sun Fire 25K SMP with 64 UltraSPARC IV processors and 500 GB of memory. In addition, associated systems use a large number of NVIDIA Quadro FX3000G graphics cards, a Myrinet interconnect, and 3DLabs U22 cards.

The main takeaways from Greg were that modern networks permit the wide area routing of geometry and pixels at interactive rates, and that mainstream access models for visualization resources are shifting toward those of HPC.

