Thursday Sep 04, 2008

Sun HPC ClusterTools 8.0

As I mentioned yesterday, the next release of ClusterTools is out today... below is the official announcement.

Sun HPC ClusterTools 8.0 is based on Open MPI 1.3 and it is the first release of ClusterTools to include support for Linux platforms as well as Solaris. It also provides support for MPI application profiling with Sun Studio Performance Analyzer (available in Studio Express 7/2008). MPI is the dominant programming paradigm for cluster applications in High Performance Computing (HPC), and HPC ClusterTools provides a production-quality MPI -- supported by Sun -- which is crucial to today's compute-intensive HPC workloads.

Key features of HPC ClusterTools 8.0 include:

  • Support for Linux (RHEL 4 & 5, SLES 9 & 10), Solaris 10, and OpenSolaris
  • Support for Sun Studio compilers and tools and the GNU/gcc toolchain on both Solaris and Linux
  • MPI profiling support with Sun Studio Analyzer, plus support for VampirTrace and MPI PERUSE
  • InfiniBand multi-rail support
  • Mellanox ConnectX InfiniBand support
  • DTrace provider support on Solaris
  • Enhanced performance and scalability, including processor affinity support
  • Support for InfiniBand, GbE, 10GbE, and Myrinet interconnects
  • Plug-ins for Sun Grid Engine (SGE) and Portable Batch System (PBS)
  • Full MPI-2 standard compliance, including MPI I/O and one-sided communication (a small sketch of the one-sided model follows below)
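
Since one-sided communication may be new to folks used to plain send and receive, here is a little sketch of my own (my illustration, not from the release notes) in which rank 0 writes a value directly into a window of memory exposed by rank 1, with no matching receive on the other side:

    // One-sided (RMA) sketch: rank 0 writes into rank 1's exposed window.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int buf = -1;                  // window memory on every rank
        MPI_Win win;
        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);         // open the access epoch
        if (rank == 0) {
            int val = 42;
            // Deposit val into rank 1's window; rank 1 never posts a receive.
            MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);         // close the epoch; the put is now visible

        if (rank == 1)
            std::printf("rank 1 saw %d\n", buf);   // prints 42

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

Building and launching would look something like mpicxx onesided.cpp -o onesided followed by mpirun -np 2 ./onesided with the CT wrappers on your PATH.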

For what it's worth.

Wednesday Apr 16, 2008

Michigan -> Amsterdam -> Edinburgh -> Cluster Hacking -> return(home);

Last week I was on some travel for work that took me from my home around the world to Edinburgh, UK to work on a large cluster of Sun Fire V20z nodes. This is an internal cluster that will further enhance what we can test and try out with our products. More on the cluster to come... now for the adventure.

My manager called and asked if I wanted to fly over to Scotland to set up Nessie... that's right, we have a corny nickname for the cluster that matches the region of the world in which it is located. Of course I said that I would travel the world to set up a massive cluster so that we could make Sun HPC ClusterTools a more rock-solid product. I have a passion for playing with the hardware as well as the software and had not been able to do that for some time. After the right folks all signed off, I was given the green light to pack my bags and book the travel. So off I went.

I left the United States late one evening and landed in Amsterdam, Netherlands the next day, on my birthday. On the flight over I watched The Simpsons Movie and American Gangster. I had an 11-hour layover, so I headed into the city to walk around a bit. My plan was to first find a nice little cafe for a coffee; notice I said cafe instead of coffeehouse, as there is a difference. I chatted a bit with the barista and got some good directions on where to head to just browse around the city. I was pointed down a small side street to reach the canals and told which ones might be better to see. Walking around with a day pack, earbuds in, and a camera out... I am sure I stood out like a tourist. After some wandering around, along with the lack of sleep, I think I was basically lost; well, that would imply I knew where I was to begin with. I was just about to ask for directions when I saw the train station. I stopped by a small pub close to the train station for a birthday drink. After that I headed back to the airport for a nap. Late that day I arrived in Edinburgh and went off to the hotel for a couple hours of sleep before heading to the Sun facility to get cranking on the cluster setup.

After arriving a little late for my first day of cluster hacking (man, I needed even more sleep), I was hooked up with some of the local folks on site, given the quick ESD overview, and pointed to the lab. The cluster will consist of 1024 V20z nodes, each with two AMD processors. The nodes are broken down 32 to a rack, so there will be 32 racks in all. By the time I left we had the SPs up on 576 nodes and Solaris installed, or ready to be installed, on 480 of them... we ran out of hardware to get the other three racks up with Solaris. The remaining 14 racks are waiting for a power issue in the lab to be resolved before they can come online. I don't take all the credit: the local folks already had the racks in place and powered, and I was joined by a colleague from the London area.

While spending the week in Edinburgh I did venture out for dinner each night and a little sightseeing. Note that almost all of the shops close at 18:00, so plan accordingly. I had some great food while out in the city, including some local favorites... fish and chips, bangers and mash. I did not partake of a traditional Scottish breakfast... I stuck to my americano and muffin from one of the cafes right by the hotel. Edinburgh Castle was amazing, as were all of the sights along the Royal Mile. A full collection of photos from the trip can be found over here.

The travel home was a little eventful; I was delayed leaving Edinburgh due to a security issue with baggage on my flight. By the time it was resolved, all the bags had been removed from the plane and laid out on the tarmac, each person had to claim their bags and re-board, the extra bag was identified and removed, and the flight finally took off... two hours late, with my two connections in Amsterdam down the drain. I stand behind the pilot's decision to ensure that the correct bags were accounted for and that unknown bags were not onboard, in this day and age of travel. All delays aside, KLM is my new favorite airline. Their level of customer service is second to none. I have not flown Southwest or JetBlue, but the KLM folks know how to do it right for their passengers in flight; the counter crew could be a little more friendly. On the flights home I watched The Bourne Ultimatum and The Kite Runner. I finally made it home 17 hours later than planned and a couple hours late for my little girl's birthday party. No worries, I was there for the presents and the cake but missed the pinata.

All in all a great trip and a wonderful experience... I would head back anytime to work on the cluster and to help bring the other half up when the power issue is worked out.

Thursday Mar 27, 2008

Sun HPC CT 8 (EA1)

The ClusterTools 8 (CT8) Early Access 1 (EA1) release is now available.

The CT8 EA1 software is a set of MPI libraries and tools for launching parallel MPI jobs on Solaris (SPARC and x86/x64). New in CT8 EA1 are MPI profiling support via VampirTrace and MPI PERUSE, InfiniBand multi-rail communication, support for C++ applications built with either libstlport4 or the default C++ standard library, as well as other fixes and features contributed to Open MPI by the community.
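
For those curious how tools like VampirTrace hook in: they build on MPI's standard profiling layer, where every MPI call has a PMPI_ twin, so a tool can shadow the MPI_ name and forward to the real entry point. Here is a tiny hypothetical wrapper of my own, using the classic MPI-2 signature, just to show the mechanism; it is not actual VampirTrace code:

    // PMPI sketch: intercept MPI_Send, time it, then forward to the real call.
    // Compile into the app (or a library linked ahead of libmpi) and every
    // MPI_Send in the program gets counted.
    #include <mpi.h>
    #include <cstdio>

    static double send_seconds = 0.0;
    static long   send_calls   = 0;

    // Our MPI_Send shadows the library's; PMPI_Send is the untraced entry.
    int MPI_Send(void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm) {
        double t0 = MPI_Wtime();
        int rc = PMPI_Send(buf, count, type, dest, tag, comm);
        send_seconds += MPI_Wtime() - t0;
        ++send_calls;
        return rc;
    }

    int MPI_Finalize(void) {
        std::printf("MPI_Send: %ld calls, %.6f s total\n",
                    send_calls, send_seconds);
        return PMPI_Finalize();
    }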

CT8 EA1 is based on the upcoming Open MPI 1.3 release.

For what it's worth.

Monday Mar 17, 2008

Open MPI Heterogeneity

For a while now, Open MPI has had heterogeneity support in the source base. Sun HPC ClusterTools 7.1 is based on Open MPI, but the heterogeneity testing did not fare so well on that revision of the source. Big thanks to George and Brian for the work that went into getting the code working, and I am sure many more helped along the way. Fast forward to the current source base, which I have built and installed on some nodes in the lab, Solaris/SPARC and Solaris/x64, and I kicked off the heterogeneity testing again. I am happy to say that the limited set of MPI jobs I kicked off all passed, so I was seeing MPI communication work on the mixed-architecture cluster in the lab. From a ClusterTools perspective this is not officially qualified, but it looks like something we might see in a future release. So what's next? Back to the lab to test out some other configurations like Solaris/SPARC, Solaris/x64, and Linux/x64.
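
The flavor of the testing is easy to show. Below is a made-up example along the lines of what I ran (assuming an Open MPI build configured with --enable-heterogeneous): every rank reports where it lives and its byte order, then rank 0 broadcasts an integer that the big-endian SPARC ranks and the little-endian x64 ranks must all decode to the same value:

    // Heterogeneity smoke test: SPARC (big-endian) and x64 (little-endian)
    // ranks must agree on a broadcast value; Open MPI converts representations.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, len;
        char host[MPI_MAX_PROCESSOR_NAME];
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_processor_name(host, &len);

        unsigned int probe = 1;        // low byte first means little-endian
        const char *endian = (*(unsigned char *)&probe == 1) ? "little" : "big";

        int magic = (rank == 0) ? 0xCAFE : 0;
        MPI_Bcast(&magic, 1, MPI_INT, 0, MPI_COMM_WORLD);

        // Every rank should print 0xcafe regardless of its byte order.
        std::printf("rank %d on %s (%s-endian): magic = 0x%x\n",
                    rank, host, endian, magic);

        MPI_Finalize();
        return 0;
    }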

So why does this matter? Think about places that have several "clusters" sitting around that they have had for a while... each time they built a new cluster it was a different OS or a different architecture... now they can start using the entirety of their resources with a version of Open MPI that lets them kick off large-scale MPI jobs across all of it. Why waste idle CPU cycles when you can put them to use?

For what it's worth.

Wednesday Mar 12, 2008

get CT in a new way

Just a quick update on work: I have been working on getting ClusterTools ready for delivery through Indiana.

For what it's worth.

Friday Feb 15, 2008

Which port?

So the work is done, some by my little coding hands and some by JS; a huge thanks to JS for his work and leadership on this. The Open MPI C++ library is now agnostic, no longer tied to either Cstd or stlport4, so in userland apps one is free to use whichever they would like. There was some sticky code in the previous implementation that used a data structure that did not allow for being agnostic. For those tracking Sun ClusterTools, this will show up in CT8.
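
What this means in practice is that the same MPI app can now be built against either C++ runtime, with the choice made purely at compile time. A contrived example of mine; the compile lines assume Sun Studio's -library switch and are illustrative, not official:

    // Any C++ standard library now works alongside Open MPI.
    // Pick one at build time (illustrative compile lines):
    //   mpiCC -library=stlport4 hello.cpp -o hello
    //   mpiCC -library=Cstd     hello.cpp -o hello
    #include <mpi.h>
    #include <string>
    #include <iostream>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        std::string greeting = "hello from rank";  // std::string from whichever library you chose
        std::cout << greeting << " " << rank << std::endl;
        MPI_Finalize();
        return 0;
    }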

For what it's worth.

Monday Nov 19, 2007

CT 7.1 is here

Sun HPC ClusterTools 7.1 has hit the street...

Sun HPC ClusterTools 7.1 is now available! It is an update release based on Open MPI 1.2.4, and it includes Intel 32- and 64-bit support, improved parallel debugger support, PBS Pro validation, improved memory usage for communication, and other bug fixes contributed by the Open MPI community since Open MPI 1.2 was first released.

For what it's worth.

Thursday Oct 25, 2007

T2 in HPC

Just finished reading a post on using UltraSPARC T2 servers in the HPC space.

For what it's worth.

Wednesday Sep 12, 2007

Shiny Filesystem

Just saw the announcement that Sun is expanding its HPC portfolio with a definitive agreement to acquire the Lustre file system. Pair this with ClusterTools.

For what it's worth.

Friday Sep 07, 2007

ClusterTools ROCKS

Back in '03 I was present as the team from SDSC ROCKed together a fully working cluster in two hours, and a great time-lapse video of the event was made. Well, that was then; so what is happening now? ROCKS continues to move forward and is a great cluster distribution that enables end users to easily build computational clusters, grid endpoints, and visualization tiled-display walls. But for now it is Linux-only. That is all well and good, unless you want a HUGE Solaris cluster and want to use ROCKS. ClusterTools has a nice installer that allows an admin to take an existing node, install CT on it, and get it working in a cluster. A savvy admin can even get the whole CT installation set up in JumpStart. Well, now there is an effort underway to get ROCKS up and working with Solaris, so I am sure that means there will be some changes coming to the CT install to help support this. This is all very exciting, so stay tuned as more details become available that I can share.

For what it's worth.

ClusterTools and DTrace

For those that are interested, Terry just posted a little snippet about some of the DTrace support that will be coming in ClusterTools.

For what it's worth.
