Wednesday Jul 01, 2009

Run an HPC Cluster...On your Laptop

With one free download, you can now turn your laptop into a virtual three-node HPC cluster that can be used to develop and run HPC applications, including MPI apps. We've created a pre-configured virtual machine that includes all the components you need:

  • Sun Studio -- C, C++, and Fortran compilers with performance analysis, debugging tools, and a high-performance math library
  • Sun HPC ClusterTools -- MPI and runtime based on Open MPI
  • Sun Grid Engine -- distributed resource management and cloud connectivity

Inside the virtual machine, we use OpenSolaris 2009.06, the latest release of OpenSolaris, to create a virtual cluster using Solaris Zones technology, and we have pre-configured Sun Grid Engine to manage it so you don't need to. MPI is ready to go as well; everything has been configured in advance.

If you haven't tried OpenSolaris before, this will also give you a chance to play with ZFS, DTrace, Time Slider (like Apple's Time Machine, but without the external disk), and a host of other cool new OpenSolaris capabilities.

For full details on Sun HPC Software, Developer Edition for OpenSolaris, check out the wiki.

To download the virtual image for VMware, go here. (VirtualBox image coming soon.)

If you have comments or questions, send us a note at

Thursday Dec 18, 2008

Fresh Bits: InfiniBand Updates for Solaris 10

Fresh InfiniBand bits for Solaris 10 Update 6 have just been announced by the IB Engineering Team:

The Sun InfiniBand Team is pleased to announce the availability of the Solaris InfiniBand Updates 2.1. This comprises updates to the previously available Solaris InfiniBand Updates 2. InfiniBand Updates 2 has been removed from the current download pages. (Previous versions of InfiniBand Updates need to be carefully matched to the OS Update versions that they apply to.)

The primary deliverable of Solaris InfiniBand Updates 2.1 is a set of updates to the Solaris driver supporting HCAs based on Mellanox's 4th-generation silicon, ConnectX. These updates include the fixes that have been added to the driver since its original delivery, and functionality in this driver is equivalent to what was delivered as part of OpenSolaris 2008.11. In addition, there continues to be a cxflash utility that allows Solaris users to update firmware on ConnectX HCAs. This utility is only to be used with ConnectX HCAs.

Other updates include:

  • uDAPL InfiniBand service provider library for Solaris (compatible with Sun HPC ClusterTools MPI)
  • Tavor and Arbel/memfree drivers that are compatible with new interfaces in the uDAPL library
  • Documentation (README and man pages)
  • A renamed flash utility for Tavor-, Arbel memfull-, Arbel memfree-, and Sinai-based HCAs. Instead of "fwflash", this utility has been renamed "ihflash" to avoid possible namespace conflicts with a general firmware flashing utility in Solaris

All are compatible with Solaris 10 10/08 (Solaris 10, Update 6), on both SPARC and x86.

You can download the package from the "Sun Downloads" A-Z page by scrolling down to, or searching for, the link for "Solaris InfiniBand (IB) Updates 2.1", or alternatively use this link.

Please read the README before installing the updates. It contains both installation instructions and other information you will need to know before running this product.

Please note again that this Update package is for use on Solaris 10 10/08 (Solaris 10, Update 6) only. A version of the Hermon driver has also been integrated into Update 7 and will be available with that Update's release.

Congratulations to the Solaris IB Hermon project team and the extended IB team for their efforts in making this product available!

Monday Jun 23, 2008

ClusterTools 8: Early Access 2 Now Available

The latest early access version of Sun HPC ClusterTools -- Sun's MPI library -- has just been made available for download here. As an active member of the Open MPI community, we continue to build our MPI offering on the Open MPI code base, making pre-compiled libraries freely available and offering a paid support option for interested customers. Wondering why we would base our MPI implementation on Open MPI? Read this.

What is particularly cool about CT 8 is that in addition to supporting Solaris, we've added Sun support for Linux (RHEL 4 & 5 and SLES 9 & 10), including use of both the Sun Studio compilers and tools and GNU C. We've also included a DTrace provider for enhanced MPI observability under Solaris as well as additional performance analysis capabilities and a number of other enhancements that are all detailed on the Early Access webpage.

Open MPI on the Biggest Supercomputer in the World

Los Alamos National Laboratory and IBM recently announced they had broken the PetaFLOP barrier with a LINPACK run on the Roadrunner supercomputer. The Open MPI community, including Sun Microsystems, was proud to have played a role in this HPC milestone. As described by Brad Benton, member of the Roadrunner team, the 1.026 PetaFLOP/s LINPACK run was achieved using an early, unmodified snapshot of Open MPI v1.3 as the messaging layer that tied together Roadrunner's 3000+ AMD-powered nodes. For more details on specific MPI tunables used, read this subsequent message from Brad and this follow-up message from Jeff Squyres, Open MPI contributor from Cisco.

About two years ago, we decided to change Sun's MPI strategy from one of continuing to develop our own proprietary implementation of MPI to instead joining a community-based effort to create a scalable, high-performance, and portable implementation of MPI. We joined the Open MPI community because we felt (and still feel) strongly that combining forces with other vendors and other organizations is the most effective path to creating the middleware infrastructure needed to support the needs of the HPC community into the future.

Sun was the 2nd commercial member to join the Open MPI effort, which at the time consisted of a small handful of research and academic organizations. Two years later, the community has grown substantially.

This mix of academic/research members and commercial members brings together into one community a focus on quality, stability and customer requirements on the one hand, with a passion for research and innovation on the other. Of course, it does also create some challenges as the community works to achieve an appropriate balance between these sometimes opposing forces, but the results to date have been impressive, as witnessed by the use of Open MPI to set a new LINPACK world record on the biggest supercomputer in the world.


Josh Simons

