Monday Jan 25, 2010

Sun Microsystems Alumni

I did a double take yesterday when I received a colleague's invitation to join the Sun Microsystems Alumni Facebook group. Hey, I thought, I haven't left Sun! But then I realized we are all soon leaving Sun one way or another.

Friday Jan 15, 2010

Rest in Peace

Marguerite Handfield Simons
09/17/1937 - 09/11/2009
Julie Simons Droney
10/27/1967 - 12/06/2009

It has been an especially bad time for my family over the last few months with the loss of both my mother and my sister. Thank you everyone for your support.


Barbie's Next Career

While I don't follow her myself, I'm told Barbie has had over 120 "careers" since her introduction in 1959. Well, it is time for her to choose another, and Mattel wants to hear from you. Please vote for Computer Engineer Barbie! That is clearly much cooler than any of the other choices offered. Vote here.

Igniting the Earth's Atmosphere

As part of background research for a blog entry I'm working on, I went looking for the name of the Manhattan Project scientist who was tasked with calculating whether an atomic detonation could ignite the Earth's atmosphere and burn everyone on the planet to cinders. His name was Hans Bethe and he apparently concluded the bomb would not ignite the atmosphere. But according to the Wikipedia article on the Manhattan Project, Edward Teller co-authored a paper that also examined this question.

That paper, Ignition of the Atmosphere with Nuclear Bombs, was declassified in the 1970s and it is available as a PDF for your perusal here. I recommend reading the Abstract on Page 3 and the three concluding paragraphs on Page 18. The final paragraph, which I hereby nominate as a monumental understatement, reads as follows:

"One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable."

Apparently, the "satisfactory experimental foundations" were achieved at Trinity site. Had that gone wrong, it would have brought an entirely new meaning to the term "test coverage."

[This just gets worse: As my friend Monty points out, the paper is dated August 1946. The Trinity detonation occurred a year earlier, in July 1945.]


Virtualization for HPC: The Heterogeneity Issue

I've been advocating for a while now that virtualization has much to offer HPC customers (see here). In this blog entry I'd like to focus on one specific use case: heterogeneity. It's an interesting case because heterogeneity is either desirable or something to be avoided, depending on your viewpoint, and virtualization can help in either case.

The diagram above depicts a typical HPC cluster installation with each compute node running whichever distro was chosen as that site's standard OS. Homogeneity like this eases the administrative burden, but it does so at the cost of flexibility for end-users. Consider, for example, a shared compute resource like a national supercomputing center or a centralized cluster serving multiple departments within a company or other organization. Homogeneity can be a real problem for end-users whose applications run only on other versions of the chosen cluster OS or, worse, on completely different operating systems. These users are generally unable to use such centralized facilities unless they can port their application to the appropriate OS or convince their application provider to do so.

The situation with respect to heterogeneity is quite different for software providers, or ISVs (independent software vendors). These providers have been wrestling with the expense and other difficulties of heterogeneity for years. For example, while ISVs typically develop their applications on a single platform (OS 0 above), they must often port and support those applications on several operating systems to address the needs of their customer base. Even assuming an ISV correctly decides which operating systems to support to maximize revenue, it must still incur considerable expense to continually qualify and re-qualify its application on each supported operating system version, and it must maintain a complex, multi-platform testing infrastructure and the in-house expertise to support these efforts.

Imagine instead a virtualized world, as shown above. In such a world, cluster nodes run hypervisors on which pre-built and pre-configured software environments (virtual machines) are run. These virtual machines include the end-user's application and the operating system required to run that application. So far as I can see, everyone wins. Let's look at each constituency in turn:

  • End-users -- End-users have complete freedom to run any application using any operating system because all of that software is wrapped inside a virtual machine whose internal details are hidden. The VM could be supplied by an ISV, built by an open-source application's community, or created by the end-user. Because the VM is a black box from the cluster's perspective, the choice of application and operating system need no longer be restricted by cluster administrators.
  • Cluster admins -- In a virtualized world, cluster administrators are in the business of launching and managing the lifecycle of virtual machines on cluster nodes and no longer need to deal with the complexities of OS upgrades, configuring software stacks, handling end-users' special software requests, etc. Of course, a site might still opt to provide a set of pre-configured "standard" VMs for end-users who do not need the flexibility of providing their own VMs. (If this all sounds familiar -- it should. Running a shared, virtualized HPC infrastructure would be very much like running a public cloud infrastructure like EC2. But that is a topic for another day.)
  • ISVs -- ISVs can now significantly reduce the complexity and cost of their business. Since ISV applications would be delivered wrapped within a virtual machine that also includes an operating system and other required software, ISVs would be free to select a single OS environment for developing, testing, AND deploying their application. Rather than basing their operating system choice on market share considerations, the decision could be made based on the quality of the development environment, or perhaps the stability or performance levels achievable with a particular OS, or perhaps on the ability to partner closely with an OS vendor to jointly deliver a highly-optimized, robust, and completely supported experience for end-customers.
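To make the VM-per-job idea above a bit more concrete, here is a minimal sketch (Python, standard library only) of how a cluster launcher might generate a libvirt-style domain definition for a user-supplied VM image. Everything here is illustrative, not part of any real cluster stack: the `make_domain_xml` helper, the image path, and the resource numbers are all hypothetical, and a real definition would carry many more elements.

```python
import xml.etree.ElementTree as ET

def make_domain_xml(name, image_path, memory_mb=2048, vcpus=4):
    """Build a minimal libvirt-style domain definition for a
    user-supplied VM image (hypothetical sketch, not the full schema)."""
    domain = ET.Element("domain", type="kvm")
    ET.SubElement(domain, "name").text = name
    ET.SubElement(domain, "memory", unit="MiB").text = str(memory_mb)
    ET.SubElement(domain, "vcpu").text = str(vcpus)
    devices = ET.SubElement(domain, "devices")
    # The disk is the user's black-box VM image; the cluster never
    # looks inside it, so the guest OS is entirely the user's choice.
    disk = ET.SubElement(devices, "disk", type="file", device="disk")
    ET.SubElement(disk, "source", file=image_path)
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    return ET.tostring(domain, encoding="unicode")

# A scheduler would hand XML like this to the hypervisor on whichever
# compute node it picked; "user-job-42" and the path are made up.
xml = make_domain_xml("user-job-42", "/images/user42/app.img")
print(xml)
```

The point of the sketch is the division of labor: the site controls the outer definition (name, memory, CPU count, placement), while the contents of the image remain opaque to the cluster.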

Thursday Jan 14, 2010

Sun Grid Engine: Still Firing on All Cylinders

The Sun Grid Engine team has just released the latest version of SGE, humbly called Sun Grid Engine 6.2 update 5. It's a yawner of a name for a release that actually contains some substantial new features and improvements to Sun's distributed resource management software, among them Hadoop integration, topology-aware scheduling at the node level (think NUMA), and improved cloud integration and power management capabilities.

You can get the bits directly here. Or you can visit Dan's blog for more details first. And then get the bits.


About

Josh Simons
