Thursday Oct 22, 2009
Wednesday Oct 21, 2009
Thursday Oct 15, 2009
By marchamilton on Oct 15, 2009
And while the entire HPC community is focused on the upcoming SC09 conference right now, be sure to save the dates for the other major international supercomputing show, ISC10, starting May 31 in Hamburg, Germany. Meanwhile, check out the following video by my favorite movie star, Sun EVP John Fowler, talking about the F5100.
Sunday Oct 11, 2009
By marchamilton on Oct 11, 2009
The Gospel of Java according to James.
Oracle Press Release: Oracle and Sun Are Faster Than IBM: Proof Now Available
Go to the OpenWorld Live site.
Tuesday Sep 29, 2009
By marchamilton on Sep 29, 2009
For the first time ever, the Sun Datacenter InfiniBand Switch 72 lets you aggregate up to 72 QDR 4x ports into a single 1RU switch, and multiple switches can be combined to cost-effectively link up to 576 servers. For larger clusters, the Sun Datacenter InfiniBand Switch 648 supports up to 648 servers with a single switch, or up to 5184 servers in a full non-blocking fabric using eight 648-port switches.
This week's Forbes article, Supercomputing in the Enterprise, talks about how non-HPC workloads, including server virtualization and consolidation, are increasingly adopting HPC technologies like high performance fabrics. While many HPC customers are using large high performance fabrics based on the Sun Datacenter Switch 648 to solve problems that simply couldn't be solved a few years ago, commercial customers increasingly need high performance fabrics just to keep up with advances in multi-core CPU technology.
Let's say that last year you purchased a Sun Blade x6250 server module with two dual-core x86 CPUs. That was a total of four processor cores and might easily have required four GigE connections. We do sell a quad GigE network card, but fast forward to today and the same x6250 blade module is equipped with quad-core CPUs, for a total of eight processor cores. Assuming the same network bandwidth per core, that would mean eight GigE connections. Opt instead for the Sun Blade x6275 server module and you get 16 processor cores per server module. If you filled a rack with four Sun Blade 6000 chassis, each full of x6275 blades, and provided a GigE connection for each core, you would need an astounding 640 Ethernet cables coming out of the back of the rack. It is no wonder that HPC and commercial customers alike are looking to high performance fabrics that offer cable aggregation and other benefits.
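To make the cable arithmetic concrete, here is a minimal back-of-the-envelope sketch. The one-cable-per-core assumption comes straight from the example above; the modules-per-chassis figure, the one-QDR-link-per-node alternative, and the 3-in-1 bundling factor are my own simplifications for illustration:

```python
# Back-of-the-envelope cable counts for one rack of Sun Blade 6000 chassis
# filled with x6275 blades. The per-chassis module count and the
# one-QDR-link-per-node alternative are illustrative assumptions.
chassis_per_rack = 4
modules_per_chassis = 10          # assumed Sun Blade 6000 slot count
cores_per_module = 16             # dual-node x6275, quad-core CPUs
nodes_per_module = 2

# One GigE cable per core, as in the example above:
gige_cables = chassis_per_rack * modules_per_chassis * cores_per_module
print(f"GigE cables per rack: {gige_cables}")          # 640

# Alternative: one QDR InfiniBand 4x link per node, bundled by the fabric
# into 3-in-1 12x cables:
ib_links = chassis_per_rack * modules_per_chassis * nodes_per_module
ib_cables = -(-ib_links // 3)                           # ceiling division
print(f"12x IB cables per rack: {ib_cables}")           # ~27
```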
So whether you are looking to build a high performance fabric out of InfiniBand or 10 GigE, Sun has products and solutions that can help you today.
Friday Sep 11, 2009
By marchamilton on Sep 11, 2009
CHPC's data center integrates three high-efficiency cooling units, which operate in an N+1 redundant fashion. The center can operate on a single cooling unit under most conditions, with the second unit providing additional capacity for peak operations or hot summer days. Meanwhile, the third unit provides redundancy should either of the primary units fail.
While most of Jülich's plumbing and water pumps were under their data center, CHPC was able to locate theirs outside, next to the cooling units.
And finally, to close out my blog series on CHPC, a look at Cape Town's Table Mountain, one of the many reasons why locals call Cape Town the most beautiful city in the world.
By marchamilton on Sep 11, 2009
Tuesday Sep 08, 2009
By marchamilton on Sep 08, 2009
Just a few short months ago, Sun launched the first of our second generation Sun Constellation Systems at the Jülich Supercomputer Center in Germany. Since then, similar Sun Constellation System supercomputers have been installed in North and South America, in Australia, and in Asia. Today's launch at CHPC brings Open Petascale Computing to a sixth continent, delivering state-of-the-art supercomputing to Africa.
At each Sun Constellation System launch I attend, as I meet with government, education, and industry executives, I hear a common theme: the growing need to compute to compete. While we didn't compare notes prior to the launch, Minister Pandor used the phrase "outcompute to outcompete" in reference to her government's investment in CHPC as well as South Africa's ongoing competition with Australia to host the Square Kilometer Array project.
CHPC is quite proud of their green IT efforts, and like the Sun Constellation System I described in my blog Jülich Underground, CHPC uses Sun's Glacier passive water-cooled doors to save 25% or more of the electricity normally used to cool a data center, a first, as far as we know, for any African data center. Ironically, CHPC is built on the site of an old ocean fish research lab, and several large walk-in freezers were removed in the renovation of the old lab into data center space. And while they won't freeze fish, the Sun Constellation System's Glacier doors are definitely more efficient at data center cooling than the old freezers. Of course, Sun Constellation System technology is not limited to HPC clusters. Having read in the local paper this morning a report estimating that South Africa will have to add 20 gigawatts of electrical generation capacity by 2020, I couldn't help but remark to Minister Pandor how much of that could be avoided if all data centers in South Africa used similar passive cooling technology.
Besides green IT, another major thrust of CHPC is the use of open source software. Their new Sun Constellation System uses Sun HPC Software, Linux Edition, including the latest versions of Sun Grid Engine and the Sun Lustre File System.
CHPC also followed the lead of other Sun HPC customers such as HPCVL in Canada and installed a Sun M9000 system with 256 UltraSPARC CPU cores and 2 TB of memory, which will be used to run large shared-memory codes for computational fluid dynamics research as well as other research requiring a large shared-memory system. Running the open source Solaris operating system, the M9000 will give users an HPC environment similar to the Sun Constellation System's, including Sun Grid Engine and NFS access to the Lustre file system.
Anyone who thinks Africa isn't ready for world class HPC had better think again!
Monday Aug 31, 2009
By marchamilton on Aug 31, 2009
If you want to learn more about how this image was collected and processed, visit NASA's Earth Observatory site. For up-to-date information on the fire, I have been following the LA Times web site, which is doing an excellent job of covering the blazes.
NASA image by Jeff Schmaltz, MODIS Rapid Response Team, Goddard Space Flight Center.
Wednesday Aug 05, 2009
By marchamilton on Aug 05, 2009
For those who are not familiar with eSolar, it is a Bill Gross startup from Idealab, described in more detail in today's Los Angeles Times article.
Knowing that I had a rare week with no travel, I decided last weekend to do a little father-son project and install my own small solar power system, thinking at first it would be just the thing to run my teenage son's various portable devices and perhaps the small aquarium in his room. I had been eyeing the Sunforce 60-watt solar charging kit at Costco on my last several visits and decided to take the plunge (Amazon.com carries the kit at a similar price, with free shipping and six months interest-free if you use your Amazon charge card, although that didn't work for my weekend impulse buy). The kit includes everything you need for a small off-grid system except a battery, including all cables, a battery charge controller, and a 200 watt inverter. With the usual caveats, you should even be able to claim the 30% US tax credit for the kit.
So our Saturday started with assembling the PVC frame for the four solar panels in our garage. Unfortunately, the factory-sealed box was missing one of the PVC pieces needed to complete the frame, and I had visions of the rest of the day spent in Costco's return line trying to get an exchange. Luckily, when I returned around noon, a Costco manager was able to help me promptly, taking the missing piece out of another box, and I was on my way in less than five minutes. Now that was a pleasant surprise! On my way out, I stopped by Costco's automotive center (separate entrance, separate cashier, no waiting) and purchased a deep-cycle marine battery.
The 200 watt inverter is plenty to run my laptop, cell phone charger, and the small aquarium pump. With no travel and no in-office meetings, I have not had to use my car for work this week, further lessening my carbon footprint. My only complaint is that the 200 watt inverter has a small, noisy fan that runs continuously when turned on, so next weekend's project is to replace it with a larger, 1500 watt inverter that has a thermostatically controlled fan.
Knowing just enough about data center Power Usage Effectiveness (PUE) to sound impressive at cocktail parties, I realized that running the AC inverter only to have my laptop power adapter convert back to DC was not efficient. So as my Mac laptop battery approaches a full charge, I've been unplugging the inverter and plugging my laptop DC adapter directly into the battery power cable, which avoids an energy-burning DC-AC-DC cycle. The one lonely fish that has survived my son's mostly failed attempts at fish care over the last year doesn't mind that its water is only filtered part of the day!
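As a rough illustration of why cutting out the AC round trip helps, here is a minimal sketch; the efficiency figures are assumed values for a small consumer inverter and laptop power adapters, not measurements:

```python
# Illustrative round-trip efficiency of powering a laptop from the battery
# via the AC inverter versus plugging a DC adapter straight into the battery.
# All efficiency values below are assumptions, not measured numbers.
inverter_eff = 0.85      # battery DC -> 120V AC
ac_adapter_eff = 0.85    # 120V AC -> laptop DC
dc_adapter_eff = 0.90    # battery DC -> laptop DC directly

via_ac = inverter_eff * ac_adapter_eff
print(f"DC -> AC -> DC path delivers roughly {via_ac:.0%} of the battery energy")
print(f"Direct DC path delivers roughly {dc_adapter_eff:.0%}")
```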
Sunday Jun 21, 2009
By marchamilton on Jun 21, 2009
Hamburg promised even better running around the beautiful Binnenalster, a lake formed by the river that runs through town. The jogging path around the lake is a little less than 10K and easily accessed from my hotel, Le Méridien. With the next few days packed with customer meetings and International Supercomputing Conference events, I'll need every bit of the nearly 20 hours of daylight right now to get in my daily run.
If you're in town this morning, don't miss Andy Bechtolsheim's presentation at the Sun HPC Consortium on Sun's HPC roadmap.
Saturday Jun 20, 2009
By marchamilton on Jun 20, 2009
One of my favorite business hotels in central London is the Sheraton Park Lane. It is conveniently located across from London's Green Park and plenty of lovely running paths. Exiting the Sheraton onto Piccadilly, cross the street to Green Park and head counter-clockwise. Halfway around the park you will reach Buckingham Palace; cross the street in front of the Palace to get to St. James's Park. My typical jet-lag recovery run continues counter-clockwise for a loop around St. James's Park, then a reverse clockwise loop, then back across to Green Park, continuing counter-clockwise to the starting point. Depending on your pace, it's an easy 30-40 minute run, perfect to get you ready for a sleep-deprived day of customer meetings.
The next key to avoiding jet lag is to resist the temptation to skip dinner and go to sleep early. The perfect remedy is just a short cab ride away at Ozer's Turkish Restaurant. While the 11-course "Healthy Meal" special may not sound healthy from the title, it is an absolutely wonderful tasting-size assortment of Turkish specialities with just the right amount of spice to make you forget sleep for a few more hours. For those who prefer, there is a vegetarian version of the healthy meal as well, although I quite enjoyed the marinated, grilled chicken and lamb courses of the standard meal.
Tomorrow, I'll share my Hamburg running tips.
Monday Jun 08, 2009
By marchamilton on Jun 08, 2009
This three-day workshop features three tracks with Sun and customer presentations around Sun Grid Engine, Open Storage (including Lustre and SAM-QFS), and software tools such as Sun Studio and Sun HPC ClusterTools. Talks will range from general overviews to very detailed engineering topics. At several points in the schedule, the three tracks will merge into a single track for plenary sessions of general interest; for example, a detailed talk on Sun's HPC stack would be a plenary session. The conference will begin at 9am on Tuesday and end at 1pm on Thursday.
Friday Jun 05, 2009
By marchamilton on Jun 05, 2009
Thursday May 28, 2009
By marchamilton on May 28, 2009
Running a PetaFlop BlueGene system, a 2000 node Sun cluster, and a 1000 node Bull cluster takes a lot of cooling water. The facilities work that goes into a modern HPC data center is an absolutely amazing act of mechanical engineering in and of itself, never mind the computers.
If you look closely you will see the chilled water connector from the Sun Constellation System rear cooling door at the bottom right of the rack. A flexible hose connects it to the under-floor chilled water supply. There is also a top-of-door connector (not shown) for customers who run chilled water pipes above the racks.
Here is another shot of the under-floor piping.
Sun also offers a gas refrigerant cooling door option for the Sun Constellation System. The gas refrigerant door requires smaller-diameter piping to the racks and can have other advantages, although it does require an external unit, which can sit outside the computer room, typically with its own chilled water heat exchanger. Sun's data center design team can help design the ideal cooling system for your Sun Constellation System.
Tuesday May 26, 2009
By marchamilton on May 26, 2009
We believe there will be many different types of clouds, including public clouds like Amazon's offerings and the Sun Cloud, enterprise clouds, clouds run by service providers, and other hybrid offerings.
My group at Sun, besides selling large HPC systems, is responsible for helping customers build enterprise clouds using Sun's technology. Many of our customers are starting down the path of building enterprise clouds, though most are not ready to talk about it in public. So I was very excited to read about NASA's enterprise cloud, called Nebula, and how it uses the Sun Lustre file system as a key part of its cloud architecture. The Nebula web site gives a detailed description of their Lustre implementation.
My group has, of course, worked on most of the large Lustre deployments done around the world, including many on non-Sun hardware. One thing we realized is that not everyone has rocket scientists on staff, and even those who do don't always want to spend their time custom-designing Lustre storage systems. So to help HPC and enterprise cloud customers simplify and accelerate the deployment of Lustre, we have created the Sun Lustre Storage System. Scaling from 1 to over 100 GB/sec, the modular architecture of the Sun Lustre Storage System makes it easy for anyone to deploy Lustre.
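As a rough sketch of what that modular scaling looks like in practice, the sizing below assumes a hypothetical per-building-block throughput; it is not a published Sun Lustre Storage System specification:

```python
# Rough Lustre sizing sketch: how many OSS building blocks are needed to
# reach a target aggregate bandwidth, assuming roughly linear scaling.
# The per-module throughput is a hypothetical placeholder value.
import math

def oss_modules_needed(target_gb_per_sec, gb_per_sec_per_module=1.5):
    """Return the number of OSS building blocks for a target bandwidth."""
    return math.ceil(target_gb_per_sec / gb_per_sec_per_module)

for target in (1, 10, 100):
    print(f"{target:>3} GB/sec -> ~{oss_modules_needed(target)} OSS modules")
```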
In the coming days, you will be hearing a lot more about the Sun Cloud, and many other Sun technologies being used in our cloud deployments.
By marchamilton on May 26, 2009
Here is a good view of three of the six Sun Magnum QDR switches at Jülich. Each switch has 648 QDR IB ports exposed as 216 CXP 12x connectors.
I won't show pictures of the other vendor's IB rack, but just think about this with three times as many cables.
Of course, the cabling gets even more challenging under the floor.
But it sure looks nice when you are all done.
And here is the team to thank for all that hard work; yours truly was just there for the photo op, as I have to admit I didn't help with any of this.
Monday May 25, 2009
By marchamilton on May 25, 2009
The Jülich system also features a Sun Lustre Storage System directly connected to its InfiniBand network, using multiple Lustre Object Storage Servers (OSS) to provide high-speed, parallel access to a large single-namespace filesystem that is easily expandable to petabytes of storage and tens or even hundreds of GB/sec of storage bandwidth (Oak Ridge National Labs has achieved over 200 GB/sec on their Sun Lustre system).
One unique feature of the Jülich system is its InfiniBand fabric, built with Sun and Mellanox QDR switches. Besides the 2000-node Sun Constellation System using Sun Magnum QDR switches, the Jülich QDR fabric also supports a 1000-node Bull cluster using Mellanox QDR switches. While both the Sun and Bull supercomputers are built out of 2-socket Intel Nehalem compute nodes, the physical size and complexity of the systems stand in stark contrast. Using regular 4x IB cables to connect to the Mellanox switches, the Bull cluster, despite having only half the number of compute nodes, requires more cables than the Sun Constellation System with its 3-in-1 12x cables. In addition, the Sun Constellation System racks require no internal cables to connect the compute nodes to the built-in "QNEM", the world's first in-chassis QDR leaf switch. While most Sun Constellation Systems use the QNEM to build a fully connected "fat tree" IB fabric, the QNEM also supports mesh and 3D torus IB fabrics, the latter being used in a Sun Constellation System being deployed at Sandia National Labs in the US.
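To put rough numbers on the cabling difference, here is a simplified sketch; it counts only one host-side 4x link per node versus 3-in-1 12x cables and ignores switch-to-switch links and topology details, so it is illustrative rather than an exact Jülich cable count:

```python
# Simplified host-link cable comparison: one 4x IB cable per node versus
# bundling three 4x links into each 12x cable. Switch-to-switch cabling
# and fat-tree topology details are ignored, so these are rough figures.
sun_nodes = 2000     # Sun Constellation System compute nodes
bull_nodes = 1000    # Bull cluster compute nodes

bull_cables = bull_nodes                  # one 4x cable per node
sun_cables = -(-sun_nodes // 3)           # ceil(2000 / 3) 12x cables

print(f"Bull cluster, 4x cables:       {bull_cables}")
print(f"Sun Constellation, 12x cables: {sun_cables}")  # fewer despite 2x nodes
```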
Bull does a good job of packing 72 of their Nehalem compute nodes into a single rack, but once you count their IB racks, the Bull cluster still requires almost twice the floor space of the Sun Constellation System, which sports 96 compute nodes in each rack.
Jülich chose Sun's new water-cooled rear door option for the Sun Constellation System, greatly simplifying the cooling design of their data center. Depending on exact CPU and memory configuration, Sun Constellation System racks can require 30-40 KW of cooling per rack, which requires some sort of supplemental cooling. Sun provides both water-cooled and refrigerant gas cooled rear door options for Sun Constellation System racks. This approach has advantages over in-row or top-of-rack supplemental cooling systems in that no supplemental fans are required; air is moved through the cooling doors using only the blade chassis's built-in fans. The supplemental fans in in-row and top-of-rack systems are often left out of customers' power-usage calculations. Sun's Data Center Efficiency practice can help customers design more efficient data centers, whether it is an entirely new, ground-up data center or a retrofit of an existing one.
Well, it is time to head off to the grand opening ceremonies; I'll be back afterwards with more of the story.
Wednesday May 13, 2009
By marchamilton on May 13, 2009
Want to hear more about Sun's latest plans for Lustre, InfiniBand, and GPGPU technology? June is a wonderful month to visit Germany and the HPC Consortium user group meeting in Hamburg promises to bring you updates in all these areas. There is still time to register at the reduced early bird registration fee. I hope to see many of you there.
Thursday May 07, 2009
By marchamilton on May 07, 2009
So at least in my trivial example, the killer app for cloud computing is the one I already use - OpenOffice. I don't have to take any extra steps to drag files between folders, I just use save to cloud and open from cloud instead of save and open.
One could easily expand this notion to almost any enterprise software. Do you want to buy special cloud backup software for your database or do you just want a "backup to cloud" button for the database you already use?
I'm not saying there isn't room for innovation in apps like Dropbox, and ultimately the "front end" of clouds and the "back end" are not necessarily linked. That is why Sun is promoting a set of Open Cloud APIs, so that in the future a company like Dropbox can decide to focus on innovating on the front end/client and simply use an existing cloud back end, without getting locked into that back end.
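To make the front-end/back-end split concrete, here is a hedged sketch of what a "save to cloud" call could look like against a generic RESTful object store; the endpoint URL, bucket name, and token are hypothetical placeholders, not the actual Sun Open Cloud API:

```python
# Hypothetical "save to cloud" sketch against a generic RESTful object store.
# The endpoint, bucket, and auth token are placeholders, not the real
# Sun Open Cloud API; the point is that a front end only needs HTTP + a URL.
import urllib.request

def save_to_cloud(local_path, object_name,
                  endpoint="https://cloud.example.com/storage/mybucket",
                  token="PLACEHOLDER_TOKEN"):
    """Upload a local file to the (hypothetical) cloud storage back end."""
    with open(local_path, "rb") as f:
        data = f.read()
    req = urllib.request.Request(
        url=f"{endpoint}/{object_name}",
        data=data,
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (hypothetical): save_to_cloud("report.odt", "report.odt")
```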
Saturday May 02, 2009
By marchamilton on May 02, 2009
Well, since I can't talk about our latest Linpack results, I thought I would share my world airport tips.
SFO is simply the best for international connections, now that the walkway from Terminal 3 (United) to the International terminal is complete. Now if only the United flight attendants would update their script to not confuse people with, "please take the shuttle to the international terminal". Due to a late departure from LAX, I had only 30 minutes to make my first connection, luckily my plane pulled up to SFO gate 75, less than a five minute walk to my international gate.
United to Lufthansa connections can require quite a bit longer walk in Frankfurt, and for some reason the airlines are determined to make it just plain hard to get to India. Well, I guess it might have something to do with geography too. Six hours in Frankfurt was plenty of time to sample both the United Red Carpet Club lounge as well as the Lufthansa lounge. But six hours is a long time to spend in an airport, no matter how many lounges you visit.
Pune, India, is an interesting city, especially when you arrive at 3:30 am, one of the few times the streets are uncrowded. With 600 new car registrations every day, and no new roads, well, you get the picture. Tata Motors is based in Pune, as I am sure thousands of Tata's new Nanos will be before long. Officially the Nano is a 4-passenger vehicle, but given that I've seen more passengers on an Indian motorcycle, I expect the occasional Nano will be found with 5 or more passengers.
Mumbai (Bombay) was destined to be another six-hour layover on my way out of the country. Privatization is greatly improving service at India's airports. Sometimes too much. The Jet Airways staff seemed so proud to provide a modern "kneeling" airport bus to transport us no more than 20 yards from our plane's parking spot to the terminal. Then came the dreaded domestic-to-international terminal transfer bus. Just a few years ago, said "bus" resembled a pre-WWII relic of engineering. While today it's a modern bus with air conditioning, it was still a nearly one-hour wait followed by a slow 45-minute crawl across the airfield, including what seemed like a 10-minute standoff with an Airbus A320 before the driver finally went around via what appeared to be an illegal shortcut.
No surprise, given its history near the center of the SARS and avian (H5N1) flu outbreaks, Singapore already had its thermal imaging monitors out, scanning all incoming travelers for telltale signs of the new H1N1 flu. While it was definitely my shortest visit to Singapore at less than 24 hours, the Sun Singapore team was as efficient as ever, having organized a great HPC Symposium for about fifty customers from across Asia South.
Singapore Airlines deserves special mention for making my 11 hour flight to Paris as restful as an 11 hour flight can be. I stepped off the plane at 6:30 am and about half a dozen customer meetings later stepped into my hotel room at about 9 pm and collapsed. Luckily I had a late start the next morning and felt recovered after my first full night of sleep of the week. I sincerely thank Europe's PRACE Project for extending their vendor briefings an additional day to meet with me.
Having enjoyed favorable tail winds most of the week, it was now time for payback, and my flight back to Chicago, at over 9 hours, ran considerably late. At the risk of spreading one of the best kept secrets of international travel, the US Global Entry program got me past a huge line of several hundred travelers and through immigration and customs, and despite United's text message that I had been rebooked, I had glimmers of hope of making my final flight home. Alas, the train from Chicago's terminal 5 to terminal 2 was even more crowded, as was the security line, and I missed my connection. Luckily, United had already rebooked me on a flight only 90 minutes later, and I was soon home.
Next week, I'm staying in California. Maybe even for two weeks :)
Saturday Apr 25, 2009
Thursday Apr 23, 2009
By marchamilton on Apr 23, 2009
Nigel yesterday highlighted AMD's most energy efficient processor to date, their latest quad-core, 40 watt processor. Nigel's blog linked to plenty of benchmarks, and that is really important. Anyone can build a low power processor, and I'm always amazed by naive remarks that look simply at processor wattage. It is, of course, the performance per watt that counts, and don't forget to throw in other power hungry components like memory. Many servers today consume more power in the memory subsystem than in their CPUs, and different memory types have different power consumption. Anyhow, Nigel touted the new 40 watt processor as ideal for scale-out cloud computing deployments, and I'll add, deployments like the Sun Cloud.
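Here is a minimal sketch of that perf-per-watt comparison, counting memory power as well; the throughput scores, the 75 watt comparison part, and the per-DIMM wattage are made-up placeholders, not benchmark results:

```python
# Illustrative performance-per-watt comparison that also counts memory power.
# All numbers below are made-up placeholders, not benchmark results.
def perf_per_watt(score, cpu_watts, dimms, watts_per_dimm=5):
    """System-level score per watt, counting CPU and memory power only."""
    return score / (cpu_watts + dimms * watts_per_dimm)

low_power_part = perf_per_watt(score=100, cpu_watts=40, dimms=8)
standard_part  = perf_per_watt(score=140, cpu_watts=75, dimms=8)

print(f"40 W part: {low_power_part:.2f} score/W")
print(f"75 W part: {standard_part:.2f} score/W")
```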
AMD announced much more than just a new low power processor, however. The most exciting thing for me was news on their upcoming six-core Istanbul processor. I've embedded AMD's presentation below so you can read about Istanbul for yourself. Istanbul is socket compatible with existing AMD Barcelona series processors, so I would expect to see Istanbul showing up as an option in most existing AMD based servers. However, as HPC users know, a faster processor alone isn't enough. With faster processors you will need faster storage like our Sun Lustre Storage System as well as faster InfiniBand and 10GbE networking.
And of course, whenever I mention one of our x86 CPU partners, the other one asks for equal time, so be sure to check out blogs.intel.com too.
Thursday Apr 16, 2009
By marchamilton on Apr 16, 2009
If you have not visited lately, check out the new and improved Lustre.org site where you can find the latest Lustre roadmap, an updated Lustre hardware support matrix, and a number of different white papers and training material.
Tuesday Apr 14, 2009
By marchamilton on Apr 14, 2009
Of course, as interesting as the discussion was, what really makes an HPC product is its performance, and our new Open Network Systems come complete with benchmark proof points that these new systems are absolutely the best HPC servers in the world using Intel's new Xeon 5500 (aka Nehalem) CPUs. While every major vendor has announced Nehalem-based systems, only Sun:
Needless to say, everyone in the Sun HPC community is excited about these products. The new Sun Constellation System components are being delivered today, and thanks to the ease of use of our Sun HPC Software Stack, you can start gaining immediate productivity improvements from your new system today!