Thursday Oct 22, 2009

This Flash Will Hold A Lot Of Pictures

Ever run out of space on your digital camera's flash card? Well, this flash won't quite fit in your camera, but at 1.92 TB in 1RU, my good friends at SmugMug will be using their new flash, a Sun Storage F5100 Flash Array, to store a lot of pictures. Just how fast is it? How long would it take to store all the pictures from 1,000 2GB digital camera flash cards onto the F5100? Let's just say SmugMug CEO Don MacAskill called it "crazy fast" in his Twitter post. I guess that is a technical term in the photographic industry. All kidding aside, Don, who is clearly an avid photographer, has also been an avid user of Sun technology over the years, and writes about both in his blog. I look forward to hearing how SmugMug is using their new F5100 in the coming weeks.

Wednesday Oct 21, 2009

The Pitfalls of Organic Gardening

I know my gardening friends on the East Coast will have no pity for my late fall challenges.

Thursday Oct 15, 2009

SSDs Are So Yesterday

Some people are still touting Solid State Disks (SSDs) as hot new technology. Meanwhile, many High Performance Computing (HPC) customers have moved on to more innovative uses of flash technology. Come hear what some of our customers are doing with Sun's flash technology, like the Sun Storage F5100, at next month's HPC Consortium meeting immediately prior to the SC09 conference in beautiful Portland, Oregon. Impatient? Don't worry, some Sun HPC customers who participated in early testing of the F5100 already have great things to say about it. Customers like Chuck Sears, Manager of Research Computing at Oregon's very own College of Oceanic and Atmospheric Sciences, Oregon State University, and Don Thorp, Production Systems, San Diego Supercomputer Center, share their experiences with the F5100 in HPC environments.

And while the entire HPC community is focused on the upcoming SC09 conference right now, be sure to save the dates for the other major international supercomputer show, ISC10, starting May 31 in Hamburg, Germany. Meanwhile, check out the following video by my favorite movie star, Sun EVP John Fowler, talking about the F5100.

Sunday Oct 11, 2009

Oracle OpenWorld Live

Catch Scott and Larry in the opening keynote, Sunday at 5:45 pm PT on OpenWorld Live. But don't have too much fun tonight, get some rest and join Oracle's Judson Althoff Monday morning for a 5K partner run through the streets of San Francisco, meet Sun & other partners at 5th & Howard at 7 am.

The Gospel of Java according to James.

Oracle Press Release Oracle and Sun Are Faster Than IBM: Proof Now Available

Go to the OpenWorld Live site.

Tuesday Sep 29, 2009

High Performance Fabrics - Not Just For HPC Anymore

Traditional High Performance Computing (HPC) customers continue to rapidly adopt the latest 40 Gb/sec QDR InfiniBand technology delivered by the Sun InfiniBand Switch 648. Recently, Oracle put their weight behind Sun's QDR technology in Exadata v2, incorporating the Sun Datacenter InfiniBand Switch 36. Today, Sun further expands our family of high performance fabric offerings with the Sun Datacenter InfiniBand Switch 72.

For the first time ever, the Sun Datacenter InfiniBand Switch 72 lets you aggregate up to 72 QDR 4x ports into a single 1RU switch, and multiple switches can be combined to cost effectively link up to 576 servers. For larger clusters, the Sun InfiniBand Switch 648 supports up to 648 servers with a single switch, or up to 5184 servers in a full non-blocking fabric using eight 648-port switches.
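As a quick back-of-the-envelope check, the quoted maxima line up with simple port arithmetic. This is just my reading of the figures above, not an official Sun configuration guide:

```python
# Quick arithmetic behind the quoted fabric sizes. Illustrative only;
# real fabric design also depends on topology and oversubscription.

def fabric_servers(ports_per_switch: int, num_switches: int) -> int:
    """Total server-facing ports when combining identical switches."""
    return ports_per_switch * num_switches

# Sun Datacenter InfiniBand Switch 72: 72 QDR 4x ports in 1RU;
# eight of them cover the quoted 576 servers
print(fabric_servers(72, 8))    # 576

# A single Sun InfiniBand Switch 648 handles 648 servers directly...
print(fabric_servers(648, 1))   # 648

# ...and eight of them form the quoted 5184-server non-blocking fabric
print(fabric_servers(648, 8))   # 5184
```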

This week's Forbes article titled, Supercomputing in the Enterprise talks about how non-HPC workloads including server virtualization and consolidation increasingly are adopting HPC technologies like high performance fabrics. While many HPC customers are using large high performance fabrics based on the Sun Datacenter Switch 648 to solve problems that simply couldn't be solved a few years ago, commercial customers increasingly need high performance fabrics just to keep up with advances in multi-core CPU technology.

Let's say that last year you purchased a Sun Blade x6250 server module with two dual-core x86 CPUs. That was a total of four processor cores and might easily have required four GigE connections, which our Quad GigE network card provides. Fast forward to today and the same x6250 blade module is equipped with quad-core CPUs, for a total of eight processor cores. Assuming the same network bandwidth per core is required, that would be eight GigE connections. Opt instead for the Sun Blade x6275 server module and you get 16 processor cores per server module. If you filled a rack with four Sun Blade 6000 chassis, each filled with x6275 blades, and provided a GigE connection for each core, you would need an astounding 640 Ethernet cables coming out of the back of the rack. It is no wonder HPC and commercial customers alike are looking to high performance fabrics that offer cable aggregation and other benefits.
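The cable math above can be sketched in a few lines. The one assumption I've added is that a Sun Blade 6000 chassis holds 10 server modules, which is what the 640-cable figure implies:

```python
# Cable-count arithmetic for the one-GigE-per-core scenario described
# in the post. Figures are illustrative, not a sizing recommendation.

def gige_cables_per_rack(chassis_per_rack: int,
                         modules_per_chassis: int,
                         cores_per_module: int) -> int:
    """One GigE cable per processor core, as the post assumes."""
    return chassis_per_rack * modules_per_chassis * cores_per_module

# x6250 last year: two dual-core CPUs = 4 cores, 4 GigE links per module
print(gige_cables_per_rack(1, 1, 4))     # 4

# x6275 today: 16 cores per module; a rack of four chassis full of them
# (assuming 10 modules per Sun Blade 6000 chassis):
print(gige_cables_per_rack(4, 10, 16))   # 640 Ethernet cables
```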

Of course, InfiniBand is not the only high performance fabric available. Sun sells a 10 GigE network card for our blades and also has a 10 Gig Virtual Network Express Module for the Sun Blade 6000.

So no matter if you are looking to build a high performance fabric out of InfiniBand or 10 GigE, Sun has products and solutions that can help you today.

Friday Sep 11, 2009

CHPC Data Center Infrastructure

Back from Cape Town, I wanted to finish up my discussion of CHPC with a few photos of their data center infrastructure. As I mentioned in my last post, CHPC uses the new Sun Constellation System Glacier cooling doors, the first such deployment in Africa we are aware of using energy efficient passive cooling doors. In my Jülich Underground post earlier this summer I shared some pictures of Jülich's underground cooling pipe installation. Cape Town being blessed with a more temperate climate than Jülich, Germany, Sun's Data Center Strategy, Design, and Build Services team was able to help CHPC design a completely outdoor, rather than underground, cooling infrastructure.

CHPC's data center integrates three high efficiency cooling units which operate in an N+1 redundant fashion. The center can operate on a single cooling tower during most operational conditions, with the second cooling unit providing additional capacity for maximum operations or hot summer days. Meanwhile, the third unit provides redundancy should either of the primary units fail.

While most of Jülich's plumbing and water pumps were under their data center, CHPC was able to locate theirs outside next to the cooling units.

And finally, to close out my blog series on CHPC, a look at Cape Town's Table Mountain, one of the many reasons why locals call Cape Town the most beautiful city in the world.

Sun Hosts HPC Virtual Conference Sept. 17

If you haven't already signed up, take a few minutes today to register for Sun's HPC Virtual Conference being held next week. Andy Bechtolsheim and other Sun HPC experts will be joining me for this unique virtual conference dedicated to High Performance Computing to discuss the trends and issues facing the HPC community.

Tuesday Sep 08, 2009

CHPC Cape Town: Compute to Compete

Compute to Compete was certainly the theme of the day as I joined Ms Naledi Pandor, Minister of Science and Technology for South Africa, at the launch of our latest Sun Constellation System and Africa's largest supercomputer at CHPC in Cape Town.

Just a few short months ago, Sun launched the first of our second generation Sun Constellation Systems at the Jülich Supercomputer Center in Germany. Since then, similar Sun Constellation System supercomputers have been installed in North and South America, in Australia, and in Asia. Today's launch at CHPC brings Open Petascale Computing to a sixth continent, delivering state of the art supercomputing to Africa.

At each Sun Constellation System launch I attend, as I meet with government, education, and industry executives, I hear a common theme: the growing need to compute to compete. While we didn't compare notes prior to the launch, Minister Pandor used the phrase "outcompute to outcompete" in reference to her government's investment in CHPC as well as South Africa's ongoing competition with Australia to host the Square Kilometer Array project.

CHPC is quite proud of their green IT efforts, and like the Sun Constellation System I described in my blog Jülich Underground, CHPC uses Sun's Glacier passive water cooled doors to save 25% or more of the electricity normally used to cool a data center, a first as far as we know for any African data center. Ironically, CHPC is built on the site of an old ocean fish research lab, and several large walk-in freezers were removed in the renovation of the old lab into data center space. And while they won't freeze fish, the Sun Constellation System's Glacier doors are definitely more efficient in data center cooling than the old freezers. Of course, Sun Constellation System technology is not limited to HPC clusters. Having read in the local paper this morning a report estimating South Africa would have to increase electrical generation capacity by 20 Gigawatts by 2020, I couldn't help but remark to Minister Pandor how much that figure could be reduced if all data centers in South Africa used similar passive cooling technology.

Besides Green IT, another major thrust of CHPC is the use of open source software. Their new Sun Constellation System uses the Sun HPC Software, Linux Edition including the latest versions of Sun Grid Engine software and the Sun Lustre File System.

CHPC also followed the lead of other Sun HPC customers, such as HPCVL in Canada, and installed a Sun M9000 system with 256 UltraSPARC CPU cores and 2 TB of memory, which will be used to run large shared memory codes for computational fluid dynamics research as well as other research requiring a large shared memory system. Running the open source Solaris operating system, the M9000 will provide users an HPC environment similar to the Sun Constellation System, including Sun Grid Engine and NFS access to the Lustre file system.

Anyone who thinks Africa isn't ready for world class HPC had better think again!

Monday Aug 31, 2009

LA Fires From Space

Thanks to all of you who have inquired if I am being impacted by the terrible fires in the Los Angeles area that as of this afternoon have burned over 105,000 acres. Luckily, I live away from the fire area, but my condolences go out to all of those who have suffered losses. This NASA image from yesterday gives you some idea of the extent of the fire, although the Station Fire has more than doubled in size since that time!

If you want to learn more about how this image was collected and processed, visit NASA's Earth Observatory site. For up-to-date information on the fire, I have been following the LA Times web site, which is doing an excellent job of covering the blazes.

NASA image by Jeff Schmaltz, MODIS Rapid Response Team, Goddard Space Flight Center.

Wednesday Aug 05, 2009

A Week Off The Grid

Today, eSolar opens a new 5 MW Concentrating Solar Power (CSP) plant outside of Los Angeles.

(Credit eSolar)

For those who are not familiar with eSolar, it is a Bill Gross startup from Idealab, described in more detail in today's Los Angeles Times article.

Knowing that I had a rare week with no travel, I decided last weekend to do a little father-son project and install my own small solar system, thinking at first it would be just the thing to run my teenage son's various portable devices and perhaps the small aquarium in his room. I had been eyeing the Sunforce 60-watt solar charging kit at Costco my last several visits and decided to take the plunge (Amazon carries the kit at a similar price, with free super-shipping and six months interest-free if you use your Amazon charge card, although that didn't work for my weekend impulse buy). The kit includes everything you need for a small off-grid system except a battery, including all cables, a battery charge controller, and a 200 watt inverter. With the usual caveats, you should even be able to claim the 30% US tax credit for the kit.

So our Saturday started with assembling the PVC frame for the four solar panels in our garage. Unfortunately, the factory sealed box was missing one of the PVC pieces needed to complete the frame, and I had visions of spending the rest of the day in Costco's return line trying to get an exchange. Luckily, when I returned around noon, a Costco manager was able to promptly help me, taking the missing piece out of another box, and I was on my way in less than five minutes; now that was a pleasant surprise! On my way out, I stopped by Costco's automotive center (separate entrance, separate cashier, no waiting) and purchased a deep-cycle marine battery.

The 200 watt inverter is plenty to run my laptop, cell phone charger, and the small aquarium pump. With no travel and no in-office meetings, I have not had to use my car for work this week, further lessening my carbon footprint. My only complaint is that the 200 watt inverter has a small, noisy fan that runs continuously when turned on, so next weekend's project is to replace it with a larger, 1500 watt inverter that has a thermostatically controlled fan.

Knowing just enough about data center Power Usage Effectiveness (PUE) to sound impressive at cocktail parties, I realized that running the AC inverter only to then have my laptop power adapter convert back to DC was not efficient. So as my Mac laptop battery approaches a full charge, I've been unplugging the inverter and plugging my laptop DC adapter directly into the battery power cable, which avoids the energy-burning DC-AC-DC cycle. The one lonely fish that has survived my son's mostly failed attempts at fish care over the last year doesn't mind that its water is only filtered part of the day!
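The efficiency win from skipping the DC-AC-DC round trip is just the product of the stage efficiencies. The percentages below are ballpark values I've assumed for illustration, not measurements from my setup:

```python
# End-to-end efficiency of a power chain is the product of each stage's
# efficiency. All stage values here are assumed, illustrative numbers.
from functools import reduce

def chain_efficiency(*stages: float) -> float:
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    return reduce(lambda a, b: a * b, stages, 1.0)

inverter = 0.85      # small DC->AC inverter (assumed)
ac_adapter = 0.87    # laptop AC->DC power brick (assumed)
dc_adapter = 0.90    # direct DC->DC laptop adapter (assumed)

print(f"battery -> inverter -> AC adapter: {chain_efficiency(inverter, ac_adapter):.0%}")
print(f"battery -> DC adapter directly:    {chain_efficiency(dc_adapter):.0%}")
```

With these assumed numbers, the double conversion wastes roughly a quarter of the battery's energy versus a tenth for the direct DC path.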

Sunday Jun 21, 2009

Hamburg Running

Before leaving London I had time to grab a quick breakfast at one of the many restaurants located in the Covent Garden area a few blocks from my hotel. There are many small bakeries, cafes, pubs, and varied international fare tucked into the back alleys, although I never found the garden.

Hamburg promised even better running around the beautiful Binnenalster Lake formed out of the river that runs through town. The jogging path around the lake is a little less than 10K and easily accessed from my hotel, Le Meridian. With the next few days packed with customer meetings and International Supercomputer Conference events, I'll need every bit of the nearly 20 hours of daylight right now to get in my daily run.

If you're in town this morning, don't miss out on Andy Bechtolsheim's presentation at the Sun HPC Consortium on Sun's HPC Roadmap.

Saturday Jun 20, 2009

London Jet-Lag Cures

On the way to the Sun HPC Consortium starting today in Hamburg, I stopped off in London to meet first with a few customers. Now twenty-four hours in London is not really enough to get jet-lagged, but here are my tips.

One of my favorite business hotels in Central London is the Sheraton Park Lane. It is conveniently located across from London's Green Park and plenty of lovely running paths. Exiting the Sheraton onto Piccadilly, cross the street to Green Park and head counter-clockwise. Halfway around the park you will reach Buckingham Palace; cross the street in front of the Palace to get to St. James Park. My typical jet-lag recovery run continues counter-clockwise for a loop around St. James Park, then a reverse clockwise loop, then back across to Green Park, continuing counter-clockwise to your starting point. Depending on your pace, it's an easy 30-40 minute run, which is perfect to get you ready for a sleep-deprived day of customer meetings.

The next key to avoiding jet-lag is to resist the temptation to skip dinner and go to sleep early. The perfect remedy for that is just a short cab-ride away at Ozer's Turkish Restaurant. While the 11-course "Healthy Meal" special may not sound healthy by the title, it is an absolutely wonderful tasting-size assortment of Turkish specialities with just the right amount of spices to make you forget sleep for a few more hours. For those who prefer, there is a vegetarian option of the healthy meal as well, although I quite enjoyed the marinated, grilled chicken and lamb courses of the standard meal.

Tomorrow, I'll share my Hamburg running tips.

Monday Jun 08, 2009

Sun HPC Software Workshop

If you can't get to Germany for this month's Sun HPC Consortium, you have a second chance coming up in September when we will host the Sun HPC Software Workshop in beautiful Regensburg, Germany. It's a great opportunity for users to get answers, advice, and suggestions regarding their specific implementations, and to share their insights with colleagues.

This three day workshop contains three tracks, with Sun and customer presentations around Sun Grid Engine, Open Storage (including Lustre and SAM-QFS), and software tools such as Sun Studio and Sun HPC ClusterTools. Talks will range from general to very detailed engineering topics. At times in the schedule, the three tracks will merge into one for plenary sessions of general interest. For example, a detailed talk on Sun's HPC Stack would be a plenary session. The conference will begin at 9am on Tuesday and end at 1pm on Thursday.

Friday Jun 05, 2009

Sun HPC Consortium Update

If you are going to the International Supercomputer Conference later this month in Hamburg, there is still time to come early and attend the Sun HPC Consortium. Highlights of the HPC Consortium are sure to be Sun co-founder Andy Bechtolsheim updating the audience on Sun's roadmap for the Sun Constellation System, as well as Dr. Thomas Lippert of the Jülich Supercomputer Center, who will give a preview of how one of the first 2000+ node Sun Constellation Systems with Sun's latest "Magnum" QDR InfiniBand switch did in the Top500 Linpack benchmark, before the official Top500 update is published later that week. See today's press release for more info on the Jülich system.

Thursday May 28, 2009

Jülich Underground

The Jülich VIP data center tour was a rare treat. I can only imagine anthropologists examining the site hundreds of years from now and wondering why anyone would run 5 MW of power and thousands of gallons of water an hour into the basement of a gymnasium-sized building.

Running a PetaFlop BlueGene system, a 2000 node Sun cluster, and a 1000 node Bull cluster takes a lot of cooling water. The facilities work that goes into a modern HPC data center is an absolutely amazing act of mechanical engineering in and of itself, never mind the computers.

If you look closely you will see the chilled water connector from the Sun Constellation System rear cooling door at the bottom right of the rack. A flexible connector is used to connect to the under-floor chiller water supply. There is also a top-of-door connector (not shown) for customers who run chiller water pipes above the racks.

Here is another shot of the under-floor piping.

Sun also has a gas refrigerant cooling door option for Sun Constellation System. The gas refrigerant door requires smaller diameter piping to the racks and can have other advantages, although it does require an external unit which can be outside the computer room, typically with its own chilled water heat exchanger. Sun's data center design team can help design the ideal data center cooling system for your Sun Constellation System.

Tuesday May 26, 2009

HPC Technologies & Cloud Computing

Everyone from traditional web hosting companies to university HPC centers is rebranding themselves with the Cloud Computing moniker these days, but what's the reality behind the hype? We certainly have many great hardware and software products that can be used for HPC, web hosting, and just about anything else you would want to do "in a cloud", but when Sun talks about cloud computing, there are a few key defining concepts we focus on:
  • Virtualization
  • Multi-tenancy
  • Real-time, user-controlled provisioning
  • Pay-per-use

We believe there will be many different types of clouds, including public clouds like Amazon's offerings and the Sun Cloud, enterprise clouds, clouds run by service providers, and other hybrid offerings.

My group at Sun, besides selling large HPC systems, is responsible for helping customers build enterprise clouds using Sun's technology. Many of our customers are starting down the path of building enterprise clouds, though most are not ready to talk about it in public. So I was very excited to read about NASA's enterprise cloud, called Nebula, and how it is using the Sun Lustre file system as a key part of their cloud architecture. The Nebula web site gives a detailed description of their Lustre implementation.

My group has, of course, worked on most of the large Lustre deployments, including many on non-Sun hardware, that have been done around the world. One thing we realized is that not everyone has rocket scientists on their staff, and even if they do, they don't always want to spend their time custom-designing Lustre storage systems. So to help HPC and enterprise cloud customers simplify and accelerate the deployment of Lustre, we have created the Sun Lustre Storage System. Scaling from 1 to over 100 GB/sec, the modular architecture of the Sun Lustre Storage System makes it easy for anyone to deploy Lustre.

In the coming days, you will be hearing a lot more about the Sun Cloud, and many other Sun technologies being used in our cloud deployments.

Supercomputing By Jülich

The Jülich web site has been updated with a nice shot of the Sun Constellation System. They also have details on the technical configuration.

Here is a good view of three of the six Sun Magnum QDR switches at Jülich. Each switch has 648 QDR IB ports exposed as 216 CXP 12x connectors.

I won't show pictures of the other vendor's IB rack, but just think about this with three times as many cables.

Of course, the cabling gets even more challenging under the floor.

But it sure looks nice when you are all done.

And here is the team to thank for all that hard work; yours truly was just there for the photo op, as I have to say I didn't help with any of this.

Monday May 25, 2009

Memorial Day in Germany

Memorial Day started about 9 hours too early for me, as the first rays of sunlight broke through the bottom of the window shade of a United Airlines 747 as we descended towards Frankfurt airport. I'm visiting Germany this week for the grand opening of the new Jülich Supercomputer Center and its 2000 node Sun Constellation System. The Jülich system is one of the first large QDR-based InfiniBand supercomputers, but we expect that 40 Gb/sec QDR technology will rapidly replace the previous generation 20 Gb/sec DDR technology in large clusters, not only because of its higher bandwidth but also because of the improved latency of QDR.

The Jülich system also features a Sun Lustre Storage System directly connected to its InfiniBand network, using multiple Lustre Object Storage Servers (OSS) to provide high speed, parallel access to a large single-namespace filesystem easily expandable to petabytes of storage and tens or even hundreds of GB/sec of storage bandwidth (Oak Ridge National Labs has achieved over 200 GB/sec on their Sun Lustre system).
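The reason Lustre scales into that range is that aggregate throughput grows roughly linearly with the number of OSS nodes. A tiny sketch of that scaling, where the per-OSS throughput is an assumed illustrative figure rather than a Sun specification:

```python
# Idealized linear scaling of Lustre aggregate bandwidth with OSS count.
# The 1.5 GB/sec per-OSS figure is an assumption for illustration only.
import math

def aggregate_gb_per_sec(num_oss: int, gb_per_sec_per_oss: float) -> float:
    """Aggregate throughput under ideal linear OSS scaling."""
    return num_oss * gb_per_sec_per_oss

# Ten OSS nodes at an assumed 1.5 GB/sec each:
print(aggregate_gb_per_sec(10, 1.5))   # 15.0 GB/sec

# Reaching Oak Ridge's reported ~200 GB/sec at that per-OSS rate would
# take on the order of:
print(math.ceil(200 / 1.5))            # 134 OSS nodes
```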

One unique feature of the Jülich system is its InfiniBand fabric, built using Sun and Mellanox QDR switches. Besides the 2000 node Sun Constellation System using Sun Magnum QDR switches, the Jülich QDR fabric also supports a 1000 node Bull cluster using Mellanox QDR switches. While both the Sun and Bull supercomputers are built out of 2-socket Intel Nehalem compute nodes, the physical size and complexity of the systems stand in stark contrast. Using regular 4x IB cables to connect to the Mellanox switches, the Bull cluster, while only half the number of compute nodes, requires more cables than the Sun Constellation System with its 3-in-1 12x cables. In addition, the Sun Constellation System racks require no internal cables to connect the compute nodes to the built-in "QNEM", the world's first in-chassis QDR leaf switch. While most Sun Constellation Systems use the QNEM to build a fully connected "fat tree" IB fabric, the QNEM also supports mesh and 3D Torus IB fabrics, the latter being used in a Sun Constellation System being deployed at Sandia National Labs in the US.

Bull does a good job of packing 72 of their Nehalem compute nodes into a single rack, but counting its IB racks, the Bull system still requires almost 2x the floorspace of the Sun Constellation System, which sports 96 compute nodes in each rack.

Jülich chose Sun's new water-cooled rear door option for the Sun Constellation System, greatly simplifying the cooling design of their data center. Depending on the exact CPU and memory configuration, Sun Constellation System racks can require 30-40 KW of cooling per rack, which requires some sort of supplemental cooling. Sun provides both water-cooled and refrigerant gas cooled rear door options for Sun Constellation System racks. This approach has advantages over in-row or top-of-rack supplemental cooling systems in that no supplemental fans are required; air is moved through the cooling doors using only the blade chassis's built-in fans. The supplemental fans in in-row and top-of-rack systems are often left out of customers' power-usage calculations. Sun's Data Center Efficiency practice can help customers design more efficient data centers, be it an entirely new data center built from the ground up or a retrofit of an existing one.

Well, it is time to head off to the grand opening ceremonies. I'll be back afterwards with more of the story.

Wednesday May 13, 2009

HPC Partner Week

In between planning for our upcoming HPC Consortium user group meeting prior to next month's ISC09 conference, I've had a busy week meeting with a number of our HPC partners. I started off the week meeting with Cray's CEO Pete Ungaro. Cray is one of our Lustre partners, offering their customers storage solutions based on the Sun Lustre file system. I also met with Sun partner Integrated Media Technologies. IMT is one of our first partners to qualify for our new Sun HPC Elite partner program, bringing their expertise in Lustre, InfiniBand, and GPGPU technology to our customers. Speaking of GPGPU, yesterday I met with Shanker Trivedi, Nvidia's new VP of sales for GPGPU and professional graphics technologies. A number of Sun customers are adding GPGPUs to their Sun clusters, most notably the TiTech TSUBAME supercomputer, which now includes 170 Nvidia Tesla GPGPUs.

Want to hear more about Sun's latest plans for Lustre, InfiniBand, and GPGPU technology? June is a wonderful month to visit Germany and the HPC Consortium user group meeting in Hamburg promises to bring you updates in all these areas. There is still time to register at the reduced early bird registration fee. I hope to see many of you there.

Thursday May 07, 2009

What Is The Killer App For Cloud Computing?

Probably the software you are already using today! One of the biggest challenges with any new service is to get people to use it. Adoption always precedes monetization. I tried out the Dropbox service the day it launched, as I happened to be sitting in an airport with an hour to kill when I received the email invite. They have a great front-end, at least on my Mac where I've tried it, but to be honest I don't use it much anymore because I've reached the 2 GB limit of my free account. By contrast, I've been using the beta version of OpenOffice "save to cloud" since the day Sun launched its internal testing, and I save virtually all of my documents to the cloud these days by default. Other than one or two documents I might need to edit on a flight, any other documents I need to access offline tend to be cached in my email and are accessible in that manner. I use Google Docs too for a few personal files that I want to share with family members, but not for mainstream use.

So at least in my trivial example, the killer app for cloud computing is the one I already use - OpenOffice. I don't have to take any extra steps to drag files between folders, I just use save to cloud and open from cloud instead of save and open.

One could easily expand this notion to almost any enterprise software. Do you want to buy special cloud backup software for your database or do you just want a "backup to cloud" button for the database you already use?

I'm not saying there isn't room for innovation in apps like Dropbox, and ultimately the "front end" of clouds and the "back end" are not necessarily linked. That is why Sun is promoting a set of Open Cloud APIs, so that in the future a company like Dropbox can decide to focus on innovating on the front end/client and simply use an existing cloud back end, without getting locked into that back end.

Saturday May 02, 2009

Seven Days, Eight Airports

I had not originally planned to pack in LAX-SFO-FRA-PNQ-BOM-SIN-CDG-ORD-LAX in one week, but thanks to modern air travel I was able to make all of last week's important customer events. Everyone I talked to was quite excited about Sun's new HPC specific blade products which along with our new QDR (quad data rate - 40 Gb/sec) InfiniBand and Lustre powered open storage products are bringing great new levels of performance to the Sun Constellation System. We have had our first customer Linpack runs on both 3D Torus and fat tree IB configs, and while we are not quite ready to publicly discuss results, they are pretty amazing. It should make for some interesting Top500 announcements at ISC this June.

Well, since I can't talk about our latest Linpack results, I thought I would share my world airport tips.

SFO is simply the best for international connections, now that the walkway from Terminal 3 (United) to the International terminal is complete. Now if only the United flight attendants would update their script to not confuse people with, "please take the shuttle to the international terminal". Due to a late departure from LAX, I had only 30 minutes to make my first connection; luckily my plane pulled up to SFO gate 75, less than a five minute walk from my international gate.

United to Lufthansa connections can require a considerably longer walk in Frankfurt, and for some reason the airlines are determined to make it just plain hard to get to India. Well, I guess it might have something to do with geography too. Six hours in Frankfurt was plenty of time to sample both the United Red Carpet Club lounge and the Lufthansa lounge. But six hours is a long time to spend in an airport, no matter how many lounges you visit.

Pune, India, is an interesting city, especially when you arrive at 3:30 am, one of the few times the streets are uncrowded. With 600 new car registrations every day, and no new roads, well, you get the picture. Tata Motors is based in Pune, as I am sure thousands of Tata's new Nanos will be before long. Officially the Nano is a 4 passenger vehicle, but given that I've seen more passengers on an Indian motorcycle, I expect the occasional Nano will be found with 5 or more passengers.

Mumbai (Bombay) was destined to be another six hour layover on my way out of the country. Privatization is greatly improving service at India's airports. Sometimes too much. The Jet Airways staff seemed so proud to provide a modern "kneeling" airport bus to transport us no more than 20 yards from our plane's parking spot to the terminal. Then came the dreaded domestic to international terminal transfer bus. Just a few years ago, said "bus" resembled a pre-WWII relic of engineering. While today it's a modern bus with air conditioning, there was still nearly an hour's wait followed by a slow 45 minute crawl across the airfield, including what seemed like a 10 minute standoff with an Airbus A320 before the driver finally went around via what appeared to be an illegal shortcut.

    No surprise given its history near the center of the SARS and avian (H5N1) flu outbreaks, Singapore already had its thermal imaging monitors out, scanning all incoming travelers for telltale signs of the new H1N1 flu. While this was definitely my shortest visit to Singapore at less than 24 hours, the Sun Singapore team was as efficient as ever, having organized a great HPC Symposium for about fifty customers from across Asia South.

    Singapore Airlines deserves special mention for making my 11-hour flight to Paris as restful as an 11-hour flight can be. I stepped off the plane at 6:30 am and, about half a dozen customer meetings later, stepped into my hotel room at about 9 pm and collapsed. Luckily I had a late start the next morning and felt recovered after my first full night of sleep of the week. I sincerely thank Europe's PRACE Project for extending their vendor briefings an additional day to meet with me.

    Having enjoyed favorable tail winds most of the week, it was now time for payback: my flight back to Chicago, at over nine hours, arrived considerably late. At the risk of spreading one of the best-kept secrets of international travel, the US Global Entry program got me past a line of several hundred travelers and through immigration and customs, and despite United's text message telling me I had been rebooked, I still had glimmers of hope of making my final flight home. Alas, the train from Chicago's Terminal 5 to Terminal 2 was packed, as was the security line, and I missed my connection. Luckily, United had already rebooked me on a flight only 90 minutes later, and I was soon home.

    Next week, I'm staying in California. Maybe even for two weeks :)

    Saturday Apr 25, 2009

    Julich's Sun Constellation System Supercomputer

    For your weekend viewing pleasure, enjoy this video by Professor Thomas Lippert discussing the new Sun Constellation System supercomputer at the Julich Supercomputer Centre in Germany.

    Thursday Apr 23, 2009

    AMD Blogs

    While on the topic of corporate blogs, I should point out one of my favorite corporate blog sites, besides Sun's own of course, and that is AMD's. Chief Marketing Officer Nigel Desseau definitely learned a thing or two about blogging during his years at Sun, and he celebrated the 6th anniversary of AMD's Opteron processor with his blog post yesterday.

    Nigel yesterday highlighted AMD's most energy-efficient processor to date, their latest quad-core, 40-watt processor. Nigel's blog linked to plenty of benchmarks, and that is really important. Anyone can build a low-power processor, and I'm always amazed by naive remarks that look only at processor wattage. It is, of course, performance per watt that counts, and don't forget to throw in other power-hungry components like memory. Many servers today consume more power in the memory subsystem than in their CPUs, and different memory types have different power consumption. Anyhow, Nigel touted the new 40-watt processor as ideal for scale-out cloud computing deployments, and I'll add: like the Sun Cloud.
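    To see why CPU wattage alone can mislead, here is a quick back-of-the-envelope sketch. The numbers are entirely hypothetical (not from AMD's benchmarks); the point is simply that once you count a fixed memory-subsystem power draw, a faster, higher-wattage CPU can still win on system-level performance per watt:

```python
# Hypothetical performance-per-watt comparison.
# All figures are made up for illustration; only the arithmetic matters.

def perf_per_watt(score, cpu_watts, mem_watts):
    """System-level efficiency: benchmark score per total watt,
    counting both CPU and memory-subsystem power."""
    return score / (cpu_watts + mem_watts)

# A "low power" CPU can lose to a faster one once memory power is included.
low_watt_cpu = perf_per_watt(score=100, cpu_watts=40, mem_watts=60)
faster_cpu = perf_per_watt(score=220, cpu_watts=75, mem_watts=60)

print(f"40 W CPU system: {low_watt_cpu:.2f} score/W")   # 1.00
print(f"75 W CPU system: {faster_cpu:.2f} score/W")     # 1.63
```

Looking only at 40 W versus 75 W, the first system seems the obvious winner; at the system level, the second delivers more work per watt.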

    AMD announced much more than just a new low-power processor, however; the most exciting thing for me was news of their upcoming six-core Istanbul processor. I've embedded AMD's presentation below so you can read about Istanbul for yourself. Istanbul is socket-compatible with existing AMD Barcelona-series processors, so I would expect to see Istanbul showing up as an option in most existing AMD-based servers. However, as HPC users know, a faster processor alone isn't enough. With faster processors you will need faster storage, like our Sun Lustre Storage System, as well as faster InfiniBand and 10GbE networking.

    <script type="text/javascript" src=''></script>
    <script type="text/javascript"> var scribd_doc = scribd.Document.getDoc(14533089, 'key-2256tkjx3p7w8k3njj94'); scribd_doc.addParam('height', 450);scribd_doc.addParam('width', 650); scribd_doc.write('mediaPlayer'); </script>

    And of course, whenever I mention one of our x86 CPU partners, the other one asks for equal time, so be sure to check out too.

    Thursday Apr 16, 2009

    Lustre User Group News

    Peter Bojanic kicked off the morning with news that Lustre 2.0 Alpha is now available for download. We really want to get community feedback early and often, and we will be releasing new builds every 4-6 weeks. We expect to start a formal beta in several months; let us know if you are interested.

    If you have not visited the Lustre site lately, check out the new and improved site, where you can find the latest Lustre roadmap, an updated Lustre hardware support matrix, and a number of white papers and training materials.

    Tuesday Apr 14, 2009

    Sun Open Network Systems for HPC

    Sun announced a number of new products today that weren't HPC-focused, but at our Open Network Systems launch event at Sun's North America Partner Summit in Las Vegas, the talk was sure focused on HPC. Of course, when you get Sun engineers like Andy Bechtolsheim, Jeff Bonwick (inventor of ZFS), and Michael Cornwell (Sun's chief flash technologist) on stage for an unscripted talk with John Fowler, what else would you expect?

    Of course, as interesting as the discussion was, what really makes an HPC product is its performance, and our new Open Network Systems come complete with benchmark proof points showing that these new systems are absolutely the best HPC servers in the world using Intel's new Xeon 5500 (aka Nehalem) CPUs. While every major vendor has announced Nehalem-based systems, only Sun:

  • Is delivering up to 96 two-socket nodes in a standard 24" rack using our new x6275 server module
  • Is delivering on-board QDR InfiniBand, providing each two-socket node in the x6275 a full 40 Gb/sec of network bandwidth
  • Is delivering on-board SSD-flash accelerated storage, providing the equivalent performance of 100 disk drives
  • Is delivering in-chassis QDR IB switching, with our QNEM supporting large Torus or full fat tree (with external core switches) IB fabrics of > 5000 nodes with a 6x cable reduction versus traditional IB switches

    Needless to say, everyone in the Sun HPC community is excited about these products. The new Sun Constellation System components are being delivered today, and thanks to the ease of use of our Sun HPC Software Stack, you can start gaining productivity improvements from your new system immediately!
