Sunday May 03, 2009

Innovation Matters, What's Cool in Datacenters

Late last year I had a fun conversation on Greg Papadopoulos' show, Innovation Matters.  The show was targeted at a Sun internal audience, but there was quite a bit of interest in sharing it externally.  So on February 14th, it was shared publicly.  We are very transparent when it comes to the work we do, and we want our customers to benefit from it.  So enjoy the segment Greg titled, What's Cool in Datacenters.

Friday Apr 10, 2009

The Chill Off Is Ready To Go!

The Chill Off testing will begin in two weeks.  In this Data Center Pulse episode, Brian Day and Mike Ryan step us through the test bed where the Chill Off tests will be conducted.  A lot of companies have donated their design expertise, parts and labor to get this ready.  This includes Sun Microsystems (test location, infrastructure, test equipment), Redwood City Electric (electrical design and installation), SynapSense (meters, sensors and other controls), Western Allied (mechanical design, parts and labor), California Hydronics Corporation (water control pump), Norman S Wright Mechanical Equipment Corporation (variable frequency drive), F Rodgers Insulation & Specialty Contractors (chilled water pipe insulation), and Liebert Corporation (refrigerant pumping system - XDP).



Make sure to watch the video in HD!  The button is on the bottom right (HD).  God bless Google's YouTube upgrade!  :-) 

Watch the Data Center Pulse YouTube Channel for more updates. We will be highlighting each of the vendors' technologies as the testing is conducted. The testing is scheduled to wrap up by August 15th. The results will be presented at the Silicon Valley Leadership Group Energy Summit slated for the end of September 2009.  Additional Chill Off details can be followed at the official Chill Off site.

Monday Jun 09, 2008

The Role of Modularity in Datacenter Design

In June of 2006 we presented our proposal to Jonathan Schwartz's staff for the largest, most complex and aggressive datacenter consolidation in Sun's history.  We had just completed a year's worth of great datacenter projects in the Czech Republic, China, the UK, and Norway, and we were raring to go. The proposal said that in just 12 months we would consolidate 202,000 square feet of datacenter space from our Newark & Sunnyvale, CA campuses into less than 80,000 square feet of new datacenter space on our Santa Clara, CA campus. At the end of that meeting we received the approval for the project, and Jonathan asked one final question: "Where's the book that I can hand out to customers?"  Needless to say, we had our hands full with the California consolidation and the new acquisition of StorageTek.

I am happy to say that today we finally finished that task. You can download the blueprint here:

Energy Efficient Datacenters: The Role of Modularity in Datacenter Design.

In August 2007 we finished the California consolidation project on schedule, under budget and with the ability to almost triple our densities as our load increased, all while maintaining a brutally efficient PUE. We had also completed an Eco Launch that highlighted some of our successes in the form of Solution Briefs and a Datacenter Tour Video. We were also well underway with the next largest consolidation, Louisville, CO (StorageTek) to Broomfield, CO. In addition, we received a deluge of requests for tours of the new Santa Clara datacenter. Our customers, industry and partners wanted to see what we had done firsthand.

We started on the blueprint in January 2008 in addition to the 40+ active projects we had globally. We knew this was important and we wanted to fulfill the commitment we had made to Jonathan, albeit later than he had likely expected. In the end, I am really glad we waited. The blueprint is full of examples of what we did right, what we did wrong and what we believe the future holds for datacenter design. Remember, we are the company that solves our customers' technical problems with our technology innovations, but we are also generating more heat in the datacenter with those solutions. This blueprint is a guide to help our customers understand our approach and the lessons learned along the way. The datacenter is a complete ecosystem that requires a dynamic balance between the IT load and support load to enable flexibility and ensure efficiencies for both economic and ecological gains. In my job, my team is blessed to work with the product groups who are building the next generation equipment.  These are the products that will be rolling into our customers' datacenters 1-3 years from now.  In other words, it's like having a crystal ball.  We have tomorrow's technology in our datacenters today, including the power, cooling and connectivity challenges that come along with it.

To date, we have had over 2,200 people walk through Santa Clara. The Datacenter Tour and Solution briefs have been downloaded over 12,000 times from the website.  We also distribute this content on memory sticks to the thousands of people on the tours and different conferences we speak at.

This blueprint is the first of nine chapters that will be released over the next 12 months. I encourage you to give your opinions and suggestions or raise your questions and concerns through this blog entry, email or the blueprint wiki site.

Stay tuned. This is just getting fun... :-)

Sunday Apr 20, 2008

I'll show you mine...


So how efficient is your datacenter?

Last month I received some pretty cool news.  The Chill Off we have been hosting for the Silicon Valley Leadership Group (SVLG) in their datacenter demonstration project was completed.  This was a head-to-head test against APC In-Row, Liebert XDV, Spraycool, Rittal Liquid Racks and the IBM Rear Door Heat Exchanger. Sun was the host that provided the datacenter, plant, compute equipment, and support.  Lawrence Berkeley National Labs (LBNL) was the group conducting the test on behalf of the California Energy Commission (CEC).  The results from this test will be published in a report in June of this year, and we will be hosting the event.

But one piece of information came out of this that I could not wait to share.  As part of the Chill Off, LBNL did a baseline of our plant in Santa Clara.  Mike Ryan on my staff then captured the usage data for the remaining portions of the datacenter.  This gave us our PUE or DCiE (pick your favorite) number.


Surprise, Surprise!

I knew that our datacenter would be efficient because of the way we had designed it, but I did not have any data to back it up yet.  We had been telling our customers, and others who toured the center, that we were pretty sure we would be under the industry-targeted PUE of 2 (created by Christian Belady of the Green Grid).  That was a conservative number.  But when I got the data back, even I was surprised at how efficient it was.


We achieved a PUE of 1.28!


That was 36% more efficient than the Uptime Institute target (2) and almost 50% more efficient than the standard PUE (2.5)!

For those of you who may not be familiar with PUE, it stands for Power Usage Effectiveness, the Green Grid's measure of efficiency in the datacenter.  In other words, it tells you how much power you use to run your equipment.  For a PUE of 2, you have 1 watt for the compute equipment and 1 watt for the supporting equipment (chillers, UPS, transformers, etc.) to run it.  That means if you have a 1MWatt datacenter, you would only have 500kW of power for compute equipment.

This means that with a PUE of 1.28 our datacenter can run 798kW of equipment load and only require 225kW of support load for a total load of 1,023kW.  In a datacenter with a PUE of 2 to run 798kW of equipment load, it would require 798kW of support load for a total of 1,596kW.  

Bottom line: we use 573kW less support power to operate the same equipment.  This equates to a power bill that is $400k less per year (using the industry average of $0.08/kWh).
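The arithmetic above is easy to reproduce. Here is a small sketch that recomputes the numbers from the post; the function names are just illustrative, and the $0.08/kWh rate is the industry average cited above.

```python
# Recompute the PUE comparison from the post.
# PUE = total facility power / IT equipment power; DCiE is its reciprocal.

def pue(total_kw, it_kw):
    """Power Usage Effectiveness: total facility load over IT load."""
    return total_kw / it_kw

def dcie(total_kw, it_kw):
    """DCiE, the reciprocal of PUE, expressed as a percentage."""
    return 100.0 * it_kw / total_kw

IT_LOAD_KW = 798          # equipment load from the post
TOTAL_KW = 1023           # measured total load (798 kW IT + 225 kW support)

# Santa Clara, measured: PUE ~1.28
measured_pue = pue(TOTAL_KW, IT_LOAD_KW)

# A PUE-2 datacenter needs 1 W of support for every 1 W of IT load
support_at_pue2 = IT_LOAD_KW * (2.0 - 1.0)      # 798 kW
support_measured = TOTAL_KW - IT_LOAD_KW        # 225 kW

saved_kw = support_at_pue2 - support_measured   # ~573 kW
annual_savings = saved_kw * 8760 * 0.08         # kW * hours/year * $/kWh

print(f"PUE: {measured_pue:.2f}")               # ~1.28
print(f"DCiE: {dcie(TOTAL_KW, IT_LOAD_KW):.0f}%")
print(f"Support power saved: {saved_kw:.0f} kW")
print(f"Annual savings: ${annual_savings:,.0f}")
```

Running this reproduces the roughly $400k/year figure: 573 kW of avoided support load, around the clock, at $0.08/kWh.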

I don't know about you, but this solidifies the statement we have been making all along: economics drive the decisions in datacenters.  But if you can build efficiencies into your datacenters, you will achieve both economic and ecological results.  This PUE of 1.28 not only reduces our overall operating expense every year, it also lowers our carbon emissions substantially!  We're only using the power we have to.  No Eco hype, this is Eco reality.

Keep in mind that this PUE of 1.28 was achieved in our highest density datacenter globally.  It is also built on slab.  No raised floors to be seen.  The other aspect is that the datacenter is only 60% full (physically). We moved all of the equipment from Newark, CA but are continuing to add more equipment from our other campuses.   It is also a heterogeneous datacenter.  We have a mixture of all types of equipment from many vendors. Our plant is designed to 9MW (phase I) and we're drawing 3.5MW so far.  So, we can achieve extreme efficiencies with a mixture of equipment, even when we're not running at max capacity.  (Note: the remaining 13 datacenter rooms that make up the remaining load were not measured in the Chill Off, but are all fed by the same efficient plant.)

In support of our Eco Strategy, "Innovate, Act & Share", I want to have full disclosure as I share. I want everyone to understand that this room is a Tier I datacenter.  We don't have as much redundancy as higher Tier level datacenters, but the design approach is the same.  Our Variable Primary Loop chiller plant design is the same one we would use in our Tier III sites.  We have removed an entire set of pumps and reduced the pipe distances because everything is dynamic from the plant to the rack. We only use the energy we have to. For all Tier level datacenters we would also be choosing very efficient transformers, UPS units and other components.  So, no matter what Tier level you are trying to build, you can achieve these efficiencies if you embrace next generation designs.  But, regrettably, many design and construction firms out there are hesitant to do this.  You need to be specific about what you want and squelch the naysayers.

Here is the chart that shows the details.  PDF version is here.

So, the question is: I've shown you mine.  Can you show me yours?

Better yet, would you like Sun to help design your datacenter to achieve the same efficiencies?  Let us drive the next generation's physical and technical solutions for you.  Sun's Eco Virtualization practice can do just that.  Email me at and I'll tell you how.


Sunday Oct 21, 2007

Not Rocket Science!

Man, what a month.

Earlier this month, I did an interview with Contrarian Minds editor, Al Riske. He captured my ramblings and then published the following report titled, "Not Rocket Science". It was an honor to be on the same website as people like Scott McNealy, Jonathan Schwartz, Greg Papadopoulos, Radia Perlman and James Gosling, to name a few. Talk about some brain power. :-)

Instead of complicating things, my team and I have really tried to simplify them when it comes to datacenter design philosophies that support the equipment of today and tomorrow. Take a look here:

    Not Rocket Science...

I won't be building a rocket any time soon. Then again that may be kinda fun...


Gig: GDS Director
Global Lab & Datacenter Design Services, Sun Microsystems

