Sunday Jun 28, 2009

The Stack

I believe the industry needs a common way to define, address, report and regulate our datacenters.  This is not just a mash-up of metrics that I can apply to different aspects of my datacenter to report efficiency; it is a framework that allows us all to compare how our datacenters are built, what useful work they do and ultimately what score they achieve against others in the industry.  I believe we can achieve a standard framework to compare all global datacenters regardless of industry, location, function or design.  In the latest Data Center Pulse video episode, I discuss the DCP Stack Framework with Jeremy Rodriguez, the Co-Chair of the DCP Technical Advisory Board who is leading the stack development.  The goal is to define this stack, gain end user acceptance and start to apply it by January 1, 2010.

Join us in developing the Stack:

Monday Jun 15, 2009

Green Recovery!

Earlier this year, I had the opportunity to talk with Andrew S. Winston, the author of Green to Gold.  I met Andrew when he presented at Dave Douglas's 2008 Eco Summit in Santa Clara, CA and we had stayed in contact. Andrew was undertaking a new project and wanted input about Data Centers for one of the sections. 

Green Recovery focuses on how companies can use environmental thinking to survive hard economic times and position themselves for growth and advantage when the downturn ends.  One core focus is on getting lean -- taking action in five key areas of the business that can yield quick payback and high ROI.  Andrew recently put out a free report, Green Cost Cutting, that includes the introduction from the new book and the chapter "Get Lean".  The purpose of releasing the content early is to put out some of the tactical, short-term ideas as soon as possible so companies can employ them quickly. 

Andrew was gracious enough to include quotes from me and Subodh Bapat. You can pre-order a copy through Amazon. You can download the free excerpt (introduction and the chapter "Get Lean") here.

You can reach Andrew through the following:

Twitter: GreenAdvantage 
Facebook: Andrew S Winston 
Email: andrew-AT-eco-strategies-DOT-com 



Sunday Jun 14, 2009


Earlier this year I was interviewed by Laina Raveendran Greene (perfect name for Green IT) from TelecomTV.  They created a new show called Green Planet, Sustainable ICT.   Sun was featured in two different episodes.  I couldn't get their embedded video to work, so the links are below. :-) 

Green Planet Episode 2: Energy Efficiency & the Green Data Centre

Green Planet Episode 3: Innovation for Sustainable Efficiency 


Wednesday Apr 01, 2009

Holy Battery Backup Batman!



Today I attended Google's Efficient Data Centers Summit at their Mountain View, CA campus.  They unveiled how they achieved their average PUE of 1.21 across their six large datacenters.  It was great that they shared information with the public about how they measured and what innovations they have.  Cooling was pretty simple: closely coupled cooling, raising the temperatures in the datacenters and utilizing economizers.

What jumped out the most was how they were able to achieve 99.9% efficiency in their UPS.  Talk about a different approach.  Instead of trying to raise the voltage and eliminate transformers, use DC power, or other methods, they solved the problem in the server itself.  As many of you may be aware, Google manufactures their own servers.  What they decided to do was remove the UPS altogether.  They simplified the design to deliver only one 12V line to the motherboard and then let the motherboard distribute the power out to the hard drives.  Then, they put a simple battery on the server itself.  Consider it like adding a car battery to the server, just like a laptop.  Since it only requires 12V, it can be small.  Now, if there is a brownout or a spike, the in-server battery takes over while the facility switches to the generator.  Simple.  Very simple.
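To put that 99.9% figure in perspective, here is a rough sketch of the arithmetic. The 99.9% comes from Google's talk; the 92% figure for a traditional centralized double-conversion UPS is my own illustrative assumption, as is the 1MW load:

```python
# Rough comparison of UPS conversion losses at a 1 MW IT load.
# 0.92 (traditional double-conversion UPS) is an assumed illustrative
# figure; 0.999 is the per-server battery efficiency Google cited.

IT_LOAD_KW = 1000.0

def ups_loss_kw(it_load_kw: float, efficiency: float) -> float:
    """Power drawn beyond the IT load because the UPS is not 100% efficient."""
    return it_load_kw / efficiency - it_load_kw

traditional = ups_loss_kw(IT_LOAD_KW, 0.92)    # assumed centralized UPS
per_server = ups_loss_kw(IT_LOAD_KW, 0.999)    # the 99.9% figure from the talk

print(f"traditional UPS loss: {traditional:.0f} kW")   # ~87 kW
print(f"per-server battery loss: {per_server:.0f} kW") # ~1 kW
```

At these assumed numbers, the per-server approach wastes roughly one-eightieth of the power a centralized UPS would, around the clock.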

I think this is the type of thinking that really changes the game. When you take away the constraints, you can come up with innovative solutions.  Granted, this is not the single answer to all datacenter problems, but it is a very real solution to a very real problem.  Google has the money and the clout to make things like this happen (imagine the power supply and motherboard suppliers that had to change their designs to deliver for Google).

I applaud Google's effort and look forward to further innovations.  Now, if they would just let the world actually walk into their datacenters. A virtual tour video is nice, but there is nothing better than seeing it first hand.  Sun has been doing that for the last two plus years.  We show what works and what doesn't.  That goes a long way.

So Google, let me know when I can come and see your Data Center!  I'm sure it has got to be one of the coolest batcaves in the industry!

Below is a picture and YouTube video of Ben Jai, Server Architect, showing the battery-based server from Google. 



Tuesday Sep 23, 2008

AFCOM NorCal - The Silicon Valley Chapter

On Wednesday September 24, 2008 we hosted the second AFCOM Northern California chapter meeting in Sun's Santa Clara, CA auditorium. I'm the board VP for this chapter that started in April 2008. The focus for this meeting was the Chill Off 2 and the first Data Center Pulse round table.


After Jim Gammond (membership), Eric Stromberg (President) and Maricel Cerruti (Communications) concluded chapter business, I presented a quick summary of the findings from the first chill off we hosted in June 2008. Then I gave a summary of the approach for Chill Off 2. Next we brought up a panel of subject matter experts to discuss the details between each other and the 80+ professionals that attended. The panel I moderated included:

  1. Mike Ryan, GDS Sr Staff Engineer - Sun Microsystems

  2. Olivier Sanche, Sr Director - Data Center Operations for eBay 

  3. Phil Hughes, CEO of Clustered Systems

  4. Bill Tschudi, Lawrence Berkeley National Labs (LBNL)

Mike was there to represent the vendors from the last chill off and the testing environment. He was the technical lead for the first chill off. Bill will be conducting the testing again this year as an independent party. Olivier has agreed to provide the workload definition that answers two important questions for eBay's data center investments/operations. Phil is a new vendor in the test with a passive cooling solution that removes all the fans from the servers. We started with a few questions from me and then opened it up to the crowd for questions and input. It was a very lively session, with most of the dialog coming from the audience, which represented the entire data center community from consultants to engineers to manufacturers to end users. We received excellent feedback and were able to achieve what we set out to do: have the community influence the direction of Chill Off 2.

Some of the learnings/changes that will be made from this session:

  1. Possibly have a mixture of hardware/vendors (blades, 1U, 2U, etc.) in the test rather than all one form factor (Sun x4100) - or do both? 

  2. 2N testing down to N to compare efficiency losses.

  3. Fault insertion that shows both equipment failures and human faults, such as opening the door of a container for maintenance, leaving a hole in an APC HACS, etc. Also, simulate moves, adds and changes and their effect on the test environment.

  4. Mix the workloads in the test rather than running serial tests, so HPC, web and enterprise all run in one test as the mixed workload.

  5. Add results from a tuned/contained raised floor/CRAC based datacenter to the chart. Show the real differences, not just theoretical, against all the solutions.  The current chart shows a traditional open raised floor/CRAC installation.

  6. Standard communication protocol for data collection? How easily and effectively do these solutions tie into a BMS?

  7. Compare main PDU, in-rack PDU, wireless sensor, environmental monitor and server sensor efficiency. Include a mixture of vendors that can meter down to the plug level and see how accurate they are compared to the internal server readings.  Take temperature and humidity readings across the equipment and the cooling devices to see how accurate they are.

  8. A fully isolated environment so there are no questions about other in-room conditions affecting the tests. Also have the ability to watch the effect of raised water temps and raised inlet temps on each solution.

  9. TCO of each solution? This would be difficult, but would be very interesting data.

  10. Remove all the server fans in the container and compare that to the regularly loaded test. In other words, only use the container's in-line cooling fans to move air rather than the servers'.

  11. Do a full 3D CFD model of the solutions and compare that to what is actually seen (e.g., with Future Facilities).


The presentations with the updates based on the session are coming soon. I would appreciate feedback on any area of the test/presentation. If there are companies that would be interested in including their products, services, support, expertise or time, please email me.



After a break for networking, we started what I hope to be a regular occurrence at data center chapter meetings like this all over the world. Earlier in September, Mark Thiele and I started Data Center Pulse, a new exclusive group that only includes datacenter owners and operators. You can see the details in my earlier blog entry.

The panel we assembled was quite impressive. We had data center owners from Cisco, Apple, VMware, eBay, Stanford and the CEO of IDS, a new startup company putting datacenter co-lo space on container ships.

I asked a number of questions: some that had come up in our Data Center Pulse group through LinkedIn, some I had prepared, and some that came out naturally during the dialog/debate.

Check out the specific questions and responses in this round table session through the Data Center Pulse Blog. (click on blog)

In the end we ran out of time given the amount of discussion and healthy debate. After we were finished, I really felt that we should have filmed the session for the rest of the community to watch. Later, an industry friend of mine suggested that we hold these kinds of roundtable discussions on satellite radio so everyone could benefit from them. I thought that was a great idea, but we should try a different medium: video podcasts or YouTube. I travel quite a bit and could film these sessions and have interviews with different DC owners all over the globe. So, starting next week, Mark and I will be filming the first episode introducing the group. We have identified about 10 sessions we want to film, including an upcoming trip I'm taking to Barcelona, Spain, where I hope to have another DC Pulse roundtable session.

I'd be very interested in input from DC professionals around the globe. Is this something you would watch? Is it something you would want to participate in as a panelist? Would you be interested in letting us tour your DC to share the challenges and your solutions/learnings first hand with the community? Other ideas? We'd love to hear them.

Sunday Apr 20, 2008

I'll show you mine...


So how efficient is your datacenter?

Last month I received some pretty cool news.  The Chill-Off we have been hosting for the Silicon Valley Leadership Group (SVLG) in their datacenter demonstration project was completed.  This was a head-to-head test among APC In-Row, Liebert XDV, Spraycool, Rittal Liquid Racks and the IBM Rear Door Heat Exchanger.  Sun was the host that provided the datacenter, plant, compute equipment, and support.  Lawrence Berkeley National Labs (LBNL) was the group conducting the test on behalf of the California Energy Commission (CEC).  The results from this test will be published in a report in June of this year, and we will be hosting the event.

But one piece of information came out of this that I could not wait to share.  As part of the chill-off, LBNL did a baseline of our plant in Santa Clara.  Mike Ryan on my staff then captured the usage data for the remaining portions of the datacenter.  This gave us our PUE or DCiE (pick your favorite) number.


Surprise, Surprise!

I knew that our datacenter would be efficient because of the way we had designed it, but I did not have any data to back it up yet.  We had been telling our customers, and others that toured the center, that we were pretty sure we would be under the industry-targeted PUE of 2 (created by Christian Belady from the Green Grid).  That was a conservative number.  But when I got the data back, even I was surprised at how efficient it was.


We achieved a PUE of 1.28!


That was 36% more efficient than the Uptime Institute target (2) and almost 50% more efficient than the standard PUE (2.5)!

For those of you who may not be familiar with PUE, it stands for Power Usage Effectiveness, the Green Grid's measurement of efficiency in the datacenter.  In other words, how much total power you draw to run your compute equipment.  With a PUE of 2, for every 1 watt of compute equipment you spend another 1 watt on the supporting infrastructure (chillers, UPS, transformers, etc.) to run it.  That means if you have a 1MW datacenter, you would only have 500kW of power for compute equipment.

This means that with a PUE of 1.28 our datacenter can run 798kW of equipment load and only require 225kW of support load for a total load of 1,023kW.  In a datacenter with a PUE of 2 to run 798kW of equipment load, it would require 798kW of support load for a total of 1,596kW.  

Bottom line: we use 573kW less support power to operate the same equipment.  This equates to a power bill that is $400k less per year (using the industry average of $0.08/kWh).
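The arithmetic above can be sketched as a small calculation. The 798kW equipment load, the PUE values and the $0.08/kWh rate all come from this post; small rounding differences against the figures quoted above are expected:

```python
# PUE = total facility power / IT equipment power, so the support
# (overhead) power at a given PUE is it_load * (pue - 1).

def support_power(it_load_kw: float, pue: float) -> float:
    """Overhead power (chillers, UPS, transformers, etc.) beyond the IT load."""
    return it_load_kw * (pue - 1)

IT_LOAD_KW = 798            # equipment load from the post
HOURS_PER_YEAR = 8760
RATE_PER_KWH = 0.08         # industry-average $/kWh used in the post

ours = support_power(IT_LOAD_KW, 1.28)     # ~223 kW (the post rounds to 225)
target = support_power(IT_LOAD_KW, 2.0)    # 798 kW at the industry target

saved_kw = target - ours
annual_savings = saved_kw * HOURS_PER_YEAR * RATE_PER_KWH

print(f"support power saved: {saved_kw:.0f} kW")   # ~575 kW
print(f"annual savings: ${annual_savings:,.0f}")   # ~$400k/year
```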

I don't know about you, but this solidifies what we have been saying all along: economics drive the decisions in datacenters.  But if you can build efficiencies into your datacenters, you will achieve both economic and ecological results.  This PUE of 1.28 not only reduces our overall operating expense every year, it also lowers our carbon emissions substantially!  We're only using the power we have to.  No Eco hype, this is Eco reality.

Keep in mind that this PUE of 1.28 was achieved in our highest density datacenter globally.  It is also built on slab; no raised floors to be seen.  The other aspect is that the datacenter is only 60% full (physically).  We moved all of the equipment from Newark, CA but are continuing to add more equipment from our other campuses.  It is also a heterogeneous datacenter: we have a mixture of all types of equipment from many vendors.  Our plant is designed to 9MW (phase I) and we're drawing 3.5MW so far.  So, we can achieve extreme efficiencies with a mixture of equipment, even when we're not running at max capacity.  (Note: the remaining 13 datacenter rooms that make up the rest of the load were not measured in the chill-off, but are all fed by the same efficient plant.)

In support of our Eco Strategy, "Innovate, Act & Share", I want to have full disclosure as I share. I want everyone to understand that this room is a Tier I datacenter.  We don't have as much redundancy as higher Tier level datacenters, but the design approach is the same.  Our Variable Primary Loop chiller plant design is the same one we would use in our Tier III sites.  We have removed an entire set of pumps and reduced the pipe distances because everything is dynamic from the plant to the rack.  We only use the energy we have to.  For all Tier level datacenters we would also choose very efficient transformers, UPS and other components.  So, no matter what Tier level you are trying to build, you can achieve these efficiencies if you embrace next generation designs.  But, regrettably, many design and construction firms out there are hesitant to do this.  You need to be specific about what you want and squelch the naysayers.

Here is the chart that shows the details.  PDF version is here.

So, the question is: I've shown you mine.  Can you show me yours?

Better yet, would you like Sun to help design your datacenter to achieve the same efficiencies?  Let us drive the next generation's physical and technical solutions for you.  Sun's Eco Virtualization practice can do just that.  Email me at and I'll tell you how.



Gig: GDS Director
Global Lab & Datacenter Design Services, Sun Microsystems
