Wednesday Apr 01, 2009

Holy Battery Backup Batman!

Today I attended Google's Efficient Data Centers Summit at their Mountain View, CA campus.  They unveiled how they achieved their average PUE of 1.21 across their six large datacenters.  It was great that they were sharing with the public how they measured and what innovations they have.  For cooling it was pretty simple: closely coupled cooling, raising the temperatures in the datacenters, and utilizing economizers.

What jumped out the most was how they were able to achieve 99.9% efficiency in their UPS.  Talk about a different approach.  Instead of trying to raise the voltage and eliminate transformers, use DC power, or other methods, they solved the problem in the server itself.  As many of you may be aware, Google manufactures their own servers.  What they decided to do was remove the UPS altogether.  They simplified the design to deliver only one 12V line to the motherboard and then let the motherboard distribute the power out to the hard drives.  Then, they put a simple battery on the server itself.  Consider it like adding a car battery to the server, just like a laptop.  Since it only requires 12V, it can be small.  Now, if there is a brownout or a spike, the in-server battery takes over while the facility switches to the generator.  Simple.  Very simple.
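
For illustration, here is a toy sketch in Python (my own model, not Google's actual control logic; the names are hypothetical) of the failover sequence that design implies: the motherboard's single 12V rail runs off utility power, rides through an outage on the in-server battery, and hands off once the facility's generator picks up.

    UTILITY, BATTERY, GENERATOR = "utility", "on-board 12V battery", "generator"

    def power_source(utility_ok, generator_online):
        """Pick which source feeds the motherboard's single 12V rail."""
        if utility_ok:
            return UTILITY      # normal operation: PSU feeds the 12V line directly
        if generator_online:
            return GENERATOR    # facility transfer is complete
        return BATTERY          # ride through the gap on the in-server battery

    # Brownout hits at t=2; the generator picks up the load at t=5.
    for t in range(8):
        print(t, power_source(utility_ok=(t < 2), generator_online=(t >= 5)))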

I think this is the type of thinking that really changes the game.  When you take away the constraints, you can come up with innovative solutions.  Granted, this is not the single answer to all datacenter problems, but it is a very real solution to a very real problem.  Google has the money and the clout to make things like this happen (imagine the power supply and motherboard suppliers that had to change their designs to deliver for Google).

I applaud Google's effort and look forward to further innovations.  Now, if they would just let the world actually walk into their datacenters.  A virtual tour video is nice, but there is nothing better than seeing it firsthand.  Sun has been doing that for the last two-plus years.  We show what works and what doesn't.  That goes a long way.

So Google, let me know when I can come and see your datacenter!  I'm sure it's got to be one of the coolest batcaves in the industry!

Below are a picture and a YouTube video of Ben Jai, Server Architect, showing the battery-based server from Google.


Sunday Apr 20, 2008

I'll show you mine...

So how efficient is your datacenter?


Last month I received some pretty cool news.  The Chill-Off we have been hosting for the Silicon Valley Leadership Group (SVLG) as part of their datacenter demonstration project was completed.  This was a head-to-head test of APC In-Row, Liebert XDV, Spraycool, Rittal Liquid Racks, and the IBM Rear Door Heat Exchanger.  Sun was the host that provided the datacenter, plant, compute equipment, and support.  Lawrence Berkeley National Labs (LBNL) was the group conducting the test on behalf of the California Energy Commission (CEC).  The results from this test will be published in a report in June of this year, and we will be hosting the event.

But one piece of information came out of this that I could not wait to share.  As part of the Chill-Off, LBNL did a baseline of our plant in Santa Clara.  Mike Ryan on my staff then captured the usage data for the remaining portions of the datacenter.  This gave us our PUE or DCiE (pick your favorite) number.

Surprise, Surprise!

I knew that our datacenter would be efficient because of the way we had designed it, but I did not have any data to back it up yet.  We had been telling our customers, and others that toured the center, that we were pretty sure we would be under the industry-targeted PUE of 2 (created by Christian Belady of the Green Grid).  That was a conservative number.  But when I got the data back, even I was surprised at how efficient it was.

We achieved a PUE of 1.28!

That was 36% more efficient than the Uptime Institute target (2) and almost 50% more efficient than the standard PUE (2.5)!

For those of you who may not be familiar with PUE, it stands for Power Usage Effectiveness, the Green Grid's measure of efficiency in the datacenter.  In other words, how much total facility power you draw for every watt your compute equipment uses.  For a PUE of 2, you have 1 watt for the compute equipment and 1 watt for the supporting equipment (chillers, UPS, transformers, etc.) that runs it.  That means if you have a 1MW datacenter, you would only have 500kW of power for compute equipment.
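
Here is a minimal sketch of that arithmetic in Python (the function and variable names are mine):

    def pue(total_facility_kw, it_equipment_kw):
        """PUE = total facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    # The 1MW example above: at a PUE of 2, only half the feed reaches compute gear.
    total_kw = 1000.0
    it_kw = 500.0
    print(pue(total_kw, it_kw))   # 2.0 -- only 500kW of the 1MW is left for compute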

This means that with a PUE of 1.28 our datacenter can run 798kW of equipment load and only require 225kW of support load, for a total load of 1,023kW.  In a datacenter with a PUE of 2, running 798kW of equipment load would require 798kW of support load, for a total of 1,596kW.

Bottom line: we use 573kW less support power to operate the same equipment.  This equates to a power bill that is $400k less per year (using the industry average of $0.08/kWh).
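
And a back-of-the-envelope sketch of that comparison in Python; the kW figures and the $0.08/kWh rate come straight from the numbers above:

    it_load_kw     = 798.0        # compute equipment load (from the post)
    support_at_128 = 225.0        # support load at PUE 1.28 (from the post)
    support_at_2   = it_load_kw   # PUE 2: one support watt per compute watt

    saved_kw   = support_at_2 - support_at_128   # 573 kW less support power
    annual_usd = saved_kw * 8760 * 0.08          # 8,760 hours/year at $0.08/kWh
    print(saved_kw, round(annual_usd))           # 573.0 kW, ~$401,558/year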

I don't know about you, but this solidifies the statement we have been making all along: economics drive the decisions in datacenters.  But if you can build efficiencies into your datacenters, you will achieve both economic and ecological results.  This PUE of 1.28 not only reduces our overall operating expense every year, it also lowers our carbon emissions substantially!  We're only using the power we have to.  No Eco hype, this is Eco reality.

Keep in mind that this PUE of 1.28 was achieved in our highest-density datacenter globally.  It is also built on slab; no raised floors to be seen.  The other aspect is that the datacenter is only 60% full (physically).  We moved all of the equipment from Newark, CA, but are continuing to add more equipment from our other campuses.  It is also a heterogeneous datacenter: we have a mixture of all types of equipment from many vendors.  Our plant is designed to 9MW (Phase I) and we're drawing 3.5MW so far.  So, we can achieve extreme efficiencies with a mixture of equipment, even when we're not running at max capacity.  (Note: the other 13 datacenter rooms that make up the remaining load were not measured in the Chill-Off, but are all fed by the same efficient plant.)

In support of our Eco Strategy, "Innovate, Act & Share", I want to have full disclosure as I share.  I want everyone to understand that this room is a Tier I datacenter.  We don't have as much redundancy as higher-Tier datacenters, but the design approach is the same.  Our variable primary loop chiller plant design is the same one we would use in our Tier III sites.  We have removed an entire set of pumps and reduced the pipe distances because everything is dynamic from the plant to the rack.  We only use the energy we have to.  For datacenters at any Tier level we would also choose very efficient transformers, UPSes, and other components.  So, no matter what Tier level you are trying to build, you can achieve these efficiencies if you embrace next-generation designs.  But, regrettably, many design and construction firms out there are hesitant to do this.  You need to be specific about what you want and squelch the naysayers.

Here is the chart that shows the details.  PDF version is here.


So, the question is: I've shown you mine.  Can you show me yours?

Better yet, would you like Sun to help design your datacenter to achieve the same efficiencies?  Let us drive the next generation of physical and technical solutions for you.  Sun's Eco Virtualization practice can do just that.  Email me at dean.nelson@sun.com and I'll tell you how.
