Friday Apr 10, 2009

The Chill Off Is Ready To Go!

The Chill Off testing will begin in two weeks.  In this Data Center Pulse episode, Brian Day and Mike Ryan step us through the test bed where the Chill Off tests will be conducted.  A lot of companies have donated their design expertise, parts and labor to get this ready.  This includes Sun Microsystems (test location, infrastructure, test equipment), Redwood City Electric (electrical design and installation), SynapSense (meters, sensors and other controls), Western Allied (mechanical design, parts and labor), California Hydronics Corporation (water control pump), Norman S Wright Mechanical Equipment Corporation (variable frequency drive), F Rodgers Insulation & Specialty Contractors (chilled water pipe insulation), and Liebert Corporation (refrigerant pumping system - XDP).

Make sure to watch the video in HD!  The HD button is on the bottom right.  God bless Google's YouTube upgrade!  :-)

Watch the Data Center Pulse YouTube Channel for more updates. We will be highlighting each of the vendors' technologies as the testing is conducted. The testing is scheduled to wrap up by August 15th. The results will be presented at the Silicon Valley Leadership Group Energy Summit slated for the end of September 2009.  Additional Chill Off details can be followed on the official Chill Off site at http://datacenterpulse.org.

Tuesday Oct 07, 2008

Chill-Off 2

Based on the feedback from the AFCOM NorCal chapter meeting roundtable and follow-on meetings with SVLG and LBNL, we have finalized the approach for the Chill-Off 2.  Below is the stack I created to represent all of the different layers that will be covered in this test.  It is a much more aggressive goal this year.  I have also included the updated presentation (PDF) that describes the Chill-Off 2 strategy and project information.  We are still searching for a few more companies to participate.

Please provide any feedback or suggestions directly to dean.nelson@sun.com.



Monday Jun 09, 2008

The Role of Modularity in Datacenter Design


In June of 2006 we presented our proposal to Jonathan Schwartz's staff for the largest, most complex and aggressive datacenter consolidation in Sun's history.  We had just completed a year's worth of great datacenter projects in the Czech Republic, China, the UK, and Norway, and we were raring to go. The proposal said that in just 12 months we would consolidate 202,000 square feet of datacenter space from our Newark and Sunnyvale, CA campuses into less than 80,000 square feet of new datacenter space on our Santa Clara, CA campus. At the end of that meeting we received approval for the project, and Jonathan asked one final question: "Where's the book that I can hand out to customers?"  Needless to say, we had our hands full with the California consolidation and the new acquisition of StorageTek.



I am happy to say that today we finally finished that task. You can download the blueprint here:


Energy Efficient Datacenters: The Role of Modularity in Datacenter Design.


In August 2007 we finished the California consolidation project on schedule, under budget and with the ability to almost triple our densities as our load increased, all while maintaining a brutally efficient PUE. We had also completed an Eco Launch that highlighted some of our successes in the form of Solution Briefs and a Datacenter Tour Video. We were well underway with the next largest consolidation, Louisville, CO (StorageTek) to Broomfield, CO. In addition, we received a deluge of requests for tours of the new Santa Clara datacenter. Our customers, industry and partners wanted to see what we had done firsthand.


We started on the blueprint in January 2008, on top of the 40+ active projects we had globally. We knew this was important and we wanted to fulfill the commitment we had made to Jonathan, albeit later than he had likely expected. In the end, I am really glad we waited. The blueprint is full of examples of what we did right, what we did wrong and what we believe the future holds for datacenter design. Remember, we are the company that solves our customers' technical problems with our technology innovations, but we are also generating more heat in the datacenter with those solutions. This blueprint is a guide to help our customers understand our approach and the lessons learned along the way. The datacenter is a complete ecosystem that requires a dynamic balance between the IT load and the support load to enable flexibility and ensure efficiencies for both economic and ecological gains. In my job, my team is blessed to work with the product groups building the next generation of equipment.  These are the products that will be rolling into our customers' datacenters 1-3 years from now.  In other words, it's like having a crystal ball.  We have tomorrow's technology in our datacenters today, including the power, cooling and connectivity challenges that come along with it.


To date, we have had over 2,200 people walk through Santa Clara. The Datacenter Tour and Solution Briefs have been downloaded over 12,000 times from the sun.com website.  We also distribute this content on memory sticks to the thousands of people on the tours and at the different conferences where we speak.


This blueprint is the first of nine chapters that will be released over the next 12 months. I encourage you to give your opinions and suggestions or raise your questions and concerns through this blog entry, email or the blueprint wiki site.


Stay tuned. This is just getting fun... :-)

Sunday Apr 20, 2008

I'll show you mine...



 

So how efficient is your datacenter?


Last month I received some pretty cool news.  The Chill-Off we have been hosting for the Silicon Valley Leadership Group (SVLG) as part of their datacenter demonstration project was completed.  This was a head-to-head test of APC In-Row, Liebert XDV, Spraycool, Rittal Liquid Racks and the IBM Rear Door Heat Exchanger. Sun was the host, providing the datacenter, plant, compute equipment and support.  Lawrence Berkeley National Laboratory (LBNL) conducted the test on behalf of the California Energy Commission (CEC).  The results from this test will be published in a report in June of this year, and we will be hosting the event.

But one piece of information came out of this that I could not wait to share.  As part of the Chill-Off, LBNL did a baseline of our plant in Santa Clara.  Mike Ryan on my staff then captured the usage data for the remaining portions of the datacenter.  This gave us our PUE or DCiE (pick your favorite) number.

 

Surprise, Surprise!

I knew that our datacenter would be efficient because of the way we had designed it, but I did not have any data to back it up yet.  We had been telling our customers, and others who toured the center, that we were pretty sure we would be under the industry-targeted PUE of 2 (a metric created by Christian Belady of the Green Grid).  That was a conservative number.  But when I got the data back, even I was surprised at how efficient it was.

 

We achieved a PUE of 1.28!

 

That was 36% more efficient than the Uptime Institute target (2) and almost 50% more efficient than the standard PUE (2.5)!

For those of you who may not be familiar with PUE, it stands for Power Usage Effectiveness, the Green Grid's measure of datacenter efficiency.  In other words, it tells you how much total power you draw for the equipment you actually run.  With a PUE of 2, for every watt delivered to the compute equipment you burn another watt in the supporting infrastructure (chillers, UPS, transformers, etc.).  That means if you have a 1MW datacenter, you would only have 500kW of power available for compute equipment.
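Here is a quick sketch of that arithmetic in Python (the 1MW facility is just the illustration above, not a measured number):

```python
def it_capacity_kw(total_facility_kw, pue):
    """IT load a facility can support, given PUE = total facility power / IT power."""
    return total_facility_kw / pue

# The illustration above: a 1MW facility at the industry-target PUE of 2
print(it_capacity_kw(1000, 2.0))   # 500.0 kW left for compute equipment
print(it_capacity_kw(1000, 1.28))  # 781.25 kW for the same facility at a PUE of 1.28
```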

This means that with a PUE of 1.28 our datacenter can run 798kW of equipment load with only 225kW of support load, for a total load of 1,023kW.  In a datacenter with a PUE of 2, running that same 798kW of equipment load would require 798kW of support load, for a total of 1,596kW.

Bottom line: we use 573kW less support power to operate the same equipment.  That equates to a power bill roughly $400k lower per year (using the industry-average rate of $0.08/kWh).
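The savings math, as a minimal sketch (the only assumptions are the ones already stated: the $0.08/kWh industry-average rate and 24x7 operation, i.e. 8,760 hours per year; the small difference from the 573kW above comes from rounding the support load to 225kW):

```python
IT_LOAD_KW = 798                  # measured equipment load
PUE_OURS, PUE_TARGET = 1.28, 2.0
RATE_PER_KWH = 0.08               # $/kWh, industry average used above
HOURS_PER_YEAR = 8760             # 24x7 operation

def support_kw(it_kw, pue):
    """Support (non-IT) load implied by a given PUE."""
    return it_kw * (pue - 1)

saved_kw = support_kw(IT_LOAD_KW, PUE_TARGET) - support_kw(IT_LOAD_KW, PUE_OURS)
print(round(saved_kw))                                   # ~575 kW less support load
print(round(saved_kw * HOURS_PER_YEAR * RATE_PER_KWH))   # ~$400k per year avoided
```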

I don't know about you, but this solidifies the statement we have been making all along: economics drive the decisions in datacenters.  But if you build efficiencies into your datacenters, you will achieve both economic and ecological results.  This PUE of 1.28 not only reduces our overall operating expense every year, it also lowers our carbon emissions substantially!  We're only using the power we have to.  No Eco hype, this is Eco reality.

Keep in mind that this PUE of 1.28 was achieved in our highest density datacenter globally.  It is also built on slab.  No raised floors to be seen.  The other aspect is that the datacenter is only 60% full (physically). We moved all of the equipment from Newark, CA but are continuing to add more equipment from our other campuses.   It is also a heterogeneous datacenter.  We have a mixture of all types of equipment from many vendors. Our plant is designed to 9MW (phase I) and we're drawing 3.5MW so far.  So, we can achieve extreme efficiencies with a mixture of equipment, even when we're not running at max capacity.  (Note: the remaining 13 datacenter rooms that make up the remaining load were not measured in the chill-off, but are all fed by the same efficient plant).

In support of our Eco Strategy, "Innovate, Act & Share", I want to offer full disclosure as I share. I want everyone to understand that this room is a Tier I datacenter.  We don't have as much redundancy as higher Tier level datacenters, but the design approach is the same.  Our Variable Primary Loop chiller plant design is the same one we would use in our Tier III sites.  We have removed an entire set of pumps and reduced the pipe distances because everything is dynamic from the plant to the rack. We only use the energy we have to. For all Tier level datacenters we would also be choosing very efficient transformers, UPS and other components.  So, no matter what Tier level you are trying to build, you can achieve these efficiencies if you embrace next-generation designs.  But, regrettably, many design and construction firms out there are hesitant to do this.  You need to be specific about what you want and squelch the naysayers.

Here is the chart that shows the details.  PDF version is here.


So, the question is: I've shown you mine.  Can you show me yours?

Better yet, would you like Sun to help design your datacenter to achieve the same efficiencies?  Let us drive the next generation of physical and technical solutions for you.  Sun's Eco Virtualization practice can do just that.  Email me at dean.nelson@sun.com and I'll tell you how.

 

Sunday Sep 30, 2007

The liquid sky is falling!

How many of you have heard that very soon, we MUST have liquid delivered directly to racks to cool them? To me, this is the same hysteria as "the sky is falling".


How many skyscrapers will you be building?


Today, the industry average for rack heat loads is between 4-6kW. In most datacenters, racks with loads >15kW are the minority. This is analogous to having a small number of skyscrapers in a city, surrounded by a huge number of smaller buildings. While there are many new boxes coming out that increase the load per rack, the average load will still only increase in 2-4kW increments over time. Skyscrapers aren't built overnight. There will not be a rapid replacement that drives racks to an average of 20kW unless you are running a computation-hungry facility that uses uniform equipment that is >90% optimized. The standard replacement of equipment in asset life cycles will drive the average load per footprint up, but the majority of companies will not have average loads per rack at these levels. The ramp is getting steeper, but it will take time, and money, to happen.


Future rack trends


With that said, there are, and will be, configurations requiring cooling for >30kW per footprint. You can use traditional localized cooling for racks up to 30kW in a mixed environment. But when you pass this point, you can no longer force enough air through the equipment to cool it. The economics also start to hurt. Fans and chiller plants forcing air to extract heat account for 33% of datacenter operational costs. Liquid cooling really is the next step. But I believe the liquid will be refrigerant, not water. Refrigerant is more efficient across coils, giving even cooling distribution to components. It is also much less of a risk in a datacenter, since it changes to a gas if there is a leak. In conjunction with the cooling, power capacities will also need to increase. I believe it will be 480V direct, rather than 208V. This is the coming trend for racks in datacenters. My projection is 2 years for racks like this to be readily available from numerous vendors, but >7 years for the average load per rack to reach these levels.

My opinion is that companies do not need to consider liquid-cooled racks until the average load breaks 15kW. With tightly coupled air cooling and containment, you can achieve very efficient solutions for these skyscrapers, as well as for the other racks in the datacenter. Here is a picture of our highest-density datacenter, which is currently cooling loads with an average of 12kW/cabinet. This design can scale to an average of 18kW per rack. Keep in mind that we can roll 30kW racks into this solution. The key here is that the average is 18kW. So you can have a wide range of rack loads in the same datacenter and still achieve cooling efficiencies. This 2,132 square foot room will have 720kW of equipment load. And...it is on a SLAB! It is the highest-density, and highest-efficiency, datacenter in our portfolio of 1.3M square feet.
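For a rough sense of what those numbers mean, here is a back-of-the-envelope density calculation using only the figures in this post (the cabinet count is implied by the 18kW design average, not an actual inventory):

```python
ROOM_SQFT = 2132                 # room size stated above
DESIGN_LOAD_KW = 720             # design equipment load
DESIGN_AVG_KW_PER_RACK = 18      # design average per cabinet
CURRENT_AVG_KW_PER_RACK = 12     # what the room is cooling today, on average

print(round(DESIGN_LOAD_KW * 1000 / ROOM_SQFT))    # ~338 W/sq ft at full design load

racks = DESIGN_LOAD_KW / DESIGN_AVG_KW_PER_RACK    # 40 cabinets implied by the design average
print(round(racks))
print(round(racks * CURRENT_AVG_KW_PER_RACK))      # ~480 kW at today's 12 kW/cabinet average
```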

So, don't believe the hype or the doomsday projections. The sky is not falling. This is a very predictable curve that doesn't require you to take drastic action to solve your datacenter cooling loads. You just need to keep it simple. Our Santa Clara datacenter design covers current equipment loads and scales for next-generation equipment, today.


