Wednesday Aug 26, 2009

My blog has moved...

My blog has moved to Data Center Pulse.  You can follow my new blog here.  Subscribe with RSS here.

http://datacenterpulse.org/blogs/geekism

Stay tuned!

Friday Apr 10, 2009

The Chill Off Is Ready To Go!

The Chill Off testing will begin in two weeks.  In this Data Center Pulse episode, Brian Day and Mike Ryan step us through the test bed where the chill off tests will be conducted.  A lot of companies have donated their design expertise, parts and labor to get this ready.  This includes Sun Microsystems (test location, infrastructure, test equipment), Redwood City Electric (electrical design and installation), SynapSense (meters, sensors and other controls), Western Allied (mechanical design, parts and labor), California Hydronics Corporation (water control pump), Norman S Wright Mechanical Equipment Corporation (variable frequency drive), F Rodgers Insulation & Specialty Contractors (chilled water pipe insulation), and Liebert Corporation (refrigerant pumping system - XDP).

 

 

Make sure to watch the video in HD!  The button is on the bottom right (HD).  God bless Google's YouTube upgrade!  :-) 

Watch the Data Center Pulse YouTube Channel for more updates. We will be highlighting each of the vendors' technologies as the testing is conducted. The testing is scheduled to wrap up by August 15th. The results will be presented at the Silicon Valley Leadership Group Energy Summit slated for the end of September, 2009.  Additional Chill Off details can be followed at the official Chill Off site on http://datacenterpulse.org.

Wednesday Apr 01, 2009

Holy Battery Backup Batman!

 

 

Today I attended Google's Efficient Data Centers Summit at their Mountain View, CA campus.  They unveiled how they achieved their average PUE of 1.21 across their six large datacenters.  It was great that they shared with the public how they measure and what innovations they have.  For cooling, the approach was pretty simple: closely coupled cooling, raising the temperatures in the datacenters, and utilizing economizers.

What jumped out the most was how they were able to achieve 99.9% efficiency in their UPS.  Talk about a different approach.  Instead of trying to raise the voltage and eliminate transformers, or using DC power, or other methods, they solved the problem in the server itself.  As many of you may be aware, Google manufactures their own servers.  What they decided to do was remove the UPS altogether.  They simplified the motherboard design to deliver only one 12V line to the motherboard and then let the motherboard distribute the power out to the hard drives. Then, they put a simple battery on the server itself.  Consider it like adding a car battery to the server, just like a laptop.  Since it only requires 12V, it can be small.  Now, if there is a brownout or a spike, the in-server battery takes over while the facility switches to the generator. Simple.  Very simple.

I think this is the type of thinking that really changes the game. When you take away the constraints, you can come up with innovative solutions.  Granted, this is not the single answer to datacenter problems, but it is a very real solution to a very real problem.  Google has the money and the clout to make things like this happen (imagine the power supply and motherboard suppliers that had to change their designs to deliver for Google).

I applaud Google's effort and look forward to further innovations.  Now, if they would just let the world actually walk into their datacenters. A virtual tour video is nice, but there is nothing better than seeing it first hand.  Sun has been doing that for the last two plus years.  We show what works and what doesn't.  That goes a long way.

So Google, let me know when I can come and see your Data Center!  I'm sure it has got to be one of the coolest batcaves in the industry!

Below is a picture and YouTube video of Ben Jai, Server Architect, showing the battery-based server from Google.


 

 


Thursday Mar 26, 2009

The Internet IN A BOX!

The entire internet in one box.  Sound unbelievable?  Not so. Today, Sun launched a single Sun Modular Datacenter (SunMD), housed in our Santa Clara datacenter, that contains the Wayback Machine from Internet Archive.  IA has captured the web in the form of pages, graphics, videos, JavaScript and more for the past 12 years!  Yes, the past twelve years.  Just imagine, all that "creative" stuff you posted on the web throughout your career (and yes, you know those pictures and videos I'm talking about) has been captured by the Internet Archive crawlers.  I even checked one of the sites I had created in 1996 for a nonprofit called Child Quest International, Inc.  It was just as I had built it (boy, my graphics were archaic then).



I got involved with the project when Greg Papadopoulos (our CTO) and David Douglas (our CSO) donated a SunMD to Brewster Kahle, Digital Librarian, Director and Co-Founder of Internet Archive. We then worked out a way to host that container in our Santa Clara, CA datacenter.  We decided to place it in the courtyard between the two main datacenter buildings.  We bored through the wall to the plant, tapped the chilled water, piped to the UPS/GenSet and wired the internet access.  Then we poured the concrete and craned in the container.

 


Finally, it was filled with 4.5 petabytes' worth of Sun X4500 storage (thumpers).  In just a few months, we had successfully deployed this container-based solution into our site.  Amazingly fast, and cost effective when you think of the depreciation schedule you can apply to owned buildings.  For us it's 25 years.  We had already prepped for two more containers at the same time.  $90k of additional pipework and concrete over 25 years is peanuts.  The second container is already in place (right next to the IA SunMD) and will be used for the Chill Off 2 (CO2) testing starting April 1st!

 

 

I was bummed that I wasn't able to participate in the event, but I was traveling around Europe doing other data center stuff. But my team was there to answer questions and tour attendees through the SunMD and the surrounding datacenter complex that it sits in.  I'd like to thank a few folks on my team for making this project happen: Mike Ryan (mechanical/electrical), Serena Devito (networking & technical support), Brian Day (coordination) and Alex Menchaca (technical support).  Without them, we could not have pulled this off as quickly as we did.

Watch Sun's rich interactive, multi-media web story on the Internet Archive here.  Michelle Sadowski did an incredible job driving this project and this multi-media product really shows it.

See more photos of the event from Jefferey Stein, Chairman of the IT History Society, on his shutterfly page.

Read the success story here.

 

   


MEDIA INTEREST

And, you can only imagine how much press came out of this. Here are some of the articles.

1. Sun upgrades Internet Archive to 4.5 petabytes - vnunet.com, Iain Thomson; March 26, 2009 

2. Wayback Machine stowed in shipping container - Computing (UK), Dave Bailey; March 26, 2009

3. Sun store the Internet in a Box - Slash Gear, Chris Davies; March 26, 2009

4. Sun Packs 150 Billion Web Pages into Meat Locker - The Register, Cate Metz; March 25, 2009

5. Sun Crams the Entire Internet in a Box - GigaOM, Stacey Higginbotham; March 25, 2009

6. All the Web is Stored on One Sun Data Center - San Francisco Chronicle, Deborah Gage & Ryan Kim; March 25, 2009

7. Internet Archive, Sun Create ‘Living History’ of Web - San Jose Business Journal, Staff; March 25, 2009

8. The Internet Archive’s Wayback Machine Gets a New Data Center - Computerworld, Lucas Mearian; March 25, 2009

9. Sun and Internet Archive Store Internet in a Box - InfoStor, Staff; March 25, 2009

10. Internet Archive Upgrades Wayback Machine - PC World, Lucas Mearian; March 22, 2009

11. Huge Internet archive data centre opens - The Inquirer, Nick Farrell; March 20, 2009

12. New Digs for the Wayback Machine - IT Business Edge, Arthur Cole; March 20, 2009

13. Internet Archive to Open Data Center - Web Hosting Industry News, Justin Lee; March 20, 2009

14. Internet Archive to unveil massive Wayback Machine data center - Computerworld, Lucas Mearian; March 19, 2009

15. Data Storage Slideshow: Internet Archive Gets a Place in the Sun (Portable Data Center) - eWEEK, Chris Preimesberger; March 26, 2009 

Friday Nov 14, 2008

This puppy's growing!


I am still amazed with how quickly you can reach a global audience. It's been 61 days since we started Data Center Pulse.  Since then we have secured 312 members from 180 companies in 21 countries representing at least 20 different industries! The member list is growing like mad.

As Mark and I continued to recruit members, a realization set in: the companies represented on Data Center Pulse come from every industry. This group spends or influences hundreds of billions of dollars every year. They build the engines that run banking, medical support systems, schools, military, government, transportation systems, and this little thing called the internet. :-)  Over the weekend we brainstormed how to best leverage the strength of this group. What is it that most of the people are interested in? What would be worth their time? Personally, I benefit greatly when I am able to sit down for detailed discussions and debate on data center topics with my peers in the industry. I really enjoyed the round table we held at the AFCOM session and wished there could be more.

So, Mark and I decided to take the next step.  We want to get this community together for a face-to-face working session.  The goal is to discuss and debate the top 10 topics that are on the minds of these owners and operators.  What are those topics? We want the community to decide.  Who will lead these discussions?  We want the leaders to be selected from the community. What will we do with this information? The topic leaders will present it out to the industry. What will the next steps be? The community will decide. In other words, the Data Center Pulse members will create the agenda, content, output and follow-up for this summit. When we harness the knowledge and experience of this group along with the challenges they face, some very useful information will be generated. Wouldn't you like to know what's on the mind of your customers? What they want?  What they need?  With this summit, you will get it directly from the horse's mouth.

Data Center Pulse Summit 

On February 17-18, 2009 we will host the first Data Center Pulse Summit in Northern California. You can read more about this event through the latest blog entry.

Keep in mind that this is an invitation-only event for Data Center Pulse members. But anyone in the industry can submit topic ideas through this survey to be considered for the different tracks. We have also partnered with the AFCOM Northern California Chapter to host the read-out of the topic findings immediately following the event (February 18, 2009).  We have also partnered with Teladata to have the output of these 10 topics discussed at round tables at their Technology Convergence conference the following day (February 19, 2009).

We have high hopes for this summit and would appreciate your input. Feel free to email dcpulse@me.com with any suggestions.

Sunday Oct 12, 2008

Secret Agent

Let me tell you why I have the coolest job at Sun.  First, I get to build datacenters all over the world, talk with thousands of customers, build new communities to unite datacenter peers globally, and play with all the new Sun technology years before it is released.  But, today I did something I have never done before. I became a Secret Agent...

The Marlowe-Pugnetti Company used our Santa Clara campus to film some scenes for their upcoming movie called "The Awakened". James Marlowe, the director, contacted Sun a few months back and asked if he could use the campus to finish his movie.  As soon as he saw the datacenter, he knew it was perfect.  It had "the high-tech look" for the agency, like it was in a bunker 30 floors underground. The "secret government agency" called S.H.A.D.O.W. had an intense situation room scene in the Mansion as well as a datacenter scene.


Brian Day (my chief of staff in real life) and I were extras playing agents from S.H.A.D.O.W., the Specialized Homeland Alien Department of World Wide Security.  We played well-dressed datacenter technicians (we all know they usually wear shorts) working on the supercomputer in the back. They put a laptop on top of a pull-out KVM and had the Sun servers in the background. The laptop had some great computer animation, including scrolling code (no idea what it was, but it looked good).  The geeky agency guy realizes there is a major problem and has to pull out his flash drive to keep the evidence. Brian was in the background playing with a power cord and I was in the back having a conversation with Jennifer, another extra, near the other "supercomputers".  This was Jennifer's first time as an extra as well.  She is a regional sales manager from Dynasplint. Looks like we will have our 3 seconds of fame in that scene (if it's not cut).  :-)  But, best of all, our datacenter and Sun servers will now be immortalized in the movie.




The next scene I was in was in the mansion, sitting with about 20 other extras simulating a war-room environment for this agency. Christina Lucke, from Sun marketing, was also able to join the scene and sit next to me at the conference table. Everyone but me seemed to get makeup. I guess my forehead doesn't shine.  :-)  The interesting thing was the other extras were from Applied Materials, VMware, and even a Santa Clara coroner (72 years old and doesn't plan to retire). I believe these were just different people that James has worked for or with in the past on other corporate-type jobs.  It's all in who you know, eh?  All of us were different ages and nationalities, and it made the scene look good.  Another extra was Hooman Khalili, the Alice radio personality and star of HoomanTV on YouTube.  In the scene, we were talking amongst ourselves as the head guy came in. Robert Picardo, who played the Doctor on Star Trek: Voyager and is the new commander on Stargate Atlantis, plays the agency head in the movie. He is an all-powerful commander type who lambastes a newly transferred agent from the NSA who is questioning the agency's spending and practices.  All of us had to give different reactions and facial expressions to the exchange between the agency head and the NSA agent.  It was a pretty cool scene that took about 3 1/2 hours to film.  Four angles: wide, closeup and background shots. Robert was really nice and kept cracking jokes between scenes.

Throughout the day, James knew exactly what he wanted in the shots and seemed to really love what he was doing. Then again, who wouldn't love filming a movie? :-) While I was sitting at the table in the conference room scene I had my laptop open.  We had to be talking about something in the scene, so I pulled up a black box slide.  I doubt it will make it in, but it was great behind-the-scenes dialog before the main actor walked in. I think I remember saying something about "world domination"; I doubt that will make it into the final scene either.  :-)

 


In the end, the mansion and the datacenter were perfect locations for the shoot.  I think it will add quite a bit to the production.  So, back to the datacenter (until Steven Spielberg calls).  :-)


  

Tuesday Oct 07, 2008

Chill-Off 2

Based on the feedback from the AFCOM NorCal chapter meeting roundtable and follow-on meetings with SVLG and LBNL, we have finalized the approach for the Chill-Off 2.  Below is the stack that I created to represent all of the different layers that will be covered in this test.  It is a much more aggressive goal this year.  I have also included the updated presentation (pdf) that describes the Chill-Off 2 strategy and project information.  We are still searching for a few more companies to participate.

Please provide any feedback or suggestions directly to dean.nelson@sun.com.



Tuesday Sep 23, 2008

AFCOM NorCal - The Silicon Valley Chapter

On Wednesday September 24, 2008 we hosted the second AFCOM Northern California chapter meeting in Sun's Santa Clara, CA auditorium. I'm the board VP for this chapter that started in April 2008. The focus for this meeting was the Chill Off 2 and the first Data Center Pulse round table.

CHILL OFF 2

After Jim Gammond (membership), Eric Stromberg (President) and Maricel Cerruti (Communications) concluded chapter business, I presented a quick summary of the findings from the first chill off we hosted in June 2008. Then I gave a summary of the approach for Chill Off 2. Next we brought up a panel of subject matter experts to discuss the details between each other and the 80+ professionals who attended. The panel I moderated included:

  1. Mike Ryan, GDS Sr Staff Engineer - Sun Microsystems

  2. Olivier Sanche, Sr Director - Data Center Operations for eBay 

  3. Phil Hughes, CEO of Clustered Systems

  4. Bill Tschudi, Lawrence Berkeley National Labs (LBNL)

Mike was there to represent the vendors from the last chill off and the testing environment. He was the technical lead for the first chill off. Bill will be conducting the testing again this year as an independent party. Olivier has agreed to provide the workload definition that answers two important questions for eBay data center investments/operations. Phil is a new vendor in the test with a passive cooling solution that removes all the fans from the servers. We started with a few questions from me and then opened it up to the crowd for questions and input. It was a very lively session, with most of the dialog coming from the audience, which represented the entire data center community from consultants to engineers to manufacturers to end users. We received excellent feedback and were able to achieve what we set out to do: have the community influence the direction of the Chill Off 2.

Some of the learnings/changes that will be made from this session:

  1. Possibly have a mixture of hardware/vendors (blades, 1U, 2U, etc.) in the test rather than all one form factor (Sun x4100) - or do both? 

  2. 2N testing down to N to compare efficiency losses.

  3. Fault insertion that shows both equipment failures and human faults, such as opening the door of a container for maintenance, leaving a hole in an APC HACS, etc. Also, simulate moves, adds and changes and their effect on the test environment.

  4. Mix the workload in the test, not serial tests. So HPC, web and enterprise all in one test as the mixed workload.

  5. Add results from a tuned/contained raised-floor/CRAC-based datacenter to the chart. Show the real differences, not just theoretical, against all the solutions.  The current chart shows a traditional open raised-floor/CRAC installation.

  6. Standard communication protocol for data collection? How easily and effectively do these solutions tie into a BMS?

  7. Compare main PDU, in-rack PDU, wireless sensor, environmental monitor and server sensor efficiency. Include a mixture of vendors that can meter down to the plug level and see how accurate they are compared to the internal server readings.  Take temperature and humidity readings across the equipment and the cooling devices to see how accurate they are.

  8. A fully isolated environment so there are no questions about other in-room conditions affecting the tests. Also have the ability to watch the effect of raised water temps and raised inlet temps on each solution.

  9. TCO of each solution? This would be difficult, but would be very interesting data.

  10. Remove all the server fans in the container and compare that to the regularly loaded test. In other words, only use the container's in-line cooling fans to move air rather than the server fans.

  11. Do a full 3D CFD model of the solutions and compare that to what is actually seen (Future Facilities).

 

The presentations with updates based on the session are coming soon. I would appreciate feedback on any area of the test/presentation. If there are companies that would be interested in including their products, services, support, expertise or time, please email me: dean.nelson@sun.com

 

DATA CENTER PULSE ROUNDTABLE

After a break for networking, we started what I hope to be a regular occurrence at data center chapter meetings like this all over the world. Earlier in September, Mark Thiele and I started a new exclusive group that only includes datacenter owners and operators, called Data Center Pulse. You can see the details in my earlier blog entry.

The panel we assembled was quite impressive. We had data center owners from Cisco, Apple, VMware, eBay, Stanford and the CEO of IDS, a new startup company putting datacenter co-lo space on container ships.

I asked a number of questions that had come up in our Data Center Pulse group through LinkedIn, some others I had prepared, and questions that came out naturally during the dialog/debate.

Check out the specific questions and responses in this round table session through the Data Center Pulse Blog. (click on blog)

In the end we ran out of time with the amount of discussion and healthy debate. After we were finished, I really felt that we should have filmed the session for the rest of the community to watch. Later, an industry friend of mine suggested that we hold these kinds of roundtable discussions on satellite radio so everyone could benefit from them. I thought that was a great idea, but we should try a different medium: video podcasts or YouTube. I travel quite a bit and could film these sessions and have interviews with different DC owners all over the globe. So, starting next week, Mark and I will be filming the first episode introducing the group. We have identified about 10 sessions we want to film, including an upcoming trip I'm taking to Barcelona, Spain, where I hope to have another DC Pulse roundtable session.

I'd be very interested in input from DC professionals around the globe. Is this something you would watch? Is it something you would want to participate in as a panelist? Would you be interested in letting us tour your DC to share the challenges and your solutions/learnings first hand with the community? Other ideas? We'd love to hear them.

Sunday Sep 14, 2008

Data Center Pulse: The Community

This weekend I created a new group on LinkedIn to bring the global datacenter community together.  It is called Data Center Pulse.  DCP is an exclusive group of global datacenter owners, operators and users. It already includes many from some of the largest datacenters in the world. DCP will track the pulse of the industry through discussion and debate, with the goal of influencing the future of the datacenter.

No vendor consultants or individuals with primary roles in a sales, marketing or business development capacity will be allowed to join the group. Members must participate in, or be responsible for, one or more of the following within their own company:

  • Data Center Strategy
  • Data Center Architecture/Design
  • Data Center Operations
  • Data Center Efficiency
  • Data Center Sustainability
  • Data Center Use

There are specific reasons why these rules for membership have been established.  I have been involved with a number of groups, forums and online bodies that have attempted to tap into the different datacenter trends.  The problem has usually been that they turn into individual promotion or company sales pitches.  Now don't get me wrong, consultants, business development, marketing and sales professionals provide great value to companies and the industry, but that is not what this group is about. We want to have open, passionate discussion and debate around the problems data center professionals are facing, directly.  We then want to take the problems and ideas generated within the group and share them with the industry. In the near future, the threads generated in this group will be posted to http://datacenterpulse.com.  All participant names and companies will be stripped from the threads to ensure our members' anonymity.  (The DCP board maintains the right to remove discussion content if it is deemed inappropriate.)  The board will also frequently scrub the member list to ensure that the community stays end-user focused. 

Anyone who is not part of this group can submit questions to the community by emailing them to dean.nelson@sun.com.  (It is at the sole discretion of the DCP board whether or not these questions will be posted.) 

Our goal is to build this global community to 1000 strong by the end of 2008. 

The more DC professionals we can add, the better.  To achieve this goal, we could use your help in recruiting for this community.  If you would like to participate, please request to join the Data Center Pulse Group.  Then, plan to bring it to the table...  :-)

If you have any questions feel free to contact me (Dean Nelson - dean.nelson@sun.com) or my co-chair, Mark Thiele, (mthiele@vmware.com) Director of R&D Business Operations from VMware.

Monday Jul 28, 2008

Innovation Award

I don't think I could smile any wider than I am right now.  On Wednesday July 16th, my team received Sun's Innovation Award at the Marriott in downtown San Jose, CA. The Innovation Award is one of the highest recognitions you can receive at Sun. It is presented by our CEO, Jonathan Schwartz, and CTO, Greg Papadopoulos, at Sun's annual leadership summit for all global VPs, CTOs and Distinguished Engineers. To make this award even sweeter, it is the first time in Sun's history that the Workplace Resources group (Real Estate) has received this award.

 

 

 

On award day, my small global team converged on San Jose. At the ceremony, Greg took the time to describe in his own words (I loved that he didn't just read something from a script) why our contributions have been so innovative for the company. One anecdote he cited was that he received a call from one of our largest customers to fly to their corporate headquarters and discuss datacenters. This customer was one of the 2,200 people who toured our Santa Clara datacenter over the last year.  They are spending half a billion on new datacenters and wanted to get some help. So Greg, Subodh and I are flying down to chat further.  The fact that our customers see us as thought leaders in not only the technology arena but the underlying environment that allows those solutions to be deployed, e.g. datacenter design, is fabulous. I really appreciated the way that Greg was able to articulate our contributions to the company on many different levels. The team was on cloud nine. I have to say that we owe our success to the support from our executive management. It starts with David Harris, my boss, who runs Workplace Resources and was the one who started this program four years ago. He has been actively supporting the program since his return last year. Then to Bill MacGowan, the EVP of People & Places, who has embraced our efforts and never wavered in his support as we traversed the difficult path of change in the company. Without them, we would not have been able to get any traction.  With them, I feel like we have jets strapped to our backs.  :-)

Besides the obvious, there were two additional benefits I saw when we received this award. First, the Real Estate and Facilities teams are now receiving recognition for the difficult work that they perform. Like our IT group, it can be a thankless job. Usually service organizations become visible when there is a problem. When things are going well, people forget that the service is there. Secondly, it is a new, valuable connection with our customers. Having a peer-to-peer conversation about how we have solved our current internal datacenter problems, as well as how we future-proofed our datacenters to scale with next generation equipment (which we already have in our R&D datacenters) without additional investments, is extremely valuable insight for our customers.

This is the award description:

    Sun's Innovation Award recognizes those individuals and teams who have made a significant contribution to Sun through innovation. Innovation is a starting point for the Sun Strategy and is key to helping differentiate Sun and attract communities to Sun. Product, process, and project innovations have increased Sun's ability to grow, make money, build our communities, enlist champions, and accelerate our business. The purpose is to reinforce and recognize exceptional performance related to a key pillar of Sun's strategy and one of our key values: Innovation.

    Innovation happens across the world and in many functions and businesses. We want to recognize those innovators across our Sun community who make a difference in our business. Whether it is a project which has made an impact, or new products coming to market which have challenged conventional thinking and changed the marketplace, or a process that has driven significant speed to market or cost efficiency, we want to hear about it.

    Product: A new product or service that provides Sun with a competitive advantage in the marketplace and grows Sun's revenues.
    Process: Development of a breakthrough business process innovation that accelerates our time to market, decreases cost, improves quality, or gives Sun some other competitive advantage.
    Project: A solution that provides Sun with a competitive advantage.

This is an excerpt from an email that Bill MacGowan, Executive Vice President of People & Places, sent to our organization.
    I'm delighted to announce that seven Workplace Resources employees received the prestigious Innovation Award at last night's award ceremony. It's the first Innovation Award ever earned by Workplace Resources, and I'm excited to congratulate the entire Global Lab and Datacenter Design Services team:

      Brian Day
      Serena DeVito
      Ramesh K V
      Dean Nelson (Team Lead)
      Brett Rucker
      Michael Ryan
      Petr Vlasaty

    Sun has been facing the same datacenter challenges as our clients - in space, power, cooling, and rising utility costs. To bridge the gap between Facilities and IT/Engineering my boss, David Harris, started the datacenter and lab design competency center within his Workplace Resources organization. This team, led by P&P employee Dean Nelson, enabled a successful consolidation of Newark and Sunnyvale labs and datacenters into one efficient datacenter in Santa Clara - reducing lab space from 202,000 to 72,000 square feet, and consolidating 152 rooms into 14.

    This team has truly transformed the way datacenters and labs are built at Sun, lowering datacenter utility costs by a remarkable 50% and reducing Sun's global carbon emissions by over 1% in just three months. As a result of this effort, Sun received $1.2 million in energy rebates plus a $250,000 "Innovation Award" from Silicon Valley Power.

    In addition to addressing Sun's challenges, Dean Nelson's team has shared their design best practices with customers. Further, they've toured over 2,200 people from more than 300 customer companies through the Santa Clara datacenter, and published solution briefs and videos on sun.com that have been downloaded more than 11,000 times. Finally, they just published the Sun Blueprint which details their design methodology: "Energy Efficient Datacenters: The Role of Modularity in Datacenter Design." Available free for download.

     

As my team sat down to dinner that night a few of us had a similar thought. Sean Connellan, my boss who passed away last year, would have been very proud of this achievement. He was a huge supporter of our efforts and constantly pushed us to show our value proposition for the company.  

Sunday Jul 20, 2008

Chill-Off!

On April 2, 2007 I met Ray Pfiefer from the Silicon Valley Leadership Group (SVLG).  He was soliciting help in datacenter demonstration projects for SVLG.  On May 21, 2007 I signed us up to host a demonstration project to compare cooling products for datacenters.  We were undergoing a major consolidation and had the perfect environment to be able to host these tests.  At that point I didn't realize how much we had signed up for.

Over the next year we coordinated the testing of five different modular, closely coupled cooling solutions.  It was dubbed the "Chill-Off".  Brian Day, my chief of staff, was the program manager; Mike Ryan assisted in all of the mechanical/electrical details; Serena Devito coordinated all server and network needs; and Eric Garcia was the technician who racked, re-racked and put the sweat into getting the tests set up. Power Assure did the server characterization and load testing.  Modius did the measurement of over 1,500 points, collecting over 40 million measurements. Tim Xu from Lawrence Berkeley National Labs (LBNL) was the independent party that defined the testing methodology, then executed the tests and reported the results.  The participants were Liebert XD, APC InRow, Rittal LCP+, IBM/Vette Cool Blue and Spraycool chip-level cooling.  For this test we decided not to use new, highly efficient equipment. We used older V20 Sun servers to create a more hostile environment for the solutions.

My goal in hosting this test was to find out how efficient these devices really were. We design datacenters all over the world and talk to customers and peers building datacenters almost every day.  I was tired of hearing the marketing hype, opinions and positioning of these solutions.  I wanted an independent test that could be shared with the industry so we all could make our own call on which one will solve the problem.

The effort paid off.  On June 26, 2008, we hosted the SVLG Data Center Energy Summit on our Santa Clara, CA campus. We had over 300 datacenter customers and industry professionals show up to hear the results of this test as well as 10 others.  I had purposely not looked at any of the results during the test.  I waited until the draft reports were finished and reviewed them just like my peers and customers would.  I then took what I gleaned from the reports and created a simplified presentation of the findings.  I told the participating vendors that I wanted to spice the presentation up.  The people didn't come to hear me talk; they came to get content to help them make informed decisions.  So, I decided to present a quick history, have Tim describe the testing methodology, show a single results slide comparing the vendors, and then conclusions.  I got through this as quickly as I could and then called representatives from each of the companies on stage for a panel debate. I prepared a list of questions to ask them that would give them a chance to defend their results and spark debate. It was a blast. To give you an example, I asked Rittal why their numbers were so much higher than the others.  I asked Liebert and APC to tell me why I would buy one of their products since they tied. I also asked all the vendors why we wouldn't just raise the inlet temperatures of all the IT equipment to 80+ degrees F, use economizers and eliminate the need for their modular products. That one was fun. All of the vendors took my questions and attempts to increase the debate very well.  I was hoping for more fireworks, but it was a great session. The audience was very receptive to the format and the questions.

You can get a copy of my presentation here (pdf format).  You can get the final Accenture/SVLG report here.  All of the case studies can be found here, including our Modular Cooling Test, the Chill Off.  In the end, I found a number of things that stood out the most for me in this test.

  1. All of the modular cooling solutions are more efficient than traditional datacenter designs.
  2. As an industry, we need to keep pushing the inlet temperatures higher.  Our server fans did not kick into higher speed even with an inlet temperature of 80 degrees F.
  3. If we can get these temperatures higher, it opens a huge amount of the world to economizers.
  4. The collaboration and effort from all involved was incredible, even from fierce competitors.

 

Lastly, I realized that we have just started.  We need to continue these collaborative tests and challenge each other to push the envelope. At the end of the presentation & panel, I put the challenge out to our competitors and partners. Let's get more equipment and more solutions side by side in Chill-Off 2: containers, ducting, mixed vendor hardware loads and locations, highly optimized equipment via virtualization, raised air and water temperatures, 30kW loads, plus whatever else we can come up with. I want to drive more independent content so that everyone who is contemplating datacenter construction or retrofits will be able to make informed decisions.

I will be soliciting input via a roundtable at the next AFCOM Northern California chapter meeting. Stay tuned for details.  If you are not an AFCOM member, please contact Craig Easley (ceasly@afcomnorcal.org) to get set up.  I'm looking forward to an even better event next year.

Stay tuned.

PS. I gave a tour to some reporters and industry analysts the morning of the SVLG event.  One of the reporters posted a video of the tour on YouTube.  Check out the two parts of the tour here.


 

PPS. Deborah Grove, from Grove Associates, interviewed some of the presenters during the SVLG event.  See the summary video here.


Monday Jun 09, 2008

The Role of Modularity in Datacenter Design


In June of 2006 we presented our proposal to Jonathan Schwartz's staff for the largest, most complex and aggressive datacenter consolidation in Sun's history.  We had just completed a year's worth of great datacenter projects in the Czech Republic, China, the UK, and Norway, and we were raring to go. The proposal said that in just 12 months we would consolidate 202,000 square feet of datacenter space from our Newark & Sunnyvale, CA campuses into less than 80,000 square feet of new datacenter space on our Santa Clara, CA campus. At the end of that meeting we received the approval for the project and Jonathan asked one final question: "Where's the book that I can hand out to customers?"  Needless to say, we had our hands full with the California consolidation and the new acquisition of StorageTek.



I am happy to say that today we finally finished that task. You can download the blueprint here:


Energy Efficient Datacenters: The Role of Modularity in Datacenter Design.


In August, 2007 we finished the California consolidation project on schedule, under budget and with the ability to almost triple our densities as our load increased all while maintaining a brutally efficient PUE. We had also completed an Eco Launch that highlighted some of our successes in the form of Solution Briefs and a Datacenter Tour Video. We were also well underway with the next largest consolidation, Louisville, CO (StorageTek) to Broomfield, CO. In addition, we received a deluge of requests for tours of the new Santa Clara datacenter. Our customers, industry and partners wanted to see what we had done first hand.


We started on the blueprint in January 2008, in addition to the 40+ active projects we had globally. We knew this was important and we wanted to fulfill the commitment we had made to Jonathan, albeit later than he had likely expected. In the end, I am really glad we waited. The Blueprint is full of examples of what we did right, what we did wrong and what we believe the future holds for datacenter design. Remember, we are the company that solves our customers' technical problems with our technology innovations, but we are also generating more heat in the datacenter with those solutions. This blueprint is a guide to help our customers understand our approach and the lessons learned along the way. The datacenter is a complete ecosystem that requires a dynamic balance between the IT load and support load to enable flexibility and ensure efficiencies for both economic and ecological gains. In my job, my team is blessed to work with the product groups who are building the next generation equipment.  These are the products that will be rolling into our customers' datacenters 1-3 years from now.  In other words, it's like having a crystal ball.  We have tomorrow's technology in our datacenters today, including the power, cooling and connectivity challenges that come along with it.


To date, we have had over 2,200 people walk through Santa Clara. The Datacenter Tour and Solution briefs have been downloaded over 12,000 times from the sun.com website.  We also distribute this content on memory sticks to the thousands of people on the tours and different conferences we speak at.


This blueprint is the first of nine chapters that will be released over the next 12 months. I encourage you to give your opinions and suggestions or raise your questions and concerns through this blog entry, email or the blueprint wiki site.


Stay tuned. This is just getting fun... :-)

Sunday Apr 20, 2008

I'll show you mine...



 

So how efficient is your datacenter?


Last month I received some pretty cool news.  The Chill-Off we have been hosting for the Silicon Valley Leadership Group (SVLG) in their datacenter demonstration project was completed.  This was a head-to-head test against APC In-Row, Liebert XDV, Spraycool, Rittal Liquid Racks and the IBM Rear Door Heat Exchanger. Sun was the host that provided the datacenter, plant, compute equipment, and support.  Lawrence Berkeley National Labs (LBNL) was the group conducting the test on behalf of the California Energy Commission (CEC).  The results from this test will be published in a report in June of this year, and we will be hosting the event.

But one piece of information came out of this that I could not wait to share.  As part of the chill-off, LBNL did a baseline of our plant in Santa Clara.  Mike Ryan on my staff then captured the usage data for the remaining portions of the datacenter.  This gave us our PUE or DCiE (pick your favorite) number.

 

Surprise, Surprise!

I knew that our datacenter would be efficient because of the way we had designed it, but I did not have any data to back it up yet.  We had been telling our customers, and others who toured the center, that we were pretty sure we would be under the industry-targeted PUE of 2 (created by Christian Belady from the Green Grid).  That was a conservative number.  But when I got the data back, even I was surprised at how efficient it was.

 

We achieved a PUE of 1.28!

 

That was 36% more efficient than the Uptime Institute target (2) and almost 50% more efficient than the standard PUE (2.5)!

For those of you who may not be familiar with PUE, it is Power Usage Effectiveness, The Green Grid's measurement of efficiency in the datacenter: total facility power divided by IT equipment power.  In other words, how much power do you use to run your equipment.  For a PUE of 2, you need 1 watt of support power (chillers, UPS, transformers, etc.) for every 1 watt of IT equipment power.  That means if you have a 1MW datacenter, you would only have 500kW of power for compute equipment.

This means that with a PUE of 1.28 our datacenter can run 798kW of equipment load and only require 225kW of support load, for a total load of 1,023kW.  In a datacenter with a PUE of 2, running 798kW of equipment load would require 798kW of support load, for a total of 1,596kW.

Bottom line: we use 573kW less support power to operate the same equipment.  This equates to a power bill that is roughly $400k less per year (using the industry average of $0.08/kWh).
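To make that arithmetic easy to check, here is a minimal sketch of the math in Python (the function and variable names are mine, and the figures above round the ~223kW computed support load up to 225kW):

    # PUE = total facility power / IT equipment power  (DCiE is simply 1/PUE)
    def support_load_kw(it_load_kw, pue):
        """Non-IT (support) power implied by a given PUE."""
        return it_load_kw * (pue - 1.0)

    it_load_kw = 798.0                          # measured IT equipment load
    ours = support_load_kw(it_load_kw, 1.28)    # ~223 kW (rounded to 225 kW above)
    target = support_load_kw(it_load_kw, 2.0)   # 798 kW at the PUE-of-2 target
    saved_kw = target - ours                    # ~575 kW less support power

    # Annual cost difference at the industry-average rate of $0.08/kWh
    savings_per_year = saved_kw * 8760 * 0.08   # roughly $400k

    print(f"Support power saved: {saved_kw:.0f} kW")
    print(f"Annual savings: ${savings_per_year:,.0f}")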

I don't know about you, but this solidifies the statement we have been making all along: economics drive the decisions in datacenters.  But if you can build efficiencies into your datacenters, you will achieve both economic and ecological results.  This PUE of 1.28 not only reduces our overall operating expense every year, it also lowers our carbon emissions substantially!  We're only using the power we have to.  No Eco hype, this is Eco reality.

Keep in mind that this PUE of 1.28 was achieved in our highest density datacenter globally.  It is also built on slab.  No raised floors to be seen.  The other aspect is that the datacenter is only 60% full (physically). We moved all of the equipment from Newark, CA but are continuing to add more equipment from our other campuses.   It is also a heterogeneous datacenter.  We have a mixture of all types of equipment from many vendors. Our plant is designed to 9MW (phase I) and we're drawing 3.5MW so far.  So, we can achieve extreme efficiencies with a mixture of equipment, even when we're not running at max capacity.  (Note: the remaining 13 datacenter rooms that make up the remaining load were not measured in the chill-off, but are all fed by the same efficient plant).

In support of our Eco Strategy, "Innovate, Act & Share", I want to have full disclosure as I share. I want everyone to understand that this room is a Tier I datacenter.  We don't have as much redundancy as higher Tier level datacenters, but the design approach is the same.  Our Variable Primary Loop chiller plant design is the same one we would use in our Tier III sites.  We have removed an entire set of pumps and reduced the pipe distances because everything is dynamic from the plant to the rack. We only use the energy we have to. For all Tier level datacenters we would also choose very efficient transformers, UPSs and other components.  So, no matter what Tier level you are trying to build, you can achieve these efficiencies if you embrace next generation designs.  But, regrettably, many design and construction firms out there are hesitant to do this.  You need to be specific about what you want and squelch the naysayers.

Here is the chart that shows the details.  PDF version is here.


So, the question is: I've shown you mine. Can you show me yours?

Better yet, would you like Sun to help design your datacenter to achieve the same efficiencies?  Let us drive the next generation's physical and technical solutions for you.  Sun's Eco Virtualization practice can do just that.  Email me at dean.nelson@sun.com and I'll tell you how.

 

Sunday Oct 21, 2007

Not Rocket Science!

Man, what a month.

Earlier this month, I did an interview with Contrarian Minds editor Al Riske. He captured my ramblings and then published the following report titled "Not Rocket Science". It was an honor to be on the same website as people like Scott McNealy, Jonathan Schwartz, Greg Papadopoulos, Radia Perlman and James Gosling, to name a few. Talk about some brain power. :-)

Instead of complicating things, my team and I have really tried to simplify them when it comes to datacenter design philosophies that support the equipment of today and tomorrow. Take a look here:

    Not Rocket Science...

I won't be building a rocket any time soon. Then again that may be kinda fun...

Sunday Sep 30, 2007

The liquid sky is falling!

How many of you have heard that very soon, we MUST have liquid directly to racks to cool them? I attribute this to the same hysteria as "the sky is falling".


How many skyscrapers will you be building?


Today, the industry average for rack heat loads is between 4-6kW. For datacenters, the racks that have loads >15kW are the minority. This is analogous to having a small number of skyscrapers in a city, with a huge number of smaller buildings surrounding them. While there are many new boxes coming out that are increasing the load per rack, the average load will still only increase in 2-4kW increments over time. Skyscrapers aren't built overnight. There will not be a rapid replacement to drive racks to an average of 20kW unless you are running a computation-hungry facility that uses uniform equipment that is >90% optimized. The standard replacement of equipment in asset life cycles will drive the average load per footprint up, but the majority of companies will not have the average load per rack at these levels. The ramp is getting steeper, but it will take time, and money, to happen.


Future rack trends


With that said, there are, and will be, configurations that require cooling for >30kW per footprint. You can use traditional localized cooling for racks up to 30kW in a mixed environment. But when you pass this point, you can no longer force enough air for this equipment to be cooled. The economics also start to hurt. Fans and chiller plants forcing air to extract heat account for 33% of datacenter operational costs. Liquid cooling really is the next step. But I believe the liquid will be refrigerant, not water. Refrigerant is more efficient across coils, giving even cooling distribution to components. It is also much less risky in a datacenter since it changes to a gas if there is a leak. In conjunction with the cooling, power capacities will also need to increase. I believe it will be 480V direct, rather than 208V. This is the coming trend for racks in datacenters. My projection is 2 years for racks like this to be readily available from numerous vendors, but >7 years for the average load per rack to get to these levels.

My opinion is that companies do not need to consider liquid-cooled racks until the average load breaks 15kW. With tightly coupled air cooling and containment, you can achieve very efficient solutions for these skyscrapers, as well as the other racks in the datacenter. Here is a picture of our highest density datacenter, which is currently cooling loads with an average of 12kW/cabinet. This design can scale to an average of 18kW per rack. Keep in mind that we can roll 30kW racks into this solution. The key here is that the average is 18kW. So you can have a wide range of rack loads in the same datacenter and still achieve cooling efficiencies. This 2,132 square foot room will have 720kW of equipment load. And...it is on a SLAB! It is the highest density, and highest efficiency, datacenter in our portfolio of 1.3M square feet.
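As a rough sanity check on those density figures, here is a small back-of-the-envelope calculation in Python (the rack count is my own illustrative estimate, not a number from the original design):

    # Density math for the 2,132 sq ft room described above
    room_sqft = 2132.0
    equipment_load_kw = 720.0
    design_avg_rack_kw = 18.0   # design scales to an 18 kW average per rack

    watts_per_sqft = equipment_load_kw * 1000 / room_sqft         # ~338 W/sq ft, on slab
    racks_at_design_avg = equipment_load_kw / design_avg_rack_kw  # ~40 racks at the full design average

    print(f"{watts_per_sqft:.0f} W/sq ft, roughly {racks_at_design_avg:.0f} racks at an 18 kW average")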

So, don't believe the hype or doomsday projections. The sky is not falling. This is a very predictable curve that doesn't require you to take drastic action to solve your datacenter cooling loads. You just need to keep it simple. Our Santa Clara datacenter design has covered the current equipment loads and scales for next generation equipment, today.



<script type="text/javascript">
var sc_project=3009291;
var sc_invisible=0;
var sc_partition=32;
var sc_security="7cddf1df";
</script>

<script type="text/javascript" src="http://www.statcounter.com/counter/counter_xhtml.js"></script>

free statistics


View My Stats
