Sun's Open Network System Design Approach to Virtualization

By Ron Graham, Systems Technical Marketing


This post is about how to build a medium-to-large virtualization system with all the great new technology we have today. I will walk you through why virtualization matters, define the components I would use, and then put them all together.

Many years ago, when I was working with customers, we used to talk about virtualization technology, and most customers were just kicking the tires. Today, customers are implementing virtualization in their data centers with great success. Server virtualization projects are everywhere, and it's hard to talk to customers without the subject coming up. The goal of virtualization is to reduce the amount of hardware while increasing its availability. Hopefully you get that green feeling when you talk about virtualization and consolidation.

Most IT shops have calculated that the average CPU utilization of the servers in their data centers is less than 10 percent. That is not desirable for any business. Think about owning a rental car business and only renting out the cars 10% of the time; you would stand to make a lot more money renting them out 70% of the time. Put another way, if you had a tool that saved you money, you would want to use it far more than 10% of the time. By increasing system utilization through consolidation, you also save money on power and cooling, and because you have fewer machines, your maintenance costs decrease. Hopefully you get the idea.
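To make that math concrete, here is a quick back-of-the-envelope sketch; the server count and utilization figures are illustrative assumptions, not customer data:

    # Back-of-the-envelope consolidation math with illustrative numbers.
    servers = 100              # physical servers today (assumed)
    avg_utilization = 0.10     # ~10% average CPU utilization
    target_utilization = 0.70  # utilization goal after consolidation

    # The total work stays the same; we just pack it onto fewer hosts.
    hosts_needed = servers * avg_utilization / target_utilization
    print(f"Consolidate {servers} servers onto ~{hosts_needed:.0f} hosts")
    # -> Consolidate 100 servers onto ~14 hosts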

Sun has been a leader in virtualization, in one form or another, for over 20 years. I say this because a lot of companies can build two-socket virtualization systems. But find a company willing to invest in scaling x86 systems up to eight sockets, and you have found a company with the vision to solve difficult problems. This is where I see the true value of Sun and a real differentiator: not only do we design and build smaller two-socket systems, we also have great four- and eight-socket systems.

Intel has just announced the Nehalem CPU microarchitecture, and Sun has designed some very elegant virtualization solutions around it. The solution I will cover is virtualization on a new network platform Sun has developed for the Sun Blade 6000 chassis. Sun has designed a pair of ASICs into a Network Express Module (NEM) that provides up to 10 Gb Ethernet to the server modules, or blades. This is an inexpensive way to reduce the number of cables by a factor of 10 compared to 1 Gb Ethernet.

Sun has also come out with a storage technology called the Unified Storage system. This system has some great features, like easy-to-use DTrace analytics, which let users drill down into their storage system and graphically figure out where the problem areas are. It is also easy to set up and use. I set up an NFS server on the 7410, and it literally took me a few minutes to create and configure my storage pool. After entering the credentials in my virtualization management software, I was able to see and use the new storage pool. Truly remarkable!

What a great time to implement virtualization. With the Nehalem architecture we can put up to twice as many virtual machines on a server as on older two-socket architectures (based on internal testing). This means running fewer servers and saving money. With the new Sun Virtualized NEM, we save on setup and cable aggregation, and there is zero network administration. Storage is key to virtualization, and with our Unified Storage System we can easily set up, debug, and manage our storage infrastructure.

So this is how I see this particular solution playing out. Our Sun Blade 6000 chassis can hold 10 server modules. The new server module from Sun based on Intel Nehalem technology is the Sun Blade X6270: two CPU sockets with up to 144 GB of RAM and 270 Gbps of I/O. Sweet. Memory is key in virtualization; most of my customers are more memory-bound than CPU-bound. The DDR3 memory in this new architecture lets virtualization engines really perform on this platform.
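Since memory is usually the limiting factor, here is a rough sizing sketch; the per-VM and hypervisor memory figures are assumptions for illustration, not measured numbers:

    # Memory-bound VM sizing for one Sun Blade X6270 (illustrative).
    blade_ram_gb = 144   # maximum RAM per X6270 blade
    hypervisor_gb = 4    # assumed hypervisor/overhead reservation
    vm_ram_gb = 4        # assumed memory per virtual machine

    vms_per_blade = (blade_ram_gb - hypervisor_gb) // vm_ram_gb
    print(f"~{vms_per_blade} VMs per blade, ~{vms_per_blade * 10} per full chassis")
    # -> ~35 VMs per blade, ~350 per full chassis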

Two Sun Blade 6000 Virtualized Multi-Fabric 10GbE NEMs will provide most of the I/O throughput and redundancy. This way, each blade has two 10GbE ports and two 1GbE ports. For best practice, I recommend the following network connections (see the sizing sketch after the list):

1 Gb - Management
1 Gb - Backup and recovery
1 Gb - VMotion
4 Gb - Data
2 Gb - Storage
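
Here is a small sketch that totals this allocation against the ports each blade gets from the two NEMs (the figures are just the starting-point numbers above):

    # Per-blade network allocation from the list above, in Gb/s.
    allocation = {
        "management": 1,
        "backup_recovery": 1,
        "vmotion": 1,
        "data": 4,
        "storage": 2,
    }
    # Each blade gets two 10GbE ports plus two 1GbE ports from the NEMs.
    port_capacity_gb = 2 * 10 + 2 * 1

    total = sum(allocation.values())
    print(f"Allocated {total} Gb/s of {port_capacity_gb} Gb/s of port capacity")
    # -> Allocated 9 Gb/s of 22 Gb/s of port capacity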

The above is for each server; depending on how much I/O you plan to drive, your throughput needs could vary, but let's use this as a starting point. Best practice also states that some of the above networks should be dedicated. With the Sun Blade 6000, we always have the option of adding two industry-standard ExpressModules and configuring each server any way we want. For example, we can put in combo cards with Fibre Channel and networking, more 10GbE, quad 1GbE, and so on. We have a lot of choices; it all depends on how much money you want to spend and what your current environment consists of.

For argument's sake, let's configure the system with one dual-port 1GbE card for each of the 10 servers. This way, each physical server has two 10GbE and four 1GbE NICs to handle I/O. The configuration would have full failover and redundancy on all NIC ports, with enough headroom to take on the additional load if one should fail. Data would go on one 10GbE port and storage on the second, and the two 10GbE ports could be set up to fail over to each other. The rest of the networks can be distributed across the four 1GbE ports, with VMotion on its own dedicated network; VMotion needs all the bandwidth it can get when moving virtual machines from one server to another.
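A quick way to sanity-check that failover headroom, using the same starting-point numbers (a sketch, not a sizing guarantee):

    # If one 10GbE port fails, the survivor carries both data and storage.
    data_gb, storage_gb = 4, 2   # from the allocation above
    port_gb = 10                 # capacity of the surviving 10GbE port

    surviving_load = data_gb + storage_gb
    headroom = port_gb - surviving_load
    print(f"Failover load: {surviving_load} Gb/s on a {port_gb} Gb/s port "
          f"({headroom} Gb/s of headroom)")
    # -> Failover load: 6 Gb/s on a 10 Gb/s port (4 Gb/s of headroom)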

Looking back at this setup, it's not that complicated to deploy and manage, and from a performance perspective you should have plenty of bandwidth. I like to look at how balanced an architecture is. Starting with CPUs and memory bandwidth: we have some of the fastest CPUs developed for virtualization, the memory has three channels with an on-board memory controller running at 1333 MT/s, and the Sun Blade X6270 has bigger, faster, and more pipes to move data, which is key to virtualization performance. With PCI Express generation 2, our I/O becomes twice as fast as generation 1. Then, with two 10GbE ports and four 1GbE ports per blade, there is plenty of I/O bandwidth to communicate with the rest of the world. Like I said, a balanced architecture.
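To put numbers behind the "balanced" claim, here is the standard theoretical-peak arithmetic for the memory and PCI Express figures above (raw peaks, not measured throughput):

    # Theoretical peak memory bandwidth per Nehalem socket.
    channels = 3                 # DDR3 channels per socket
    transfers_per_sec = 1333e6   # 1333 MT/s
    bytes_per_transfer = 8       # 64-bit-wide channel

    mem_bw_gbs = channels * transfers_per_sec * bytes_per_transfer / 1e9
    print(f"~{mem_bw_gbs:.0f} GB/s peak memory bandwidth per socket")
    # -> ~32 GB/s peak memory bandwidth per socket

    # PCI Express gen2 doubles the per-lane signaling rate over gen1.
    gen1_gts, gen2_gts = 2.5, 5.0          # GT/s per lane
    lane_mbs = gen2_gts * 0.8 / 8 * 1000   # 8b/10b encoding: 80% usable
    print(f"PCIe gen2: {lane_mbs:.0f} MB/s per lane per direction, 2x gen1")
    # -> PCIe gen2: 500 MB/s per lane per direction, 2x gen1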

This blog is only intended to cover the hardware piece; I will blog later about installing the virtualization software. Comparing VMware and Hyper-V on this platform, from both a management and a performance perspective, should be interesting.

Comments:

Wow! This is a great write up on the 7410 and blade system! I'm looking into virtualizing ~180 desktops using VMware View and am seriously looking at the 7410 for storage. I like the 10GbE being available for sure! Especially when you can connect that directly into your blade system!

How many desktops with mid-heavy load VMs would you put on one 7410 with 64GB RAM, 100GB ReadZilla and one JBOD with 10x1TB+2x18GB LogZilla? The desktops will run some CAD software and other standard apps for lab use at a University!

Thanks again! This is a great article!

Posted by Scott Behrends on May 01, 2009 at 01:16 PM PDT #

great article, thanks

RV

Posted by Eremas Kua on July 29, 2009 at 02:17 PM PDT #
