Diving in at the deep end - testing scalability of the xVM Ops Center at TACC

xVM Ops Center is the software that we are busy building for managing data centers (yes, in the plural).

It needs to scale up to thousands and thousands of managed systems, spread across multiple geos.

One question we faced was how to test at this kind of scale: individual engineers simply don't have a setup like that to test against.

For initial development testing, the approach that took us beyond our own small labs (which contain a real heterogeneous mix of machines, already a good test of heterogeneity) was to create a pseudo-data-center: a very simple software simulation of the managed resources in a subnet, with the basic management characteristics and configuration of a real set of machines. In our implementation, this comes down to a JMX MBeanServer with an MBean for each managed resource (chassis, server, OS, ...) which implements the same management interface that we require to be implemented by drivers for real managed systems.
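To make that concrete, here's a minimal sketch of what one pseudo resource could look like as a Standard MBean; the interface and attribute names are purely illustrative, not our actual driver interface:

    // PseudoServerMBean.java -- the management interface a driver expects,
    // mirroring the controls a real server would expose (illustrative only).
    public interface PseudoServerMBean {
        String getFirmwareVersion();
        String getPowerState();
        void powerOn();
        void powerOff();
    }

    // PseudoServer.java -- a trivial in-memory stand-in for one real machine.
    public class PseudoServer implements PseudoServerMBean {
        private volatile String powerState = "off";
        private volatile String firmwareVersion = "1.0.0";

        public String getFirmwareVersion() { return firmwareVersion; }
        public String getPowerState()      { return powerState; }
        public void powerOn()              { powerState = "on"; }
        public void powerOff()             { powerState = "off"; }
    }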

Developers can then configure a pseudo-data-center of their own (it's just a JVM) and install the pseudo-data-center driver into the product, which allows for the discovery and management of pseudo-resources instead of real resources - our drivers simply query and manipulate the MBeans using JMX Remoting, instead of manipulating real resources on the network over IPMI, SNMP, ssh, or other management protocols.
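And a rough sketch of the two halves together: the pseudo-data-center JVM registers a few thousand such MBeans and exposes them through a JMX connector, while the driver side discovers and manipulates them remotely. The class names, object-name domain, and port are all made up for illustration:

    import java.lang.management.ManagementFactory;
    import java.rmi.registry.LocateRegistry;
    import java.util.Set;
    import javax.management.MBeanServer;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.*;

    // The pseudo-data-center: just a JVM hosting one MBean per pseudo resource,
    // reachable over JMX Remoting (RMI connector).
    public class PseudoDataCenter {
        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

            // Register a few thousand pseudo servers -- each is just an MBean.
            for (int i = 0; i < 2000; i++) {
                mbs.registerMBean(new PseudoServer(),
                        new ObjectName("pseudo.dc:type=server,id=" + i));
            }

            // Expose the MBeanServer so a driver can reach it remotely.
            LocateRegistry.createRegistry(9999);
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/pseudo-dc");
            JMXConnectorServerFactory
                    .newJMXConnectorServer(url, null, mbs)
                    .start();
            System.out.println("Pseudo-data-center up with 2000 pseudo servers");
            Thread.sleep(Long.MAX_VALUE);   // keep the JVM alive
        }
    }

    // Driver side: discover and manage the pseudo resources over JMX,
    // exactly where a real driver would speak IPMI, SNMP or ssh.
    class PseudoDriver {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9999/pseudo-dc");
            try (JMXConnector c = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = c.getMBeanServerConnection();
                Set<ObjectName> servers = conn.queryNames(
                        new ObjectName("pseudo.dc:type=server,*"), null);
                for (ObjectName server : servers) {
                    conn.invoke(server, "powerOn", null, null);
                }
                System.out.println("Discovered and powered on " + servers.size());
            }
        }
    }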

This lets us test scale up to thousands of pseudo systems... but who says that this is going to work on a set of real data-centers? And who says that things like OS provisioning, which are typically huge network resource hogs, will work as expected in the real world?

Well, one answer is to increase the level of reality of the pseudo-data-center, and another is to go test against real systems.

Both have been done.

 


What better way to dive in at the deep end than to deploy a preview version of Ops Center at one of the world's largest super-computing data-centers and give it a test-drive...
 

[Image: diving in at the deep end at TACC]

 

Sun is providing TACC (the Texas Advanced Computing Center) with a Constellation-class super-computer called "Ranger", with 3,936 nodes and 123 TB of memory. That's quite a system to test our product's scalability on!

 




We've been collaborating with this project from the early days, and are busy using a preview version of Ops Center on the Ranger supercomputer on site to help them with their systems management and firmware provisioning; at the same time, we're able to test our scalability.

[Image: Ranger wiring at TACC]
[Image: Ops Center screenshot at TACC]



Initial results are extremely promising... of course, not everything has worked right the first time, and we've found and fixed various issues, ranging from locking and synchronization problems to queue-length issues and job weights... this is a great "dive-in-at-the-deep-end" opportunity to validate our product ahead of getting it to market.


We're also doing internal data-center-level testing using the Sun Grid Compute Facility which, whilst not giving us access to such a big system, lets us test other topologies, hardware, and system configurations.
[Image: Sun Grid Compute Facility hardware]

 


...by combining the above approaches with careful algorithmic design against our scalability requirements, we have a high level of confidence that we can hit this nail on the head.
