Tuesday Oct 09, 2007

Performability analysis of T5120 and T5220

In complex systems, we must often trade off performance against reliability, availability, or serviceability. In many cases, a system design must satisfy both performance and availability requirements. We use performability analysis to examine the performance versus availability trade-off. Performability is simply the ability to perform. A performability analysis combines the performance characterization of a system in each of its possible degraded states with the probability that the system will be operating in those degraded states.

The simplest performability analysis is often appropriate for multiple-node, shared-nothing clusters which scale performance perfectly. For example, in a simple web server farm, you might have N servers, each capable of delivering M pages. Disregarding other bottlenecks in the system, such as the capacity of the internet connection to the server farm, we can say that N+1 servers will deliver M \* (N + 1) performance. Thus we can estimate the aggregate performance of any number of web servers.

We can also perform an availability analysis on a web server. We can build Markov models which consider the reliability of the components in a server and their expected time to repair. The output of the models provides the estimated time per year that each web server will be operational. More specifically, we will know the staying time per year for each of the model states. For a simple model, the performance reward for an up state is M and for a down state is 0. A system which provides 99.99% (four-nines) availability can be expected to be down for approximately 53 minutes per year and up for the remainder.
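
To make the four-nines arithmetic concrete, here is a minimal Python sketch of the conversion (the 525,600-minute year matches the staying-time totals in the model below; the function name is ours, not from any particular modeling tool):

```python
# Minimal sketch: convert a steady-state availability figure into the
# expected annual downtime.
MINUTES_PER_YEAR = 525_600  # 365 days

def annual_downtime_minutes(availability: float) -> float:
    """Expected minutes per year spent in the down (reward = 0) state."""
    return (1.0 - availability) * MINUTES_PER_YEAR

print(round(annual_downtime_minutes(0.9999), 1))  # four-nines -> 52.6 minutes
```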

For a shared-nothing cluster, we can further simplify the analysis by ignoring common fault effects. In practice, this means that a failure or repair in one web server does not affect any other web servers. In many respects, this is the same simplifying assumption we made with performance, where the performance of a web server does not depend on any of the other web servers.

The shared-nothing cluster availability model will contain the following system states and the annual staying time in each state: all up, one down (N-1 up), two down (N-2 up), three down (N-3 up), and so on. The availability model inputs include the unscheduled mean time between system interruptions (U_MTBSI) and mean time to repair (MTTR) for the nodes. We often choose an MTTR value by considering the cost of service response time. For many shared-nothing clusters, a service response time of 48 hours may be reasonable – a value which may not be reasonable for a database or storage tier. Model results might look like this:

| System State | Annual Staying Time (minutes) | Cumulative Uptime (%) | Performance Reward |
| --- | --- | --- | --- |
| All up | 521,395.20 | 99.2 | M \* N |
| 1 down | 4,162.75 | 99.992 | M \* (N - 1) |
| 2 down | 39.95 | 99.9996 | M \* (N - 2) |
| 3 down | 2.00 | 99.99998 | M \* (N - 3) |
| > 3 down | 0.11 | 100 | < M \* (N - 4) |
| Total | 525,600.00 | 100 | |

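The exact staying times come from the Markov model, but under the independence assumption above, a simple binomial approximation gets close. Here is a minimal Python sketch; the node count and the U_MTBSI/MTTR inputs are invented for illustration, and `state_staying_times` is our own name, not a tool's API:

```python
# Minimal sketch, not the Markov model itself: with independent nodes, the
# steady-state per-node availability is A = U_MTBSI / (U_MTBSI + MTTR), and
# the probability that exactly k of n nodes are down is binomial.
from math import comb

MINUTES_PER_YEAR = 525_600  # 365 days, matching the table's total

def state_staying_times(n, u_mtbsi_hours, mttr_hours):
    """Annual staying time (minutes) for each 'k nodes down' state."""
    a = u_mtbsi_hours / (u_mtbsi_hours + mttr_hours)  # per-node availability
    return {k: comb(n, k) * (1 - a) ** k * a ** (n - k) * MINUTES_PER_YEAR
            for k in range(n + 1)}

# Hypothetical inputs: 10 nodes, 60,000-hour U_MTBSI, 48-hour MTTR
for k, minutes in state_staying_times(10, 60_000.0, 48.0).items():
    print(f"{k} down: {minutes:12.2f} minutes/year")
```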

Now we have enough data to evaluate the performability of the system. For the simple analysis, the performability is the cumulative uptime across all states which still deliver the minimum required performance. We can then compare various systems on a performability basis.
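
In code, the simple evaluation is just a sum over the qualifying states. This sketch continues the example above; `performability_pct` is again our own name:

```python
# Sketch of the simple evaluation: total the staying time of every state
# whose performance reward m * (n - k) meets the requirement.
def performability_pct(staying, m, n, required):
    """Percent of the year the cluster delivers at least `required` work."""
    ok = sum(t for k, t in staying.items() if m * (n - k) >= required)
    return 100.0 * ok / sum(staying.values())

# With one spare node (n = 6 + 1), the '1 down' state still meets a
# 6-node requirement, so its staying time counts toward performability.
times = state_staying_times(7, 60_000.0, 48.0)  # from the sketch above
print(performability_pct(times, m=1.0, n=7, required=6.0))
```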

We have modeled the new Sun SPARC Enterprise T5120 and Sun SPARC Enterprise T5220 servers against the venerable Sun Fire V490 servers. For this analysis we chose a performance benchmark whose metric showed that we needed 6 T5120 or T5220 servers to match the performance of 9 V490 servers. We chose to overprovision by one server, which is often optimal for such architectures. The performability results are:

| Servers | Units | Performability (%) |
| --- | --- | --- |
| Sun SPARC Enterprise T5120 | 6 + 1 | 99.99988 |
| Sun SPARC Enterprise T5220 | 6 + 1 | 99.99988 |
| Sun Fire V490 | 9 + 1 | 99.99893 |

You might notice that the T5120 and T5220 have the same performability results. This is because they share the same motherboard design, disks, power supplies, etc. It is much more interesting to compare these to the V490. Even though we use more V490 systems, the T5120 and T5220 solution provides better performability. Fewer, faster, more reliable servers should generally have better performability than more, slower, less reliable servers.
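
To illustrate how such a comparison could be set up with the sketches above, here is a hedged example; the U_MTBSI and per-server performance inputs are invented for illustration and are not the figures behind the published results:

```python
# Hypothetical comparison in the spirit of the table above. The reliability
# and performance inputs are invented for illustration only.
t_series = state_staying_times(7, 60_000.0, 48.0)   # 6 + 1 configuration
v490 = state_staying_times(10, 40_000.0, 48.0)      # 9 + 1 configuration
print(performability_pct(t_series, m=1.5, n=7, required=9.0))  # 6 nodes needed
print(performability_pct(v490, m=1.0, n=10, required=9.0))     # 9 nodes needed
```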


Monday Apr 23, 2007

Mainframe inspired RAS features in new SPARC Enterprise Servers

My colleague, Gary Combs, put together a podcast describing the new RAS features found in the Sun SPARC Enterprise Servers. The M4000, M5000, M8000, and M9000 servers have very advanced RAS features, which put them head and shoulders above the competition. Here is my list of favorites, in no particular order:

  1. Memory mirroring. This is like RAID-1 for main memory. As I've said many times, there are 4 types of components which tend to break most often: disks, DIMMs (memory), fans, and power supplies. Memory mirroring brings the fully redundant reliability techniques often used for disks, fans, and power supplies to DIMMs.
  2. Extended ECC for main memory.  Full chip failures on a DIMM can be tolerated.
  3. Instruction retry. The processor can detect faulty operation and retry instructions. This feature has been available on mainframes, and is now available for the general-purpose computing market.
  4. Improved data path protection. Many improvements here, along the entire data path.  ECC protection is provided for all of the on-processor memory.
  5. Reduced part count from the older generation Sun Fire E25K.  Better integration allows us to do more with fewer parts while simultaneously improving the error detection and correction capabilities of the subsystems.
  6. Open-source Solaris Fault Management Architecture (FMA) integration. This allows system administrators to see what faults the system has detected, and the system will automatically heal itself.
  7. Enhanced dynamic reconfiguration.  Dynamic reconfiguration can be done at the processor, DIMM (bank), and PCI-E (pairs) level of granularity.
  8. Solaris Cluster support.  Of course Solaris Cluster is supported, including clustering between Solaris containers, dynamic system domains, or chassis.
  9. Comprehensive service processor. The service processor monitors the health of the system and controls system operation and reconfiguration. This is the most advanced service processor we've developed. Another welcome feature is the ability to delegate responsibilities to different system administrators with restrictions so that they cannot control the entire chassis.  This will be greatly appreciated in large organizations where multiple groups need computing resources.
  10. Dual power grid. You can connect the power supplies to two different power grids. Many people do not have the luxury of access to two different power grids, but those who have been bitten by a grid outage will really appreciate this feature.  Think of this as RAID-1 for your power source.

I don't think you'll see anything revolutionary in my favorites list. This is due to the continuous improvements in the RAS technologies.  The older Sun Fire servers were already very reliable, and it is hard to create a revolutionary change for mature technologies.  We have goals to make every generation better, and we've made many advances with this new generation.  If the RAS guys do their job right, you won't notice it - things will just keep working.
