Monday Oct 13, 2008

T5440 Fault Management

Today Sun announced the T5440 platform, centered around the UltraSPARC T2 Plus processor. The T5440 is the big brother of the T5140/T5240 systems, packing 256 strands of execution into a single system. And I'm happy to report that Solaris' Predictive Self Healing feature (also known as the Fault Management Architecture, or FMA) has been extended to cover the T5440.

With respect to fault management, the T5440 inherits both the T5140/T5240 FMA features and the T5120/T5220 FMA features. Just some of the features included are:

  • diagnosis of CPU errors at the strand, core, and chip level
  • offlining of problematic strands and cores
  • diagnosis of the memory subsystem
  • automatic FB-DIMM lane failover
  • extended IO controller diagnosis
  • identification of local vs. remote memory errors
  • cryptographic unit offlining

The T5440 introduces a coherency interconnect tying the four UltraSPARC T2 Plus processors together. The interconnect ASICs have their own set of error detectors, and the fault management system covers interconnect-detected errors as well.

Additionally, the FMA Demo Kit has been updated for the T5440. For those not familiar with the kit, it provides a harness for executing fault management demos covering CPU, Memory and PCI errors. It runs on out-of-the-box Solaris - no custom hardware, error injectors, or configuration necessary. It's a great - and, if using the simulation mode, safe - way to get familiar with how Solaris responds to certain types of errors.
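
For instance, to preview the CPU demo without reporting any real faults, you should be able to run the kit without the -L live-mode flag used later in this post. Treat the exact invocation as an assumption on my part and check the kit's documentation:

# ./run_fmdemo -d cpu    ### simulated CPU error demo; assumes omitting -L selects simulation mode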


Example: T5440 Coherency Plane ASIC Error
If one of the coherency ASICs detects an error in the coherency traffic between processors, the system may or may not continue to operate, depending on the nature of the error. A prime tenet of the hardware is to disallow propagation of bogus transactions - we don't want data corruption. An example of a fatal error on the coherency plane:

SUNW-MSG-ID: SUN4V-8001-R3, TYPE: Fault, VER: 1, SEVERITY: Critical
EVENT-TIME: Wed Oct 24 15:28:05 EDT 2007
PLATFORM: SUNW,T5440, CSN: -, HOSTNAME: wgs48-88
SOURCE: eft, REV: 1.16
EVENT-ID: bc7a8eb5-86be-c138-eece-e65e57840b95
DESC: The ultraSPARC-T2plus Coherency Interconnect has detected a serious
communication problem between it and the CPU.
Refer to http://sun.com/msg/SUN4V-8001-R3 for more information.
AUTO-RESPONSE: No automated response.
IMPACT: The system's integrity is seriously compromised.
REC-ACTION: Schedule a repair procedure to replace the affected component,
the identity of which can be determined by using fmdump -v -u <EVENT_ID>.
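
Following the REC-ACTION, plugging the EVENT-ID from the message above into fmdump identifies the suspect:

# fmdump -v -u bc7a8eb5-86be-c138-eece-e65e57840b95

The -v output lists the suspect's fault class and FRU, which tells you which component needs to be replaced.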

Tuesday Sep 16, 2008

sun4v Chip Offline Using the FMA Demo Kit

I met with a customer today - and it's been far too long since I'd done that. They were putting their T5140/T5240 systems through their paces, including a variety of fault scenarios. One of the scenarios they wanted to test is when all strands on a single UltraSPARC-T2plus chip are offlined. First, I explained that the fault policies for FMA on T5140/T5240 (and those of all sun4v platforms presently) do not offline all strands on an entire chip. However, the situation could arise if there were a series of failures at the core level. A contrived situation, perhaps, since after the first or second core failure, corrective action would likely take place (we're all very good about addressing faulted components in our systems, right?).

Enter the FMA Demo Kit. The demo for T2plus processors in the kit performs a core offline. The kit finds the lowest-numbered online processor strand and uses that strand as the simulated error detector. By running a series of FMA demos, we can have FMA offline all strands on a chip.
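
To see which strand the kit will pick as the detector, you can approximate its selection with psrinfo (a sketch of the selection logic, not the kit's actual code):

# psrinfo | awk '$2 == "on-line" { print $1; exit }'    ### lowest-numbered online strand
0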

WARNING: This uses the Demo Kit in "live" mode. Running it in live mode on production systems is not recommended. The faults generated will be transported to your SP and to any other monitoring software (e.g. SNMP).

Assuming a healthy, full-up T5140/T5240 with 128 strands online:

# ./run_fmdemo -d cpu -L    ### offlines VF0 core 0
# ./run_fmdemo -d cpu -L    ### offlines VF0 core 1
# ./run_fmdemo -d cpu -L    ### offlines VF0 core 2
# ./run_fmdemo -d cpu -L    ### offlines VF0 core 3
# ./run_fmdemo -d cpu -L    ### offlines VF0 core 4
# ./run_fmdemo -d cpu -L    ### offlines VF0 core 5
# ./run_fmdemo -d cpu -L    ### offlines VF0 core 6
# ./run_fmdemo -d cpu -L    ### offlines VF0 core 7

The end result:

# psrinfo | grep faulted | wc -l
      64
# psrinfo | grep faulted
0       faulted   since 09/15/2008 12:42:03
1       faulted   since 09/15/2008 12:42:03
2       faulted   since 09/15/2008 12:42:03
...
63      faulted   since 09/15/2008 12:42:03

Of course, after running the demo kit in live mode, you've got cleanup to do with 'fmadm repair'.
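
A minimal cleanup pass looks something like this (one repair per fault event; the EVENT-IDs come from fmadm's own listing):

# fmadm faulty               ### list outstanding faults and their EVENT-IDs
# fmadm repair <EVENT_ID>    ### repeat for each fault; repaired strands should return to on-line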

And yes... I'm aware that a straight psradm command can also offline a slew of strands. A small semantic difference is that the psrinfo status would read off-line instead of faulted, as shown below.
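
For comparison, a sketch of the psradm route (this really offlines the strands rather than simulating a fault, so the same production-system caution applies):

# psradm -f `psrinfo | awk '$1 < 64 { print $1 }'`    ### force strands 0-63 (all of VF0) off-line
# psrinfo | grep off-line | wc -l
      64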

:wq

Tuesday Sep 09, 2008

FMA on xVM

FMA support for xVM hit snv_99 and the xVM gates very early this morning (PDT). It provides near-native fault management capabilities for CPU and Memory. A very thorough flag day notice was issued, which should hit http://opensolaris.org/os/community/on/flag-days/96-100/ in a day or two.

Update: Very nice description of xVM/FMA in Gavin's blog.

:wq
