Thursday Nov 10, 2011

Oracle Solaris 11 - New Fault Management Features

Oracle Solaris 11 has been launched today! Here is a quick writeup of some of the new fault management features in Solaris 11.

Friday Apr 01, 2011

Notifications for SMF Instance State Transitions

Something you may or may not have noticed in the Oracle Solaris 11 Express release is the ability to monitor SMF service instance state transitions, such as a transition to the maintenance state or from online to disabled. This leverages the Solaris FMA notification daemons, which were introduced at the same time as this SMF support (previously SNMP notification was delivered as an fmd plugin, and there was no email notification at all).

The first thing to know is that the notification daemons are packaged separately from the fault manager itself. There are separate packages for SMTP notification and for SNMP notification, and each delivers a corresponding SMF service:

# pkg install smtp-notify
# pkg install snmp-notify
# svcadm enable smtp-notify
# svcadm enable snmp-notify
# svcs smtp-notify
# svcs snmp-notify
You need only install the mechanism(s) you want. If the packages are absent you can still configure notification preferences with svccfg as shown below, but notifications will not be delivered until the packages are installed and the services are online.

Next you need to set up your notification preferences. Like many things nowadays these are stored in SMF, and configured with an updated svccfg command line. See the svccfg(1m) manpage for the listnotify, setnotify and delnotify subcommands, as well as smtp-notify(1m) and snmp-notify(1m). For the die-hards there is more detail in smf(5).

While you can be notified of any transition, and configure notifications per-service in addition to globally, the most common setup is global notification for transitions to the maintenance state. I'll use $EMAIL here both to protect the innocent and to stop email address obfuscation from wrecking the example:

# svccfg listnotify -g
# svccfg setnotify -g to-maintenance mailto:$EMAIL
# svccfg listnotify -g
    Event: to-maintenance (source: svc:/system/svc/global:default)
        Notification Type: smtp
            Active: true
            to: john.q.citizen-AT-elsewhere-DOT-com
That configures email notification whenever any service instance transitions to maintenance. You can configure similar global actions for other transitions, such as to-offline, as shown just below.
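For example, to also be notified when instances go offline or leave the online state, something like the following should work (a sketch - the event tags take the to-<state> and from-<state> forms described in smf(5), and $EMAIL is again a stand-in address):

# svccfg setnotify -g to-offline mailto:$EMAIL
# svccfg setnotify -g from-online mailto:$EMAIL
# svccfg listnotify -g
A preference can later be removed with the delnotify subcommand, along the lines of svccfg delnotify -g to-offline.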

If you only want to monitor particular services, or want different actions for different services (such as emailing different aliases), then drop the -g (global) option and name the instance you are interested in:

# svccfg -s svc:/network/http:apache22 setnotify to-offline snmp:active
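To review or remove per-instance settings you can use the same subcommands against that instance; an illustrative sketch (see svccfg(1m) for the exact syntax):

# svccfg -s svc:/network/http:apache22 listnotify
# svccfg -s svc:/network/http:apache22 delnotify to-offline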

Finally, you can notify of FMA problem lifecycle events using the same mechanism:

# svccfg setnotify problem-diagnosed mailto:$EMAIL
The manpages detail the other lifecycle events.
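As a rough guide (the manpages are authoritative), the other tags follow the FMA problem lifecycle, so configuring them looks much the same - for example:

# svccfg setnotify problem-updated mailto:$EMAIL
# svccfg setnotify problem-repaired mailto:$EMAIL
# svccfg setnotify problem-resolved mailto:$EMAIL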

Tuesday Sep 16, 2008

New! Solaris x86 xVM CPU & Memory Fault Handling

We recently completed the first phase of re-enabling traditional Solaris Fault Management features on x86 systems running the Solaris xVM Hypervisor; this support was integrated into Solaris Nevada build 99, and will from there find its way into a future version of the xVM Server product (post 1.0).

In such systems prior to build 99 there is essentially no hardware error handling or fault diagnosis. Our existing Solaris features in this area were designed for native ("bare metal") environments in which the Solaris OS instance has full access to and control over the underlying hardware. When a type 1 hypervisor is inserted it becomes the privileged owner of the hardware, and native error handling and fault management capabilities of a dom0 operating system such as Solaris are nullified.

The two most important sources of hardware errors for our purposes are cpu/memory and IO (PCI/PCI-E). This first phase restores cpu/memory error handling and fault diagnosis to near-native capabilities. A future phase will restore IO fault diagnosis.

Both AMD and Intel systems are equally supported in this implementation. The examples below are from an AMD lab system - details on an Intel system will differ trivially.

A Worked Example - A Full Fault Lifecycle Illustration

We'll follow a cpu fault through its full lifecycle. I'm going to include sample output from administrative commands only, and I'm not going to look "under the hood" - the intention is to show how simple this is and hide the under-the-hood complexities!

I will use my favourite lab system - a Sun Fire v40z server with 4 x AMD Opteron processors, one of which has a reliable fault that produces no end of correctable instruction cache and l2 cache single-bit errors. I've used this host "parity" in a past blog entry - if you contrast the procedure and output there with that below you'll (hopefully!) see we've come a long way.

The administrative interface for Solaris Fault Management is unchanged with this new feature - you now just run fmadm on a Solaris dom0 instead of natively. The output below is from host "parity" dom0 (in an i86xpv Solaris xVM boot).

Notification Of Fault Diagnosis - (dom0) Console, SNMP

Prior to diagnosis of a fault you'll see nothing on the system console. Errors are being logged in dom0 and analysed, but we'll only trouble the console once a fault is diagnosed (console spam is bad). So after a period of uptime on this host we see the following on the console:

SUNW-MSG-ID: AMD-8000-7U, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Mon Sep 15 20:54:06 PDT 2008
PLATFORM: Sun Fire V40z, CSN: XG051535088, HOSTNAME: parity
SOURCE: eft, REV: 1.16
EVENT-ID: 8db62dbe-a091-e222-a6b4-dd95f015b4cc
DESC: The number of errors associated with this CPU has exceeded acceptable
	levels.  Refer to http://sun.com/msg/AMD-8000-7U for more information.
AUTO-RESPONSE: An attempt will be made to remove this CPU from service.
IMPACT: Performance of this system may be affected.
REC-ACTION: Schedule a repair procedure to replace the affected CPU.
	Use 'fmadm faulty' to identify the CPU.
If you've configured your system to raise SNMP traps on fault diagnosis you'll be notified by whatever software has been plumbed in to that mechanism, too.

Why, you ask, does the console message not tell us which cpu is faulty? That's because the convoluted means by which we generate these messages (convoluted to facilitate localization of messages in localized versions of Solaris) restricts us to a limited amount of dynamic content in the messages - an improvement in this area is in the works.

Show Current Fault State - fmadm faulty

For now we follow the advice and run fmadm faulty on dom0.

dom0# fmadm faulty   
--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Sep 15 20:54:06 8db62dbe-a091-e222-a6b4-dd95f015b4cc  AMD-8000-7U    Major    

Fault class : fault.cpu.amd.icachedata
Affects     : hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=3/core=0/strand=0
                  faulted and taken out of service
FRU         : "CPU 3" (hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=3)
                  faulty

Description : The number of errors associated with this CPU has exceeded
              acceptable levels.  Refer to http://sun.com/msg/AMD-8000-7U for
              more information.

Response    : An attempt will be made to remove this CPU from service.

Impact      : Performance of this system may be affected.

Action      : Schedule a repair procedure to replace the affected CPU.  Use
              'fmadm faulty' to identify the CPU.
Note the "faulted and taken out of service" - we have diagnosed this resource as faulty and have offlined the processor at the hypervisor level. This system supports cpu and memory FRU labels, so all that hc://.... mumbo-jumbo can be ignored - the thing to replace is the processor in the socket labelled "CPU 3" (either on the board or on the service sticker affixed to the chassis lid, or both - it depends on the system).
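If you want the raw suspect-list record behind that output, fmdump can replay it from the fault log by UUID - the UUID being the EVENT-ID shown above (output omitted here; -V gives the full event detail):

dom0# fmdump -v -u 8db62dbe-a091-e222-a6b4-dd95f015b4cc
dom0# fmdump -V -u 8db62dbe-a091-e222-a6b4-dd95f015b4cc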

Replace The Bad CPU

Schedule downtime to replace the chip in location "CPU 3". Software can't do this for you!

Declare Your Replacement Action

dom0# fmadm replaced "CPU 3"
fmadm: recorded replacement of CPU 3
This is an unfortunate requirement: x86 systems do not support cpu serial numbers (the privacy world rioted when Intel included such functionality and enabled it by default), and so we cannot auto-detect the replacement of a cpu FRU. On sparc systems, and on Sun x86 systems for memory dimms, we do have FRU serial number support and this step is not required.

When the replacement is declared as above (or, for FRU types that have serial numbers, at reboot following FRU replacement) the console will confirm that all aspects of this fault case have been addressed. At this point the cpu has been brought back online at the hypervisor level.
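As an aside, fmadm replaced is the right verb when the FRU really has been swapped. If you instead repair the FRU in place (reseat it, say), or conclude the diagnosis was a false alarm, fmadm also offers repaired and acquit subcommands - roughly as below (see fmadm(1m); availability of acquit depends on your build):

dom0# fmadm repaired "CPU 3"
dom0# fmadm acquit 8db62dbe-a091-e222-a6b4-dd95f015b4cc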

Did You See Any xVM In That?

No, you didn't! Native/xVM support is seamless. The steps and output above did not involve any xVM-specific steps or verbiage. Solaris is running as a dom0, really!

dom0# uname -a
SunOS parity 5.11 on-fx-dev i86pc i386 i86xpv
Actually there are a few differences elsewhere. The most notable is that psrinfo output will not change when the physical cpu is offlined at the hypervisor level:
dom0# psrinfo   
0       on-line   since 09/09/2008 03:25:15
1       on-line   since 09/09/2008 03:25:19
2       on-line   since 09/09/2008 03:25:19
3       on-line   since 09/09/2008 03:25:19
That's because psrinfo is reporting the state of virtual cpus and not physical cpus. We plan to integrate functionality to query and control the state of physical cpus, but that is yet to come.

Another Example - Hypervisor Panic

In this case the processor suffers an uncorrected data cache error, losing hypervisor state. The hypervisor handles the exception, dispatches telemetry towards dom0, and initiates a panic. The system was booted under Solaris xVM at the time of the error:

Xen panic[dom=0xffff8300e319c080/vcpu=0xffff8300e3de8080]: Hypervisor state lost due to machine check exception.

	rdi: ffff828c801e7348 rsi: ffff8300e2e09d48 rdx: ffff8300e2e09d78
	rcx: ffff8300e3dfe1a0  r8: ffff828c801dd5fa  r9:        88026e07f
	rax: ffff8300e2e09e38 rbx: ffff828c8026c800 rbp: ffff8300e2e09e28
	r10: ffff8300e319c080 r11: ffff8300e2e09cb8 r12:                0
	r13:        a00000400 r14: ffff828c801f12e8 r15: ffff8300e2e09dd8
	fsb:                0 gsb: ffffff09231ae580  ds:               4b
	 es:               4b  fs:                0  gs:              1c3
	 cs:             e008 rfl:              282 rsp: ffff8300e2e09d30
	rip: ffff828c80158eef:  ss:             e010
	cr0: 8005003b  cr2: fffffe0307699458  cr3: 6dfeb5000  cr4:      6f0
Xen panic[dom=0xffff8300e319c080/vcpu=0xffff8300e3de8080]: Hypervisor state lost due to machine check exception.


ffff8300e2e09e28 xpv:mcheck_cmn_handler+302
ffff8300e2e09ee8 xpv:k8_machine_check+22
ffff8300e2e09f08 xpv:machine_check_vector+21
ffff8300e2e09f28 xpv:do_machine_check+1c
ffff8300e2e09f48 xpv:early_page_fault+77
ffff8300e2e0fc48 xpv:smp_call_function_interrupt+86
ffff8300e2e0fc78 xpv:call_function_interrupt+2b
ffff8300e2e0fd98 xpv:do_mca+9f0
ffff8300e2e0ff18 xpv:syscall_enter+67
ffffff003bda1b90 unix:xpv_int+46 ()
ffffff003bda1bc0 unix:cmi_hdl_int+42 ()
ffffff003bda1d20 specfs:spec_ioctl+86 ()
ffffff003bda1da0 genunix:fop_ioctl+7b ()
ffffff003bda1eb0 genunix:ioctl+174 ()
ffffff003bda1f00 unix:brand_sys_syscall32+328 ()

syncing file systems... done
ereport.cpu.amd.dc.data_eccm ena=f0b88edf0810001 detector=[ version=0 scheme=
 "hc" hc-list=[ hc-name="motherboard" hc-id="0" hc-name="chip" hc-id="1"
 hc-name="core" hc-id="0" hc-name="strand" hc-id="0" ] ] compound_errorname=
 "DCACHEL1_DRD_ERR" disp="unconstrained,forcefatal" IA32_MCG_STATUS=7
 machine_check_in_progress=1 ip=0 privileged=0 bank_number=0 bank_msr_offset=
 400 IA32_MCi_STATUS=b600a00000000135 overflow=0 error_uncorrected=1
 error_enabled=1 processor_context_corrupt=1 error_code=135
 model_specific_error_code=0 IA32_MCi_ADDR=2e7872000 syndrome=1 syndrome-type=
 "E" __injected=1

dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
100% done
rebooting
Note that as part of the panic flow dom0 has retrieved the fatal error telemetry from the hypervisor address space, and it dumps the error report to the console. Hopefully we won't have to interpret that output - it's only there in case the crash dump fails (e.g., if there is no dump device). The ereport is also written to the system dump device at a well-known location, and the fault management service will retrieve this telemetry when it restarts on reboot.

On the subsequent reboot we see:

v3.1.4-xvm chgset 'Sun Sep 07 13:00:23 2008 -0700 15898:354d7899bf35'
SunOS Release 5.11 Version on-fx-dev 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
DEBUG enabled
(xVM) Xen trace buffers: initialized
Hostname: parity
NIS domain name is scalab.sfbay.sun.com
Reading ZFS config: done.
Mounting ZFS filesystems: (9/9)

parity console login:
(xVM) Prepare to bring CPU1 down...

SUNW-MSG-ID: AMD-8000-AV, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Mon Sep 15 23:05:21 PDT 2008
PLATFORM: Sun Fire V40z, CSN: XG051535088, HOSTNAME: parity
SOURCE: eft, REV: 1.16
EVENT-ID: f107e323-9355-4f70-dd98-b73ba7881cc5
DESC: The number of errors associated with this CPU has exceeded acceptable levels.  Refer to http://sun.com/msg/AMD-8000-AV for more information.
AUTO-RESPONSE: An attempt will be made to remove this CPU from service.
IMPACT: Performance of this system may be affected.
REC-ACTION: Schedule a repair procedure to replace the affected CPU.  Use 'fmadm faulty' to identify the module.
(xVM) CPU 1 is now offline
Pretty cool - huh? We diagnosed a terminal MCA event that caused a hypervisor panic and system reset. You'll only see the xVM-prefixed messages if you have directed hypervisor serial console output to dom0 console. At this stage we can see the fault in fmadm:
dom0# fmadm faulty -f
------------------------------------------------------------------------------
"CPU 1" (hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=1) faulty
f107e323-9355-4f70-dd98-b73ba7881cc5 1 suspects in this FRU total certainty 100%

Description : The number of errors associated with this CPU has exceeded
              acceptable levels.  Refer to http://sun.com/msg/AMD-8000-AV for
              more information.

Response    : An attempt will be made to remove this CPU from service.

Impact      : Performance of this system may be affected.

Action      : Schedule a repair procedure to replace the affected CPU.  Use
              'fmadm faulty' to identify the module.
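You can also confirm that the fatal ereport rescued from the dump device made it into the fault manager's error log after the reboot; something like the following should show it (output omitted - fmdump -e lists error reports, -eV dumps their full payload):

dom0# fmdump -e | grep data_eccm
dom0# fmdump -eV | more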

Phase 1 Architecture

"Architecture" may be overstating it a bit - I like to call this an "initial enabling" since the architecture will have to evolve considerably as the x86 platform itself matures to offer features that will make many of the sun4v/ldoms hardware fault management features possible on x86.

The aim was to re-use as much existing Solaris error handling and fault diagnosis code as possible. That's not just for efficiency - it also allows interoperability and compatibility between native and virtualized boots for fault management. The first requirement was to expose physical cpu information to dom0 from the hypervisor: dom0 is normally presented with virtual cpu information - no good for diagnosing real physical faults. That information allows Solaris to initialize a cpu module handle for each physical (chipid, coreid, strandid) combination in the system, and to associate telemetry arising from the hypervisor with a particular cpu resource. It also allows us to enumerate the cpu resources in our topology library - we use this in our diagnosis software.

Next we had to modify our chip topology, as reported by /usr/lib/fm/fmd/fmtopo and as used in the diagnosis software, from chip/cpu (misguidedly introduced with the original Solaris AMD fault management work - see previous blogs) to a more physical representation of chip/core/strand - you can see the alignment with the cpu module handles mentioned above.
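You can see the resulting topology for yourself by running the enumerator directly on dom0; a trimmed, illustrative listing from host "parity" would include nodes along these lines (the full output is much longer):

dom0# /usr/lib/fm/fmd/fmtopo
...
hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=3
hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=3/core=0
hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=3/core=0/strand=0
...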

Then communication channels were introduced from the hypervisor to dom0 in order to deliver error telemetry to dom0 for logging and diagnosis:

  • Error exceptions are handled in the hypervisor; it decides whether we can continue or need to panic. In both cases the available telemetry (read from MCA registers) is packaged up and forwarded to dom0.
  • If the hypervisor needs to panic as the result of an error then the crashdump procedure, handled as today by dom0, extracts the in-flight terminal error telemetry and stores it at a special location on the dump device. At subsequent reboot (whether native or virtualized) we extract this telemetry and submit it for logging and diagnosis.
  • The hypervisor polls MCA state for corrected errors (it's done this for ages) and forwards any telemetry it finds on to dom0 for logging and diagnosis (instead of simple printk to the hypervisor console).
  • The dom0 OS has full PCI access; when error notifications arrive in dom0 we check the memory-controller device for any additional telemetry. We also perform a periodic poll of the memory-controller device from dom0.
Once telemetry is received in dom0 a thin layer of "glue" kernel code injects the data into the common (existing, now shared between native and i86xpv boots) infrastructure which then operates without modification to raise error reports to dom0 userland diagnosis software. The userland diagnosis software applies the same algorithms as on native; if a fault is diagnosed then the retire agent may request to offline the physical processor to isolate it - this is effected through a new hypercall request to the hypervisor.
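The userland pieces are ordinary fmd modules, so you can see the diagnosis engine and retire agent loaded in the usual way. A trimmed, illustrative example (module versions and descriptions may differ by build):

dom0# fmadm config
MODULE                   VERSION STATUS  DESCRIPTION
cpumem-retire            1.1     active  CPU/Memory Retire Agent
eft                      1.16    active  eft diagnosis engine
...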

The obligatory picture:

Acknowledgements

The hypervisor to dom0 communication channel for MCA telemetry and the MCA hypercall itself that we've used in our project is derived from work contributed to the Xen community by Christoph Egger of AMD. We worked with Christoph on some design requirements for that work.

Frank van der Linden did the non-MCA hypervisor work - physical cpu information for dom0, and hypervisor-level cpu offline/online.

Sean Ye designed and implemented our new /dev/fm driver; the new topology enumerator that works identically regardless of dom0 or native; topology-method based resource retire for FMA agents. Sean also performed most of the glue work in putting various pieces together as they became available.

John Cui extended our fault test harness to test all the possible error and fault scenarios under native and xVM, and performed all our regression and a great deal of unit testing on a host of systems.

I did the MCA-related hypervisor and kernel work, and the userland error injector changes.

We started this project in January, with a developer in each of the United Kingdom, the Netherlands, and China. We finished in September with the same three developers, but after some continental reshuffling: the Netherlands developer moved to the US in March/April, and the UK developer moved to Australia in May/June. Thanks to our management for their indulgence and patience while we went off-the-map for extended periods mid-project!


Sunday Sep 14, 2008

Generic Machine Check Architecture (MCA) In Solaris

The work described below was integrated into Solaris Nevada way back in August 2007 - build 76; it has since been backported to Solaris 10. It's never too late to blog about things! Actually, I just want to separate this description from the entry that will follow - Solaris x86 xVM Fault Management.

Why Generic MCA?

In past blogs I have described x86 cpu and memory fault management feature-additions for specific processor types: AMD Opteron family 0xf revisions B-E, and AMD Opteron family 0xf revisions F and G. At the time of the first AMD work Sun was not shipping any Intel x64 systems; since then, of course, Sun has famously begun a partnership with Intel and so we needed to look at offering fault management support for our new Intel-based platforms.

Both AMD and Intel base their processor error handling and reporting on the generic Machine Check Architecture (MCA). AMD public documentation actually details and classifies error detectors and error types well beyond what generic MCA allows, while Intel public documentation generally maintains the abstract nature of MCA by not publishing model-specific details. Suppose we observe an error on MCA bank number 2 of a processor with status register MC2_STATUS reading 0xd40040000000017a. In generic MCA that will be classified as "GCACHEL2_EVICT_ERR in bank 2" - a generic transaction type experienced a level-2 cache error during a cache eviction event; other fields of the status register indicate that this event has model-specific error code 0, whether the error was hardware-corrected, and so on. In the case of Intel processors, public documentation does not generally describe additional classification of the error - for example it does not document which processor unit "bank 2" might correspond to, nor what model-specific error code 0 for the above error might tell us. In the AMD case all that detail is available, and we can further classify the error as having been detected by the "bus unit" (bank 2); the extended error code (part of the MCA model-specific error code) of 0 on an evict transaction indicates a "Data Copyback" single-bit ECC error from the L2 cache.
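To make that concrete, here is a rough worked decode of that status value against the generic MCA register layout (the field positions follow the architectural definition, but treat the annotations as illustrative rather than authoritative):

MC2_STATUS = 0xd40040000000017a
  bit 63   VAL   = 1      register holds a valid error
  bit 62   OVER  = 1      an earlier error was overwritten before this one was read
  bit 61   UC    = 0      the error was corrected
  bit 60   EN    = 1      reporting was enabled for this error
  bit 58   ADDRV = 1      MC2_ADDR holds a valid error address
  bits 31:16     = 0x0000 model-specific error code
  bits 15:0      = 0x017a architectural error code, cache format 0000 0001 RRRR TT LL
                          RRRR = 0111 (eviction), TT = 10 (generic), LL = 10 (level 2)
                          => GCACHEL2_EVICT_ERR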

Our first AMD support in Solaris included support for all this additional classification detail, and involved a substantial amount of work for each new processor revision. In the case of AMD family 0xf revision F processors (the first to support DDR2 memory) we were not able to get the required support back into Solaris 10 in time for the first Sun systems using those processors! We began to realize that this detailed model-specific approach was never going to scale - that was probably obvious at the beginning, but our SPARC roots had not prepared us for how quickly the x86 world rolls out new processor revisions! When the additional requirement to support Intel processors was made, we soon decided it was time to adopt a generic MCA approach.

We also realized that, with a few notable exceptions, we were not actually making any real use of the additional detail available in the existing AMD implementations. For example, our diagnosis algorithms for the error above would simply count the error as a corrected error from the L2 Cache data array - and all that information was available via generic MCA. So the design would be to implement most support in a generic MCA module, and to layer model-specific support on top of that to facilitate value-add from model-specific details when we can put them to good use - for diagnosis, for surviving an error where we might not on generic information alone, and so on.

What Does Solaris Generic MCA Support Look Like?

It may not have been a blog, but I have documented this before - in the Generic MCA FMA Portfolio documentation (we prepare such a portfolio for all FMA work). In particular, the Generic MCA Philosophy document describes how errors are classified, what error report class is used for each type of generic MCA error, what ereport payload content is included for each, etc.

Generic MCA Preparation For Virtualization - A Handle On Physical CPUs

In the generic MCA project we also decided to prepare the way for x86 cpu and memory fault handling in virtualized environments, such as Solaris x86 xVM Server. Our existing cpu module API at the time was all modelled around the assumption that there was a fixed, one-to-one relationship between Solaris logical cpus (as reported in psrinfo, for example) and real, physical processor execution resources. If running as a dom0, however, we may be limited to 2 virtual cpus (vcpus) while the underlying physical hardware may have 16 physical cpus (say 8 dual-core chips). And while Solaris dom0 may bind a software thread to a vcpu, it is actually the hypervisor that schedules vcpus onto real physical cpus (pcpus), so binding to a vcpu does not imply binding to a pcpu - if you're binding in order to read a few hardware registers from what should be the same cpu for all reads, you're going to be disappointed if the hypervisor switches you from one pcpu to another midway! And, anyway, you'd be lucky if the hypervisor let you read such registers at all - and you wouldn't know whether any values you read had been modified by the hypervisor.

So the generic MCA project, already reworking the cpu module interface very substantially, chose to insert a "handle" argument into most cpu module interfaces. A handle is associated with each physical (chip-id, core-id, strand-id) processor instance, and you need to look up and quote the handle you wish to operate on throughout the cpu module interface and the model-specific overlay interface, as well as in kernel modules delivering cpu module implementations or model-specific implementations.

Supported Platforms With Generic MCA in Solaris

The generic MCA project raises the baseline level of MCA support for all x86 processors. Before this project Solaris had detailed MCA support for AMD Opteron family 0xf, and very poor support for anything else - little more than dumping some raw telemetry to the console on fatal events, with no telemetry raised and therefore no logging or diagnosis of telemetry. With generic MCA we raise telemetry and diagnose all correctable and uncorrectable errors (the latter provided the hardware does not simply reset, not allowing the OS any option of gathering telemetry) on all x86 platforms, regardless of chip manufacturer or system vendor.

The MCA does not, however, cover all aspects of cpu/memory fault management:

  • Not all cpu and memory error reporting necessarily falls under the MCA umbrella. Most notably, Intel memory-controller hub (MCH) systems have an off-chip memory controller which can raise notifications through MCA, but most of the telemetry is available through device registers rather than through the MCA registers themselves.
  • The MCA does a pretty complete job of classifying within-chip errors down to a unit that you can track errors on (e.g., down to the instruction cache data and tag arrays). It does not classify off-chip errors other than generically, and additional model-specific support is required to interpret such errors. For example, AMD Opteron has an on-chip memory controller which reports under the MCA, but generic MCA can only classify as far as "memory error, external to the chip": model-specific support can refine the classification to recognise, say, a single-bit ECC error affecting bit 56 at a particular address, and a partnering memory-controller driver can resolve that address down to a memory dimm and a rank thereon.
  • For full fault management we require access to additional information about resources that we are performing diagnosis on. For example, we might diagnose that the chip with HyperTransport ID 2 has a bad L2 Cache - but to complete the picture it is nice to provide a FRU ("field-replaceable unit") label such as "CPU_2" that matches the service labelling on the system itself, and ideally also a serial number associated with the FRU.

So while all platforms have equal generic MCA capabilities, some platforms are more equal than others once additional value-add functionality is taken into account:

Memory-Controller Drivers

Some platforms have corresponding memory-controller drivers. Today we have such drivers for AMD family 0xf (mc-amd), the Intel 5000/5400/7300 chipset series (intel_nb5000), and Intel Nehalem systems (intel_nhm). These drivers provide at minimum an address-to-memory-resource translation service that lets us translate an error address (say 0x10344540) to a memory resource (e.g., chip 3, dimm 2, rank 1). They also supply memory topology information. Such drivers operate on all systems built with the chipset they implement support for, whoever the vendor.

Model-specific MCA Support

Some platforms have additional model-specific MCA support layered on top of the generic MCA code. "Legacy" AMD family 0xf support is maintained unchanged by a model-specific layer which is permitted to rewrite the generic ereport classes into the more-specific classes used in the original MCA work, and to add non-architectural ereport payload members such as the ECC syndrome. We also added a "generic AMD" support module which applies in the absence of any more-specific support; right now this covers all AMD families after 0xf, i.e. 0x10 and 0x11. This module permits the recognition and diagnosis of all memory errors even in the absence of a memory-controller driver for the platform. The Intel Nehalem support taught the Intel module to recognise additional Nehalem-specific error types and handle them appropriately. Such model-specific support applies regardless of vendor.

FRU Labelling

Some platforms have cpu and memory FRU label support, i.e., the ability to diagnose "CPU2 DIMM3" as bad instead of something like "chip=2/memory-controller=0/dimm=3", which actually requires platform schematics to map to a particular dimm with absolute certainty. For many reasons, this support is delivered via hardcoded rules in XML map files. Solaris today only delivers such information for Sun x64 systems; support for additional systems, including non-Sun platforms, is trivial to add in the XML - the tricky part is obtaining the schematics etc. to perform the mapping of chip/dimm instances to FRU labels.

FRU Serial Numbers

Some platforms have memory DIMM serial number support. Again we only deliver such support for Sun x64 systems, and how this is done varies a little (on Intel the memory-controller driver reads the DIMM SPD proms itself; on AMD we let the service processor do that and read the records from the service processor).

What?! - No Examples?

I'll leave a full worked example to my next blog entry, where I'll demonstrate the full story under Solaris xVM x86 hypervisor.


Monday Nov 20, 2006

London OpenSolaris FMA Presentation

On Wednesday last week I presented some aspects of Solaris FMA at the November meeting of the OpenSolaris User Group. It seemed to go well (no obvious snoring, and some good feedback) although it went over time by 10 or 15 minutes which is a dangerous thing to do when post-meeting beer and food beckons (worked around from the beginning by bringing the beer into the presentation). At Chris Gerhard's prompting I'm posting the slides here on my blog.


Tuesday Oct 24, 2006

FMA Support For AMD Opteron rev F

In my last blog entry I discussed the initial support in Solaris for fault management on AMD Opteron, Athlon 64 and Turion 64 processor systems (more accurately known as AMD Family 0xf, or the K8 family). That support applied to revisions E and earlier of those chips, which were current at the time, and has since been backported to Solaris 10 Update 2. In this entry I will discuss recent changes to support "revision F", and I'll follow up in the next few weeks with a side-by-side comparison of our error handling and fault management with that of one or two other OS offerings.

In August this year AMD introduced its "Next Generation AMD Opteron processor", which has a bunch of cool features (such as AMD Virtualization technology and a socket-compatible upgrade path to Quad-Core chips when they release), but the main change as far as error handling and fault management go is the switch to DDR2 memory. This next generation is also known as revision F (of AMD family 0xf) and sometimes also as the "socket F" (really F(1207)) and "socket AM2" processors; the initial Solaris support for AMD fault management does not apply to revision F (the cpu module driver would choose not to initialize for revisions beyond E).

Quick Summary

If you want to avoid the detail below here is the quick summary. Solaris now supports revision F as of Nevada build 51, with a backport targeting Solaris 10 Update 4 next year. The support works pretty much as for earlier revisions (most code is shared) - identically for cpu diagnosis, and with minor changes in memory diagnosis.

Revision F Support in Nevada Build 51

In early October I putback support for rev F, together with a number of improvements and bugfixes to the original project. Below is the putback notification including both the bugids and the files changed - most files that implement the AMD fault management support were touched to some degree, so this provides a handy list for anyone wanting to explore more deeply:

Event:            putback-to
Parent workspace: /ws/onnv-gate
                  (elpaso:/ws/onnv-gate)
Child workspace:  /net/tb2.uk.sun.com/u/gavinm/revF-clone4pb/onnv-dev
                  (tb2.uk.sun.com:/u/gavinm/revF-clone4pb/onnv-dev)
User:             gavinm

Comment:
PSARC 2006/564 FMA for Athlon 64 and Opteron Rev F/G Processors
PSARC 2006/566 eversholt language enhancements: confprop_defined
6362846 eversholt doesn't allow dashes in pathname components
6391591 AMD NB config should not set NbMcaToMstCpuEn
6391605 AMD DRAM scrubber should be disabled when errata #99 applies
6398506 memory controller driver should not bother to attach at all on rev F
6424822 FMA needs to support AMD family 0xf revs F and G
6443847 FMA x64 multibit ChipKill rules need to follow MQSC guidelines
6443849 Accrue inf_sys and s_ecc ECC errors against memory
6443858 mc-amd can free unitsrtr before usage in subsequent error path
6443891 mc-amd does not recognise mismatched dimm support
6455363 x86 error injector should allow addr option for most errors
6455370 Opteron erratum 101 only applies on revs D and earlier
6455373 Identify chip-select lines used on a dimm
6455377 improve x64 quadrank dimm support
6455382 add generic interfaces for amd chip revision and package/socket type
6468723 mem scheme fmri containment test for hc scheme is busted
6473807 eversholt could use some mdb support
6473811 eversholt needs a confprop_defined function
6473819 eversholt should show version of rules active in DE
6475302 ::nvlist broken by some runtime link ordering changes

Files:
update: usr/closed/cmd/mtst/x86/common/opteron/opt.h
update: usr/closed/cmd/mtst/x86/common/opteron/opt_common.c
update: usr/closed/cmd/mtst/x86/common/opteron/opt_main.c
update: usr/closed/cmd/mtst/x86/common/opteron/opt_nb.c
update: usr/src/cmd/fm/dicts/AMD.dict
update: usr/src/cmd/fm/dicts/AMD.po
update: usr/src/cmd/fm/eversholt/common/check.c
update: usr/src/cmd/fm/eversholt/common/eftread.c
update: usr/src/cmd/fm/eversholt/common/eftread.h
update: usr/src/cmd/fm/eversholt/common/esclex.c
update: usr/src/cmd/fm/eversholt/common/escparse.y
update: usr/src/cmd/fm/eversholt/common/literals.h
update: usr/src/cmd/fm/eversholt/common/tree.c
update: usr/src/cmd/fm/eversholt/common/tree.h
update: usr/src/cmd/fm/eversholt/files/i386/i86pc/amd64.esc
update: usr/src/cmd/fm/modules/Makefile.plugin
update: usr/src/cmd/fm/modules/common/cpumem-retire/cma_main.c
update: usr/src/cmd/fm/modules/common/cpumem-retire/cma_page.c
update: usr/src/cmd/fm/modules/common/eversholt/Makefile
update: usr/src/cmd/fm/modules/common/eversholt/eval.c
update: usr/src/cmd/fm/modules/common/eversholt/fme.c
update: usr/src/cmd/fm/modules/common/eversholt/platform.c
update: usr/src/cmd/fm/schemes/mem/mem_unum.c
update: usr/src/cmd/mdb/common/modules/genunix/genunix.c
update: usr/src/cmd/mdb/common/modules/genunix/nvpair.c
update: usr/src/cmd/mdb/common/modules/genunix/nvpair.h
update: usr/src/cmd/mdb/common/modules/libnvpair/libnvpair.c
update: usr/src/cmd/mdb/i86pc/modules/amd_opteron/ao.c
update: usr/src/common/mc/mc-amd/mcamd_api.h
update: usr/src/common/mc/mc-amd/mcamd_misc.c
update: usr/src/common/mc/mc-amd/mcamd_patounum.c
update: usr/src/common/mc/mc-amd/mcamd_rowcol.c
update: usr/src/common/mc/mc-amd/mcamd_rowcol_impl.h
update: usr/src/common/mc/mc-amd/mcamd_rowcol_tbl.c
update: usr/src/common/mc/mc-amd/mcamd_unumtopa.c
update: usr/src/lib/fm/topo/libtopo/common/hc_canon.h
update: usr/src/lib/fm/topo/libtopo/common/mem.c
update: usr/src/lib/fm/topo/libtopo/common/topo_protocol.c
update: usr/src/lib/fm/topo/modules/i86pc/chip/chip.c
update: usr/src/lib/fm/topo/modules/i86pc/chip/chip.h
update: usr/src/pkgdefs/SUNWfmd/prototype_com
update: usr/src/uts/i86pc/Makefile.files
update: usr/src/uts/i86pc/Makefile.workarounds
update: usr/src/uts/i86pc/cpu/amd_opteron/ao.h
update: usr/src/uts/i86pc/cpu/amd_opteron/ao_cpu.c
update: usr/src/uts/i86pc/cpu/amd_opteron/ao_main.c
update: usr/src/uts/i86pc/cpu/amd_opteron/ao_mca.c
update: usr/src/uts/i86pc/cpu/amd_opteron/ao_mca_disp.in
update: usr/src/uts/i86pc/cpu/amd_opteron/ao_poll.c
update: usr/src/uts/i86pc/io/mc/mcamd.h
update: usr/src/uts/i86pc/io/mc/mcamd_drv.c
update: usr/src/uts/i86pc/io/mc/mcamd_off.in
update: usr/src/uts/i86pc/io/mc/mcamd_subr.c
update: usr/src/uts/i86pc/mc-amd/Makefile
update: usr/src/uts/i86pc/os/cmi.c
update: usr/src/uts/i86pc/os/cpuid.c
update: usr/src/uts/i86pc/sys/cpu_module.h
update: usr/src/uts/i86pc/sys/cpu_module_impl.h
update: usr/src/uts/intel/sys/fm/cpu/AMD.h
update: usr/src/uts/intel/sys/mc.h
update: usr/src/uts/intel/sys/mc_amd.h
update: usr/src/uts/intel/sys/mca_amd.h
update: usr/src/uts/intel/sys/x86_archext.h
create: usr/src/cmd/fm/modules/common/eversholt/eft_mdb.c
create: usr/src/uts/i86pc/io/mc/mcamd_dimmcfg.c
create: usr/src/uts/i86pc/io/mc/mcamd_dimmcfg.h
create: usr/src/uts/i86pc/io/mc/mcamd_dimmcfg_impl.h
create: usr/src/uts/i86pc/io/mc/mcamd_pcicfg.c
create: usr/src/uts/i86pc/io/mc/mcamd_pcicfg.h
These changes will appear in Nevada build 51, and are slated for backport to Solaris 10 Update 4 (which is a bit distant, but Update 3 is almost out the door and long-since frozen for project backports).

New RAS Features In Revision F

In terms of error handling, the processor cores and attendant caches in revision F chips are substantially unchanged from earlier revisions. There are no new error types detected in these banks (icache, dcache, load-store unit, bus unit aka l2cache), so there are no additional error reports for us to raise and no change required to the diagnosis rules that consume those error reports.

While the switch to DDR2 is a substantial feature change for the AMD chip (and makes it even faster!) this really only affects how we discover the memory configuration. The types of error and fault that DDR2 memory DIMMs can experience are much the same as for DDR1, and so the set of errors reported from memory is nearly identical across all revisions - revision F adds parity checking on the dram command and address lines, but otherwise detects and reports the same set of errors.

An Online Spare Chip-Select is introduced in revision F. The BIOS can choose (if the option is available and selected) to set aside a chip-select on each memory node to act as a standby should any other chip-select be determined to be bad. When a bad chip-select is diagnosed software (OS or BIOS) can write the details to the Online Spare Control Register to initiate a copy of the bad chip-select contents to the online spare, and to redirect all access to the bad chip-select to the spare. The copy proceeds at a huge rate (over 1GB/s) and does not interrupt system operation at all.

Revision F introduces some hardware support for counting ECC error occurrences. The NorthBridge counts all ECC errors it sees (regardless of source on that memory node) in a 12-bit counter in the DRAM Errors Threshold Register. Software can set a starting value for this counter, which increments with every ECC error observed by the NorthBridge; when the counter overflows (tries to go beyond 4095) software receives a preselected interrupt (either for the OS or for the BIOS, depending on setup). The interrupt handler could perhaps diagnose a dimm as faulty, although it does not know from this information alone which DIMM(s) to suspect.

The Online Spare Control Register also exposes 16 4-bit counters - one for each of 8 possible chip-selects on each of dram channels A and B. Again software may initialize the counters and receive an interrupt when the counter overflows. With information from all chip-selects of each channel it is possible to narrow the source to a single rank (side) of a single DIMM.

An Overview of Code Changes

With the processor cores substantially unchanged from our point of view, introducing error handling and fault management for cpu errors on these chips was pretty trivial - really just a question of allowing the cpu module to initialize in ao_init.

The changes for the transition from DDR1 to DDR2 memory are all within the memory controller and dram controllers in the on-chip NorthBridge, seen by Solaris as additions and changes to the memory controller registers that it can query and set via PCI config space accesses. We replaced our previous collection of #defines for the bitmasks and bitfields of the memory controller registers with data structures that describe the bitfields for all revisions, plus some MC_REV_\* and MCREG_\* macros for easy revision tests and structure access; these flatten out the revision differences when the mcamd_drv.c code uses them. Our mc-amd driver is responsible for discovering the memory configuration details of a system, and also for performing error-address-to-dimm-resource translation. With it now able to read the changed memory controller registers, the next step was to teach it to interpret the values in the bitfields of those registers as changed for revision F. Most of the work here was in teaching it how to resolve a dram address into row, column and internal bank addresses for the new revision, but the error-address-to-dimm-resource translation algorithm also required changes to allow for the possible presence of an online spare chip-select.

The cpu module cpu.AuthenticAMD.15 for AMD family 15 (0xf) mostly required changes to allow for revision-dependent error detector bank configuration. It was also changed to initialize the Online Spare Control Register and the NorthBridge ECC Error Threshold Register - more about that below - and to provide easier control over its hardware configuration activities for the three hardware scrubbers, the hardware watchdog, and the NorthBridge MCA Config register.

The mc-amd driver also grew to understand all possible DIMM configurations on the AMD platform. It achieves this through working out which socket type we're running on (754, 939, 940, AM2, F(1207), S1g1) and what class of DIMMs we are working with - normal, quadrank registered DIMMs, or quadrank SODIMM - and performing a table lookup. The result determines, for each chip-select base and mask pair, how the resulting logical DIMM is numbered and which physical chip-select lines are used in operating that chip-select (from channel, socket number and rank number). The particular table selected for lookup is also determined by whether mismatched DIMM support is present - unmatched DIMMs present in the channelA/channelB DIMM socket pairs. This information determines how we will number the DIMMs and ranks thereon in the topology that is built in the mc-amd driver and thereafter in full glory in the topology library. This is very important, since the memory controller registers only work in terms of nodes and chip-selects while we want to diagnose down to individual DIMM and rank thereon (requesting replacement of a particular DIMM is so much better than requesting replacement of a chip-select).

With detailed information on the ranks of each DIMM now available I changed the cpu/memory diagnosis topology to list the ranks of a DIMM, and the diagnosis rules to diagnose down to the individual DIMM rank. This was so as to facilitate a swap to any online spare chip-select when we diagnose a dimm fault (chip-selects are built from ranks, not DIMMs). The new cpu/memory topology is as follows:

hc:///motherboard=0/chip=0/cpu=0
hc:///motherboard=0/chip=0/cpu=1
hc:///motherboard=0/chip=0/memory-controller=0
hc:///motherboard=0/chip=0/memory-controller=0/dram-channel=0
hc:///motherboard=0/chip=0/memory-controller=0/dram-channel=1
hc:///motherboard=0/chip=0/memory-controller=0/chip-select=0
hc:///motherboard=0/chip=0/memory-controller=0/chip-select=1
hc:///motherboard=0/chip=0/memory-controller=0/dimm=0
hc:///motherboard=0/chip=0/memory-controller=0/dimm=0/rank=0
hc:///motherboard=0/chip=0/memory-controller=0/dimm=0/rank=1
hc:///motherboard=0/chip=0/memory-controller=0/dimm=1
hc:///motherboard=0/chip=0/memory-controller=0/dimm=1/rank=0
hc:///motherboard=0/chip=0/memory-controller=0/dimm=1/rank=1
This is a single-socket (chip=0) dual-core (cpu=0 and cpu=1 on chip 0) socket AM2 system with two DIMMs installed (dimm=0 and dimm=1), both dual-rank (rank=0 and rank=1 on each DIMM). The DIMMs are numbered 0 and 1 based on the DIMM configuration information discussed above - chip-select base/mask pair 0 is active for the 128-bit wide chip-select formed from the two rank=0 ranks, and chip-select base/mask pair 1 is active for the 128-bit wide chip-select formed from the two rank=1 ranks. All of this information, and more, is present as properties on the topology nodes. The following shows the properties of the memory-controller node, one active chip-select on it, the two DIMMs contributing to that chip-select (one from each channel), and the specific ranks on those two DIMMs that are used in the chip-select.
hc:///motherboard=0/chip=0/memory-controller=0
  group: memory-controller-properties   version: 1   stability: Private/Private
    num               uint64    0x0
    revision          uint64    0x20f0020
    revname           string    F
    socket            string    Socket AM2
    ecc-type          string    Normal 64/8
    base-addr         uint64    0x0
    lim-addr          uint64    0x7fffffff
    node-ilen         uint64    0x0
    node-ilsel        uint64    0x0
    cs-intlv-factor   uint64    0x2
    dram-hole-size    uint64    0x0
    access-width      uint64    0x80
    bank-mapping      uint64    0x2
    bankswizzle       uint64    0x0
    mismatched-dimm-support uint64    0x0

hc:///motherboard=0/chip=0/memory-controller=0/chip-select=0
  group: chip-select-properties         version: 1   stability: Private/Private
    num               uint64    0x0
    base-addr         uint64    0x0
    mask              uint64    0x7ffeffff
    size              uint64    0x40000000
    dimm1-num         uint64    0x0
    dimm1-csname      string    MA0_CS_L[0]
    dimm2-num         uint64    0x1
    dimm2-csname      string    MB0_CS_L[0]

hc:///motherboard=0/chip=0/memory-controller=0/dimm=0
    num               uint64    0x0
    size              uint64    0x40000000

hc:///motherboard=0/chip=0/memory-controller=0/dimm=0/rank=0
  group: rank-properties                version: 1   stability: Private/Private
    size              uint64    0x20000000
    csname            string    MA0_CS_L[0]
    csnum             uint64    0x0

hc:///motherboard=0/chip=0/memory-controller=0/dimm=1
    num               uint64    0x1
    size              uint64    0x40000000

hc:///motherboard=0/chip=0/memory-controller=0/dimm=1/rank=0
  group: rank-properties                version: 1   stability: Private/Private
    size              uint64    0x20000000
    csname            string    MB0_CS_L[0]
    csnum             uint64    0x0

ECC Error Thresholding, And Lack Thereof

At boot, in functions nb_mcamisc_init and ao_sparectl_cfg, Solaris clears any BIOS-registered interrupt for ECC counter overflow and does not install its own. Thus we choose not to make use of the hardware ECC counting and thresholding mechanism, since we can already perform more advanced counting and diagnostic analysis in our diagnosis engine rules. The payload of ECC error ereports is modified to include the NorthBridge Threshold Register in case it should prove interesting in any required human analysis of the ereport logs (not normally required); the counts from the Online Spare Control Register are not in the payload (I may add them soon).

Further Work In Progress

The following are all in-progress in the FMA group at the moment, with the intention and hope of catching Solaris 10 Update 4 (some may miss if they prove to be more difficult than expected).

PCI/PCIE Diagnosis and Driver Hardening API

This is already in Nevada, since build 43. It provides both full PCI/PCIE diagnosis and an API with which drivers may harden themselves to various types of device error.

Topology Library Updates

This is approaching putback to Nevada. The topology library is one of the foundations of FMA as we move forward - it will be our repository for information on all things FMA.

More Advanced ECC Telemetry Analysis

In build 51, memory ECC telemetry is fed into simple SERD engines which "count" occurrences and decay them over time (so events from months ago are not necessarily considered to match and aggravate occurrences from seconds ago). We are currently developing more advanced diagnosis rules which will distinguish isolated memory cell faults from the frequent errors produced by something like a stuck pin or a failed sdram in ChipKill mode, and which will check which bits are in error to see whether an uncorrectable error appears to be imminent.
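Out of curiosity you can watch those engines and the module hosting them with fmstat; a sketch, assuming the eft module hosts the SERD engines and that your build's fmstat supports the -m (per-module) and -s (SERD statistics) options described in fmstat(1M):

# fmstat -m eft
# fmstat -s -m eft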

CPU/Memory FRU Labelling for AMD Platforms

I touched on how we number cpus and memory above. How these relate to the actual FRU labels on a random AMD platform, if at all, is difficult to impossible to determine in software (notwithstanding, or perhaps because of, the efforts of SMBIOS etc!). So our "dimm0" may be silkscreened as "A0", "B0D0", "BOD3", "DIMM5" etc. With SMBIOS proving to be pretty useless in providing these labels accurately, never mind associating them with the memory configuration one can discover through the memory controller, we will resort to some form of hardcoded table for at least the AMD platforms that Sun has shipped.

CPU/Memory FRU Serial Numbers for AMD Platforms

Again, these are tricky to come by in a generic fashion but for Sun platforms we can teach our software the various ways of retrieving serial numbers. This will assist in replacement of the correct FRU, and in allowing the FMA software to detect when a faulted FRU has been replaced.



Thursday Mar 16, 2006

AMD Opteron/Athlon64/Turion64 Fault Management

Fault Management for Athlon64 and Opteron

In February we (the Solaris Kernel/RAS group) integrated the "fma x64" project into Solaris Nevada, delivering Fault Management for the AMD K8 family of chips (Athlon(TM) 64, Opteron(TM), Turion(TM) 64). This brings cpu and memory fault management on the Solaris AMD64 platform up to par with that already present on Sun's current SPARC platforms, and addresses one of the functionality gaps most frequently raised by customers (or potential customers) of Sun's impressive (and growing) AMD Opteron family offerings (the project, of course, benefits all AMD 64-bit systems, not just those from Sun). We had planned some blogging from the rooftops about the project back at integration time, but instead we all concentrated on addressing the sleep deficit that the last few hectic weeks of a big project brought; and since putback to the Solaris 11 gate there has also been much effort in preparing the backport to Solaris 10 Update 2 (aka Solaris 10 06/06).

Well, it has already hit the streets now that Solaris Express Community Edition build 34 is available for download and the corresponding source is available at cvs.opensolaris.org (around 315 files, search for "2006/020" in file history). There are a few bug fixes that will appear in build 36, but build 34 has all of the primary fault management functionality.

In this blog I'll attempt an overview of the project functionality, with some examples. In future entries I'll delve into some of the more subtle nuances. Before I begin I'll highlight something the project does not deliver: any significant improvement in machine error handling and fault diagnosis for Intel chips (i.e., anything more than a terse console message). This stuff is very platform/chip dependent, and since Sun has a number of AMD Opteron workstation and server offerings with much more in the pipeline, AMD was the natural first target for enhanced support. The project also does not deliver hardened IO drivers and the corresponding diagnosis - that is the subject of a follow-on project due in a couple of months.

AMD64 Platform Error Detection Architecture Overview

The following image shows the basics of a 2 chip dual-core (4 cores total) AMD system (with apologies to StarOffice power users):

Each chip may have more than one core (current AMD offerings have up to two cores). Each core has an associated on-chip level 1 instruction cache, level 1 data cache, and level 2 cache. There is also an on-chip memory controller (one per chip) which can control up to 8 DDR memory modules. All cores on all chips can access all memory, but the access is not uniform - accesses to a "remote" node involve a HyperTransport request to the owning node, which responds with the data. For historical reasons the functional unit within the chip that includes the memory controller, dram controller, HyperTransport logic, crossbar and no doubt a few other tricks is known as the "Northbridge".

This project is concerned with the initial handling of an error, marshalling of error telemetry from the cpu and memory components (a followup project, due in the next few months, will do the same for telemetry from the IO subsystem), and then consuming that telemetry to produce any appropriate diagnosis of any fault that is determined to be present. These chips have a useful array of error detectors, as described in the following table:

Functional Unit             Array                           Protection
Instruction Cache (IC)      Icache main tag array           Parity
                            Icache snoop tag array          Parity
                            Instruction L1 TLB              Parity
                            Instruction L2 TLB              Parity
                            Icache data array               Parity
Data Cache (DC)             Dcache main tag array           Parity
                            Dcache snoop tag array          Parity
                            Dcache L1 TLB                   Parity
                            Dcache L2 TLB                   Parity
                            Dcache data array               ECC
L2 Cache ("bus unit") (BU)  L2 cache main tag array         ECC and Parity
                            L2 cache data array             ECC
Northbridge (NB)            Memory controlled by this node  ECC (depends on dimms)

Table 1: SRAM and DRAM memory arrays

There are a number of other detectors present, notably in the Northbridge, but the above are the main sram and dram arrays.

If an error is recoverable then it does not raise a Machine Check Exception (MCE or mc#) when detected. The recoverable errors, broadly speaking, are single-bit ECC errors from ECC-protected arrays and parity errors on clean parity-protected arrays such as the Icache and the TLBs (translation lookaside buffers - virtual to physical address translations). Instead of a mc#, the recoverable errors simply log error data into the machine check architecture registers of the detecting bank (IC/DC/BU/NB, and one we don't mention in the table above - the Load-Store unit, LS) and the operating system (or an advanced BIOS implementation) can poll those registers to harvest the information. No special handling of the error is required (e.g., no need to flush caches, panic etc).

If an error is irrecoverable then detection of that error will raise a machine check exception (if the bit that controls mc# for that error type is set; if not you'll either never know or you pick it up by polling). The mc# handler can extract information about the error from the machine check architecture registers as before, but has the additional responsibility of deciding what further actions (which may include panic and reboot) are required. A machine check exception is a form of interrupt which allows immediate notification of an error condition - you can't afford to wait to poll for the error since that could result in the use of bad data and associated data corruption.

Traditional Error Handling - The "Head in the Sand" Approach

The traditional operating system (all OS, not just Solaris) approach to errors in the x86 cpu architecture is as follows:

  • leave the BIOS to choose which error-detecting banks to enable, and which detected irrecoverable errors will raise a mc#
  • if a mc# is raised the OS fields it and terminates with a useful diagnostic message such as "Machine Check Occured", not even hinting at the affected resource
  • ignore recoverable errors, i.e. don't poll for their occurrence; a more advanced BIOS will perhaps poll for these errors but is not in a position to do anything about them while the OS is running
That is not unreasonable for the x86 cpus of years gone by, since they typically had very little in the way of data protection either on-chip or for memory. But more recently they have improved in this area, and protection of the on-chip arrays is now common as is ECC-protected main memory. With the sizes of on-chip memory arrays such as the L2 cache growing, especially for chips offered for typical server use, there is also all the more chance that they will have defects introduced during manufacturing, subsequent handling, while installed etc.

Recognising the increased need for error handling and fault management on the x86 platform, some operating systems have begun to offer limited support in this area. Solaris has been doing this for some time on sparc (let's just say that the US-II E-cache disaster did have some good side-effects!) and so in Solaris we will offer the well-rounded end-to-end fault management on amd64 platforms that we already have on sparc.

A Better Approach - Sun's Fault Management Architecture "FMA"

In a previous blog entry I described the Sun Fault Management Architecture. Error events flow into a Fault Manager and associated Diagnosis Engines which may produce fault diagnoses which can be acted upon not just for determining repair actions but also to isolate the fault before it affects system availability (e.g., to offline a cpu that is experiencing errors at a sustained rate). This architecture has been in use for some time now in the sparc world, and this project expands it to include AMD chips.

FMA for AMD64

To deliver FMA for AMD64 systems the project has:

  • made Solaris take responsibility (in addition to the BIOS) for deciding which error-detecting banks to enable and which error types will raise machine-check exceptions
  • taught Solaris how to recognize all the error types documented in the AMD BIOS and Kernel Developer's Guide
  • delivered an intelligent machine-check exception handler and periodic poller (for recoverable errors) which collect all error data available, determine what error type has occurred and propagate it for logging, and take appropriate action (if any)
  • introduced cpu driver modules to the Solaris x86 kernel (as have existed on sparc for many years) so that features of a particular processor family (such as the AMD processors) may be specifically supported
  • introduced a memory-controller kernel driver module whose job it is to understand everything about the memory configuration of a node (e.g., to provide translation from a fault address to which dimm is affected)
  • developed rules for consuming the error telemetry with the "eft" diagnosis engine; these are written using the "eversholt" diagnosis language, and their task is to diagnose any faults that the incoming telemetry may indicate
  • delivered an enhanced "platform topology" library to describe the inter-relationship of the hardware components of a platform and to provide a repository for hardware component properties
An earlier putback, also as part of this project, introduced SMBIOS support to Solaris so that we can have some access to platform details (type, slot labelling, configuration, ...). That list seems pretty small for what was a considerable effort - there's lots of detail to each item.

With all this in place we are now able to diagnose the following fault classes:

Fault Class Description
fault.cpu.amd.dcachedata DC data array fault
fault.cpu.amd.dcachetag DC main tag array fault
fault.cpu.amd.dcachestag DC snoop tag array fault
fault.cpu.amd.l1dtlb DC L1TLB fault
fault.cpu.amd.l2dtlb DC L2TLB fault
fault.cpu.amd.icachedata IC data array fault
fault.cpu.amd.icachetag IC main tag array fault
fault.cpu.amd.icachestag IC snoop tag array fault
fault.cpu.amd.l1itlb IC L1TLB fault
fault.cpu.amd.l2itlb IC L2TLB fault
fault.cpu.amd.l2cachedata L2 data array fault
fault.cpu.amd.l2cachetag L2 tag fault
fault.memory.page Individual page fault
fault.memory.dimm_sb A DIMM experiencing sustained excessive single-bit errors
fault.memory.dimm_ck A DIMM with ChipKill-correctable multiple-bit faults
fault.memory.dimm_ue A DIMM with an uncorrectable (not even with ChipKill, if present and enabled) multiple-bit fault

An Example - A CPU With Single-bit Errors

The system is a v40z with hostname 'parity' (we also have other cheesy hostnames such as 'chipkill', 'crc', 'hamming' etc!). It has 4 single-core Opteron cpus. If we clear all fault management history and let it run for a while (or give it a little load to speed things up) we very soon see the following message on the console:

SUNW-MSG-ID: AMD-8000-5M, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Mar 15 08:06:08 PST 2006
PLATFORM: i86pc, CSN: -, HOSTNAME: parity
SOURCE: eft, REV: 1.16
EVENT-ID: 1e8e6d7a-fd3a-e4a4-c211-dece00e68c65
DESC: The number of errors associated with this CPU has exceeded acceptable levels.
Refer to http://sun.com/msg/AMD-8000-5M for more information.
AUTO-RESPONSE: An attempt will be made to remove this CPU from service.
IMPACT: Performance of this system may be affected.
REC-ACTION: Schedule a repair procedure to replace the affected CPU.
Use fmdump -v -u <EVENT-ID> to identify the module.
Running the indicated command we see that cpu 3 has a fault:
# fmdump -v -u 1e8e6d7a-fd3a-e4a4-c211-dece00e68c65
TIME                 UUID                                 SUNW-MSG-ID
Mar 15 08:06:08.5797 1e8e6d7a-fd3a-e4a4-c211-dece00e68c65 AMD-8000-5M
  100%  fault.cpu.amd.l2cachedata

        Problem in: hc:///motherboard=0/chip=3/cpu=0
           Affects: cpu:///cpuid=3
               FRU: hc:///motherboard=0/chip=3
That tells us the resource affected (chip 3, cpu core 0), its logical identifier (cpuid 3, as used in psrinfo etc), and the field replaceable unit that should be replaced (the chip - you can't replace a core). In future we intend to extract FRU labelling information from SMBIOS, but at the moment there are difficulties with smbios data and its accuracy that make that harder than it should be.

If you didn't see or notice the console message then running fmadm faulty highlights resources that have been diagnosed as faulty:

# fmadm faulty
   STATE RESOURCE / UUID
-------- ----------------------------------------------------------------------
 faulted cpu:///cpuid=3
         1e8e6d7a-fd3a-e4a4-c211-dece00e68c65
-------- ----------------------------------------------------------------------
In Solaris 11 already and coming to Solaris 10 Update 2 is SNMP trap support for FMA fault events, which provides another avenue by which you can become aware of a newly-diagnosed fault.

We can see the automated response that was performed upon making the diagnosis of a cpu fault:

# psrinfo
0       on-line   since 03/11/2006 00:27:08
1       on-line   since 03/11/2006 00:27:08
2       on-line   since 03/10/2006 23:28:51
3       faulted   since 03/15/2006 08:06:08
The faulted resource has been isolated by offlining the cpu. If you reboot, the cache of faults will cause the cpu to be offlined again.
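
As an aside, the processor state is also visible (and manipulable) programmatically through the documented p_online(2) interface; here's a small hedged sketch that just queries the status of the cpu diagnosed above. The choice of cpu id and the reporting are illustrative.

#include <sys/types.h>
#include <sys/processor.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int
main(void)
{
        processorid_t cpu = 3;                  /* the cpu diagnosed above */
        int status = p_online(cpu, P_STATUS);   /* query only, no change   */

        if (status == -1) {
                (void) fprintf(stderr, "p_online: %s\n", strerror(errno));
                return (1);
        }

        (void) printf("cpu %d is %s\n", (int)cpu,
            status == P_FAULTED ? "faulted" :
            status == P_OFFLINE ? "offline" : "online (or other state)");
        return (0);
}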

Note that the event id appears in the fmadm faulty output, so you can formulate the fmdump command line shown in the console message yourself if you wish, and you can visit sun.com/msg and enter the indicated SUNW-MSG-ID (quick aside: we have some people working on beefing up the amd64 knowledge articles there - the current ones are pretty uninformative). We can also use the event id to see what error reports led to this diagnosis:

# fmdump -e -u 1e8e6d7a-fd3a-e4a4-c211-dece00e68c65
TIME                 CLASS
Mar 15 08:05:18.1624 ereport.cpu.amd.bu.l2d_ecc1     
Mar 15 08:04:48.1624 ereport.cpu.amd.bu.l2d_ecc1     
Mar 15 08:04:48.1624 ereport.cpu.amd.dc.inf_l2_ecc1  
Mar 15 08:06:08.1624 ereport.cpu.amd.dc.inf_l2_ecc1  
The -e option selects dumping of the error log instead of the fault log, so we can see the error telemetry that led to the diagnosis. We see that in the space of less than two minutes this cpu experienced 4 single-bit errors from the L2 cache - we are happy to tolerate occasional single-bit errors but not at this rate, so we diagnose a fault. If we use option -V we can see the full error report contents, for example for the last ereport above:
Mar 15 2006 08:06:08.162418201 ereport.cpu.amd.dc.inf_l2_ecc1
nvlist version: 0
        class = ereport.cpu.amd.dc.inf_l2_ecc1
        ena = 0x62a5aaa964f00c01
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = hc
                hc-list = (array of embedded nvlists)
                (start hc-list[0])
                nvlist version: 0
                        hc-name = motherboard
                        hc-id = 0
                (end hc-list[0])
                (start hc-list[1])
                nvlist version: 0
                        hc-name = chip
                        hc-id = 3
                (end hc-list[1])
                (start hc-list[2])
                nvlist version: 0
                        hc-name = cpu
                        hc-id = 0
                (end hc-list[2])

        (end detector)

        bank-status = 0x9432400000000136
        bank-number = 0x0
        addr = 0x5f76a9ac0
        addr-valid = 1
        syndrome = 0x64
        syndrome-type = E
        ip = 0x0
        privileged = 1
        __ttl = 0x1
        __tod = 0x44183b70 0x9ae4e19
One day we'll teach fmdump (or some new command) to mark all that stuff up into human-readable output. For now it shows the raw(ish) telemetry read from the machine check architecture registers when we polled for this event. This telemetry is consumed by the diagnosis rules to produce any appropriate fault diagnosis.
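
As an aside on the rate-based decision above ("occasional single-bit errors are fine, but not at this rate"), that is exactly the sort of thing a soft-error rate discrimination (SERD) engine expresses: declare a fault only when N errors arrive within time T. The fault manager provides SERD engines to diagnosis engines; the fragment below is just a hypothetical standalone C illustration of the idea, with made-up threshold values, and is not the fmd API.

#include <stdint.h>
#include <string.h>

#define SERD_N  3                       /* illustrative: 3 errors ...     */
#define SERD_T  (12ULL * 60 * 60)       /* ... within 12 hours (seconds)  */

struct serd {
        uint64_t times[SERD_N];         /* timestamps of the newest events */
        int count;
};

/* Record one error seen at time 'now'; return 1 if the engine fires. */
static int
serd_record(struct serd *sp, uint64_t now)
{
        if (sp->count < SERD_N) {
                sp->times[sp->count++] = now;
        } else {
                /* slide the window: forget the oldest timestamp */
                memmove(&sp->times[0], &sp->times[1],
                    (SERD_N - 1) * sizeof (sp->times[0]));
                sp->times[SERD_N - 1] = now;
        }

        return (sp->count == SERD_N && now - sp->times[0] <= SERD_T);
}

When an engine like that fires, the diagnosis engine publishes the corresponding fault event - fault.cpu.amd.l2cachedata in the example above.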

We're Not Done Yet

There are a number of additional features that we'd like to bring to amd64 fault management. For example:

  • use more SMBIOS info (on platforms that have SMBIOS support, and which give accurate data!) to discover FRU labelling etc
  • introduce serial number support so that we can detect when a cpu or dimm has been replaced (currently you have to perform a manual fmadm repair)
  • introduce some form of communication (if only one-way) between the service processor (on systems that have such a thing, such as the current Sun AMD offerings) and the diagnosis software
  • extend the diagnosis rules to perform more complex predictive diagnosis for DIMM errors based on what we have learned on sparc
Many of these are complicated by the historical lack of architectural standards for the "PC" platform. We have a solid start, but there are still plenty of functional and usability features we'd like to add.



Tuesday May 10, 2005

Fault Management Top 10: #2

Structured Error Events

When subsystems are left to invent their own error handling infrastructure the natural baseline they tend towards is either that of zero infrastructure (just die horribly on any error) or printf logging/debugging (some odd message that has little or no structure, is not registered in any repository, probably means little to anyone other than the developer, and often contains no useful diagnosis information).  I'll term this unstructured printf messaging for this discussion.
There was a time several years ago when Solaris hardware error handling amounted to little greater than either of those baselines.  "Fortunately" the UltraSPARC-II Ecache ... errmmm ... niggle provided the much-needed catalyst towards the very substantial improvements we see today (and a grand future roadmap).  I'll use this as an example of what I'll term structured printf messaging below.
Finally I will contrast these with structured error events.

Unstructured Printf Messaging

The following is an example message that would be barfed (I find that word really suitable in these descriptions) to /var/adm/messages and console in older/unpatched versions of Solaris prior to the first round of hardware error handling improvements.  The message is the result of a corrected error from memory.
May 8 14:35:30 thishost SUNW,UltraSPARC-II: CPU1 CE Error: AFSR 
0x00000000.00100000 AFAR 0x00000000.8abb5a00 UDBH Syndrome 0x85 MemMod U0904
May 8 14:35:30 thishost SUNW,UltraSPARC-II: ECC Data Bit 63 was corrected
May 8 14:35:30 thishost unix: Softerror: Intermittent ECC Memory Error, U0904
At least there is some diagnostic information, such as telling us which bit was in error (we could grok for patterns).  While you could argue that there is some structure to that message you still have to write/maintain a custom grokker to extract it.  The task, of course, is complicated by other related messages sharing little or no common structure.  Here's an uncorrectable error from memory:
panic[CPU1]/thread=2a1000R7dd40: UE Error: AFSR 0x00000000.80200000 AFAR
0x00000000.089cd740 Id 0 Inst 0 MemMod U0501 U0401
Making things worse, in the corrected memory error case, these things were poured out to the console on every occurrence.  The implicit expectation had been that you'd see very few such errors - not allowing for a bad pin, for instance.  So while the errors are correctable and survivable (albeit coming thick and fast) the handlers chose to spam the console, making it near useless!

Structured Printf Messaging

Taking the time to plan and coordinate error messaging can pay substantial dividends.  This was one of the first responses to the UltraSPARC-II Ecache problem - to analyse the requirements for error handling and diagnosis on that platform and, with an understanding of that umbrella-view in place, restructure the associated handling and messaging.  Here are correctable and uncorrectable memory errors in this improved format:
[AFT0] Corrected Memory Error on CPU1, errID 0x00000036.629edc25
 AFSR 0x00000000.00100000<CE> AFAR 0x00000000.00347dc0
 AFSR.PSYND 0x0000(Score 05) AFSR.ETS 0x00 Fault_PC 0x1002fe20
 UDBH Syndrome 0x85 Memory Module 1904
[AFT0] errID 0x00000036.629edc25 Corrected Memory Error on 1904 is Intermittent
[AFT0] errID 0x00000036.629edc25 ECC Data Bit 63 was in error and corrected

WARNING: [AFT1] Uncorrectable Memory Error on CPU1 Instruction access at TL=0, 
errID 0x0000004f.818d9280
 AFSR 0x00000000.80200000<PRIV,UE> AFAR 0x00000000.0685c7a0
 AFSR.PSYND 0x0000(Score 05) AFSR.ETS 0x00 Fault_PC 0x7815c7a0
 UDBH 0x0203<UE> UDBH.ESYND 0x03 UDBL 0x0000 UDBL.ESYND 0x00
 UDBH Syndrome 0x3 Memory Module 190x
WARNING: [AFT1] errID 0x0000004f.818d9280 Syndrome 0x3 indicates that this may 
not be a memory module problem
Behind the scenes a new shared error/fault information structure was introduced, and all kernel hardware error handlers were changed to accrue relevant information in an instance of one of those structures.  A new and robust (lockless) error dispatching mechanism would carry these structures to logging code which, instead of making multiple independent printf (cmn_err) calls, passed the error structure to a common logging function along with flags indicating what was relevant for the particular error.
Such messaging (along with many other improvements) significantly enhanced diagnosis and trend analysis.  You still had to grok messages files, but at least the messages were in a structured format that was quite readily extracted.  Of course there was still much to be done.  Programmatic access to the logged data was not readily available, syslog is not the most reliable message transport, syslog logs get aged away, harmless errors (say from an isolated cosmic ray incident) litter messages files and cause unnecessary alarm, etc.
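
To give a flavour of what "a shared structure plus a common logging function" means in practice, here is a hedged sketch; the structure, flags and function are invented for illustration (and use printf so the fragment stands alone) rather than being the actual Solaris async-fault code.

#include <stdint.h>
#include <stdio.h>

/* Flags indicating which fields are relevant for a particular error. */
#define ERRFL_AFAR      0x1
#define ERRFL_SYNDROME  0x2
#define ERRFL_UNUM      0x4

/* One structure accrued by every hardware error handler (illustrative). */
struct errinfo {
        uint64_t    afsr;               /* fault status register           */
        uint64_t    afar;               /* fault address, if ERRFL_AFAR    */
        uint16_t    syndrome;           /* ECC syndrome, if ERRFL_SYNDROME */
        const char *unum;               /* memory module, if ERRFL_UNUM    */
};

/* A single logging routine replacing scattered per-handler message calls. */
static void
err_log(const struct errinfo *ep, int flags, const char *what)
{
        (void) printf("[AFT] %s AFSR 0x%016llx", what,
            (unsigned long long)ep->afsr);
        if (flags & ERRFL_AFAR)
                (void) printf(" AFAR 0x%016llx", (unsigned long long)ep->afar);
        if (flags & ERRFL_SYNDROME)
                (void) printf(" Syndrome 0x%x", ep->syndrome);
        if (flags & ERRFL_UNUM)
                (void) printf(" Memory Module %s", ep->unum);
        (void) printf("\n");
}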

Structured Error Events

Roll on the structured events described in my last blog entry.  When we prepare an error report we tag it with a pre-registered name (event class) and fill it with all relevant (and also pre-registered) event payload data.  That event payload data is presented to us as a name-value list (really a name-type-value list).  When it is received by the fault manager it is logged away (independently of any diagnosis modules) in a structured error log (binary; records are kept in the extended accounting format and the record data is in the form of name-value lists).  Here is a corrected memory event dumped from the error log using fmdump(1M):
TIME                          CLASS
Jun 23 2004 02:59:03.207995640 ereport.cpu.ultraSPARC-IIIplus.ce
nvlist version: 0
        class = ereport.cpu.ultraSPARC-IIIplus.ce
        ena = 0xd67a7253e3002c01
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = cpu
                cpuid = 0xb
                cpumask = 0x23
                serial = 0xd2528092482
        (end detector)

        afsr = 0x200000187
        afar-status = 0x1
        afar = 0x23c8f72dc0
        pc = 0x10391e4
        tl = 0x0
        tt = 0x63
        privileged = 1
        multiple = 0
        syndrome-status = 0x1
        syndrome = 0x187
        error-type = Persistent
        l2-cache-ways = 0x2
        l2-cache-data = 0xec0106f1a6
0x772dc0 0x0 0xbd000000 0xe8 0xc05b9 0x840400003e9287b6 0xc05b9
0x850400003e92a7b6 0xfeb5 0xc05b9 0x860400003e92c7b6 0xc05b9
0x870400003e92e7b6 0x3dd24 0xec0106f1a6 0x372dc0 0x1 0xbd000000 0xe8
0x0 0x0 0x0 0x4000000001 0x2e 0x0 0x0 0x0 0x0 0x0
        dcache-ways = 0x0
        icache-ways = 0x0
        resource = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = mem
                unum = /N0/SB4/P3/B1/D2 J16501
        (end resource)

In software (such as a diagnosis module), given a name-value list we can access the associated data either through generic event walkers (use the libnvpair(3LIB) API to iterate over the list's pairs, get the type of each pair, and look up its value for that type) or by direct lookup - for example, in the case of a diagnosis module that has subscribed to corrected memory error events and knows, from the registry, what payload data and types to expect:

int
analyze_error(nvlist_t *nvl, ...)
{
        uint64_t afar;
        uint16_t synd;
        char *type;

        /* look up the expected payload members by name and type */
        if (nvlist_lookup_pairs(nvl, 0,
            "afar", DATA_TYPE_UINT64, &afar,
            "syndrome", DATA_TYPE_UINT16, &synd,
            "error-type", DATA_TYPE_STRING, &type,
            NULL) != 0)
                return (BAD_PAYLOAD);
        ...
}

Bye-bye hacky grokking scripts - we now have self-describing error event information!
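
For completeness, the "generic event walker" route mentioned above looks roughly like the following sketch, built on the documented libnvpair(3LIB) iteration calls; the printing choices and the subset of types handled are mine.

#include <libnvpair.h>
#include <stdio.h>

/* Walk every pair in an event payload and print name = value. */
static void
dump_event(nvlist_t *nvl)
{
        nvpair_t *nvp = NULL;

        while ((nvp = nvlist_next_nvpair(nvl, nvp)) != NULL) {
                const char *name = nvpair_name(nvp);

                switch (nvpair_type(nvp)) {
                case DATA_TYPE_UINT64: {
                        uint64_t v;
                        (void) nvpair_value_uint64(nvp, &v);
                        (void) printf("%s = 0x%llx\n", name,
                            (unsigned long long)v);
                        break;
                }
                case DATA_TYPE_STRING: {
                        char *s;
                        (void) nvpair_value_string(nvp, &s);
                        (void) printf("%s = %s\n", name, s);
                        break;
                }
                case DATA_TYPE_NVLIST: {
                        nvlist_t *sub;
                        (void) nvpair_value_nvlist(nvp, &sub);
                        (void) printf("%s = (embedded nvlist)\n", name);
                        dump_event(sub);        /* recurse into detector etc */
                        break;
                }
                default:
                        (void) printf("%s = <unhandled type %d>\n", name,
                            (int)nvpair_type(nvp));
                        break;
                }
        }
}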

Why Do We Care?

I'm hoping this message is getting across.  In the past, with no structured event protocol and no standards for error collation, doing it properly was just too difficult.  Subsystems would just deliver what was adequate during development, leave random printf debugging in place, perform no higher-level correlation of errors, etc.  Now they/we have no excuse - all the infrastructure is present (and easy to use) and it is vastly simpler to deliver high level error and fault analysis for your product (whatever it is - I keep offering examples from hardware errors but all this is generic).  Watch the Sun space for many leaps forward in fault management as a result.

Thursday Apr 28, 2005

Fault Management Top 10: #1

Solaris Fault Management Top 10: New Architecture Adopted

Time to get my promised fault management top 10 off the ground.
Sun has adopted an overarching Fault Management Architecture (FMA) to extend across its product line (of course it will take some time before it is realized in all products!).  Before we dive in let me say that I am not the architect of this architecture - I'm just a keen fan and beneficiary.
I'll describe some basic elements of the architecture below, but let's begin with a review of the architecture it replaces:

Outgoing Fault Management Architecture

(This space intentionally left blank)

OK it wasn't quite as bad as that, but it is true to say that we have not previously enjoyed any formal or consistent fault management model or architecture and that what functionality there was was rather ad-hoc and (in Solaris, and generally across the product line) focussed on error handling and not on fault diagnosis:
  • There was no centralised view of errors - each error handler would do the necessary handling as prescribed (e.g., rewrite a memory address in response to a correctable error from memory) but attempts at higher level monitoring of trends within these error handlers (e.g., are all errors from the same DIMM, all suffering the same bit in error, all reported by the same cpu and never any other cpu, etc)  were  somewhat limited and clumsy since there was no framework for collation and communication of error reports and trends.
  • Error handlers are part of the kernel, and that is not the best or easiest place to perform high-level trend analysis and diagnosis.  But the only communication up to userland level was of unstructured messages to syslog, which hardly facilitates analysis in userland.  With diagnosis difficult in the kernel, error handlers concentrated on error handling (surprise!) and not on analysing error trends to recognise faults - or where they did they were necessarily somewhat limited.
  • Error messages were sent to syslog and barfed to the console.  If you suffered a zillion errors but nonetheless survived, your console might be spammed with a zillion messages (of a few lines each).
  • Don't think just of hardware errors.  Consider the example of a driver encountering some odd condition, again and again.  Typically it would use cmn_err(9F) to barf some warning message to the console, repeatedly.  Chances are that it did not count those events (e.g., wouldn't know if a given event were the first or the 100th), did not kick off any additional diagnostics to look any deeper, would take no corrective action etc.  While some drivers do a better job they were obliged to invent the infrastructure for themselves, and there'd be no re-use in other drivers.
  • The list goes on.  There's some overlap with my previous post here: things were in this state because error handling and fault diagnosis were an afterthought in unix, subscribing to the "errors shouldn't happen - why design for them?" doctrine.
Those points are very much Solaris (and Solaris kernel) centric, but they're representative of the overall picture as it was.

New Fault Management Architecture

Let's start with a simple picture to establish some terminology:

  • An error is some unexpected/anomalous/interesting condition/result/signal/datum; it may be observed/detected or go undetected.  An example might be a correctable memory error from ECC-protected memory.
  • A fault is a defect that may produce an error.  Not all errors are necessarily the result of a defect.  For example a cosmic ray may induce an isolated few correctable memory errors, but there is no fault present;  on the other hand a DIMM that has suffered electrostatic damage in mishandling may produce a stream of correctable errors - there is a fault present.
  • Error Handlers respond to the detection or observation of an error.  Examples are:
    • a kernel cpu error trap handler -  a hardware exception that is raised on the cpu in response to  some hardware detection of an error (say an uncorrectable error from a memory DIMM),
    • a SCSI driver detects an unexpected bus reset
    • an application performs an illegal access and gets a SIGSEGV and has installed a signal handler to handle it
    • svc.startd(1M) decides that a service is restarting too frequently
    The error handler takes any required immediate action to properly handle the error, and then (if it wishes to produce events into the fault manager) collects what error data is available and prepares an error report for transmission to the Fault Manager.
  • The Error Report (data gathered in response to the observation of an error) is encoded according to a newly defined FMA event protocol to make an Error Event, which is then transported towards the Fault Manager.
  • The Fault Manager is a software component that is responsible for fault diagnosis through pluggable diagnosis engines.  It is also responsible for logging the error events it receives and for maintaining overall fault management state.  The fault manager receives error events from various error handlers through various transport mechanisms (the error handlers and the fault manager may reside in quite distinct locations, e.g. an error handler within the Solaris kernel and a fault manager running within the service processor or on another independent system). 
  • The FMA event protocol, among many other things, arranges events in a single hierarchical tree (intended to span all events across all Sun products) with each node named by its event class.  Events include error events as produced by the error handler and fault events as produced by the fault manager when a fault is diagnosed (events are generic and are certainly not limited to error and fault events).  The event protocol also specifies how event data is encoded.  Continuing our correctable memory error example, such an error may arrive as an error event of class ereport.cpu.ultraSPARC-IV.ce and have corresponding event payload data (which we'll take a look at another day).
  • Pluggable Diagnosis Engine clients of the fault manager subscribe to incoming error telemetry for event classes that they have knowledge of (i.e., are able to assist in the diagnosis of associated faults).  They apply various algorithms to the incoming error telemetry and attempt to diagnose any faults that are present in the system; when a fault is diagnosed a corresponding fault event is published (a skeletal module of this kind is sketched just after this list).
  • FMA Agents subscribe to fault events produced by the diagnosis engines and may take any appropriate remedial or mitigating action.
That's enough terminology for now.
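
To make the diagnosis engine and agent roles a touch more concrete, here is a skeletal fault manager module, written from memory of the fmd module API; treat the exact structure members and registration details as approximate, and note that the event-class subscriptions themselves normally live in the module's .conf file rather than the code.

#include <fm/fmd_api.h>

/* Called for each incoming event of a class this module subscribes to. */
static void
example_recv(fmd_hdl_t *hdl, fmd_event_t *ep, nvlist_t *nvl, const char *class)
{
        fmd_hdl_debug(hdl, "received %s\n", class);
        /* ... apply a diagnosis algorithm; eventually publish a fault ... */
}

static const fmd_hdl_ops_t example_ops = {
        example_recv,   /* fmdo_recv */
        NULL,           /* fmdo_timeout */
        NULL,           /* fmdo_close */
        NULL,           /* fmdo_stats */
        NULL            /* fmdo_gc */
};

static const fmd_hdl_info_t example_info = {
        "example-de", "0.1", &example_ops, NULL
};

void
_fmd_init(fmd_hdl_t *hdl)
{
        (void) fmd_hdl_register(hdl, FMD_API_VERSION, &example_info);
}

void
_fmd_fini(fmd_hdl_t *hdl)
{
        /* release any module-private state here */
}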

The Fault Manager(s)

The fault manager plays a central role in the architecture.  Furthermore, fault managers may be arranged hierarchically with events flowing between them.  For example one fault manager instance might run within each Solaris instance (say on domains A and B within a Sun-Fire 6900), and another fault manager instance might run on the system controller for the platform.  Events that are detected within the running OS image can produce error reports into the domain fault managers; events that are detected within the componentry managed by the SC (say an error observed on an ASIC that Solaris does not know exists and for which the error is corrected before it gets anywhere near Solaris) can produce ereports that flow towards the SC fault manager.  And since these things are never entirely independent it is interesting for the SC to know what the domain has seen (and vice versa) so, for example, the domain fault manager might produce events consumed by the SC fault manager.

An Example - Solaris FMA Components

Let's look a bit closer at some of the FMA infrastructure as realized in Solaris 10 (keeping in mind that FMA stretches way beyond just the OS).  It's probably most illustrative to contrast old with new all along, so I'll approach it that way.
Let's consider the example of a correctable error from memory.  The following isn't real Solaris code (that's coming soon) and is a substantial simplification but it gives the flavour.

Old way versus new way, step by step:
CPU traps to notify of a recent correctable error event.  The trap handler gathers data that is considered relevant/interesting such as the detector details (variable cpuid, an integer), address of the error (variable afar, a uint64_t), the error syndrome (variable synd, an integer), which DIMM holds that address (loc, a string), whether the error could be eliminated by a rewrite (status, a boolean), etc.  We now want to forward this data for pattern and trend analysis.
Old: Vomit a message to the syslog with code that amounts to little more than the following:

cmn_err(CE_WARN, "Corrected Memory Error detected by CPU 0x%x, "
"address 0x%llu, syndrome 0x%x (%s bit %d corrected), "
"status %s, memory location %s ",
cpuid, afar, synd, synd_to_bit_type(synd), synd_to_bit_number(synd), status ? "Fixable" : "Stuck",  loc);
New: Create some name-value lists which we'll populate with the relevant information and submit to our event transport:

dtcr = nvlist_create();
payload = nvlist_create();


We're not going to barf a message into the ether and hope for the best!  We're going to produce an error event according to the event protocol (implemented in various support functions which we'll use here).  We start by recording version and event class information:

/*
 * Event class naming is performed according to entries in a formal
 * registry.  For this case assume the registered leaf node is
 * "mem-ce".  We'll prepend a prefix along the lines of
 * "ereport.cpu.ultraSPARC-IIIplus" to form the full event class name.
 */
payload_class_set(payload, cpu_gen(cpuid, "mem-ce"));


Populate the detector nvlist with information regarding the cpu and add this info into the payload:

(void) cpu_set(dtcr, cpuid, cpuversion(cpuid));
payload_set(payload, "detector", TYPE_NVLIST, dtcr);

Populate the payload with the error information:

payload_set(payload, "afar", TYPE_UINT64, afar);
payload_set(payload, "syndrome", TYPE_UINT16, synd);
payload_set(payload, "status", TYPE_BOOLEAN, status);
payload_set(payload, "location", TYPE_STRING, loc);


Submit the error report into our event transport:

event_submit(payload);

Old: The /var/adm/messages file and possibly the console get spammed with a message that might look like this:

Corrected Memory Error detected by cpu 0x38,
address 0x53272400, syndrome 0x34 (data bit 59 corrected),
status Fixable, memory location /N0/SB4/P0/B1/S0 J13301

If syslogd is having a bad day at the office it may even lose this message - e.g., if it reads the message from the kernel and then dies before logging it.  Assuming it does arrive, it has also reached the end of the road - any further processing is going to rely on grokking through messages files.
New: The event transport used to safely transport the event from the kernel to the userland fmd is the sysevent transport.  The error event arrives at the fault manager for further processing.
New: The error event is logged by fmd.  If we were to look at the logged event with fmdump(1M) it would appear along these lines:

May 04 2005 16:26:48.718581240 ereport.cpu.ultraSPARC-IIIplus.mem-ce
nvlist version: 0
class = ereport.cpu.ultraSPARC-IIIplus.mem-ce
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = cpu
cpuid = 0x38
afar = 0x53272400
syndrome = 0x34
status = 1
location = /N0/SB4/P0/B1/S0 J13301
Of course a real ereport has a bit more complexity than that.
New: The fault manager supplies a copy of the event to each plugin module that has subscribed to this event class.  Each receives it as a name-value list and can access all the data fields through the API provided by libnvpair(3LIB).
Old: No userland-level diagnosis is performed, other than ad hoc processing of messages files by various custom utilities.  The kernel error handlers do perform limited diagnosis but, since the kernel context is necessarily somewhat limited, these are not sophisticated and each handler tends to roll its own approach.
New: Diagnosis Engine modules can collate statistics on the event stream.  The fmd API provides the infrastructure that a diagnosis engine might want in keeping track of various error and fault information.  High-level diagnosis languages are implemented in custom diagnosis engine modules, allowing diagnosis rules etc to be specified in a high level language rather than coding to a C API.

New: The DE may eventually decide that there is a fault (e.g., it notices that all events are coming from the same page of memory) and it will publish a fault event.  We can use fmdump to see the fault log.

New: An agent subscribing to the fault event class can retire the page of memory so that it is no longer used.

The example is incomplete (on both sides) but fair.  I hope it illustrates how unstructured the old regime is, and how structured and how extendable the new is.  Getting additional telemetry out of the kernel into a module that can consume that data to make intelligent diagnosis decisions is pretty trivial now.
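
For comparison with the pseudo-code above, building an equivalent payload in userland with the real libnvpair(3LIB) calls looks roughly like this; the class string and member names are lifted from the example, while the final nvlist_print() merely stands in for the event submission step, which I've left out.

#include <libnvpair.h>
#include <stdio.h>

int
main(void)
{
        nvlist_t *payload, *detector;

        if (nvlist_alloc(&payload, NV_UNIQUE_NAME, 0) != 0 ||
            nvlist_alloc(&detector, NV_UNIQUE_NAME, 0) != 0)
                return (1);

        /* detector: which cpu observed the error */
        (void) nvlist_add_string(detector, "scheme", "cpu");
        (void) nvlist_add_uint32(detector, "cpuid", 0x38);

        /* the error report itself */
        (void) nvlist_add_string(payload, "class",
            "ereport.cpu.ultraSPARC-IIIplus.mem-ce");
        (void) nvlist_add_nvlist(payload, "detector", detector);
        (void) nvlist_add_uint64(payload, "afar", 0x53272400ULL);
        (void) nvlist_add_uint16(payload, "syndrome", 0x34);
        (void) nvlist_add_boolean_value(payload, "status", B_TRUE);
        (void) nvlist_add_string(payload, "location",
            "/N0/SB4/P0/B1/S0 J13301");

        nvlist_print(stdout, payload);  /* stand-in for event submission */

        nvlist_free(detector);
        nvlist_free(payload);
        return (0);
}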

Thursday Mar 10, 2005

Error Handling Philosophy: It Happens

In my first entry I referred to error handling and fault management as appearing "unglamorous".  What do I mean by that?  Well, of the more than three-quarters of a million downloads of Solaris 10 to date, in how many cases do you think the uppermost question to be answered in subsequently trying it out was "Let's see how well this baby handles hardware faults?".  Answer: not enough.

There is, of course, an informed minority who recognise the value and necessity of proper error and fault handling.  But it seems that too many people subscribe to the quaint fallacy that "hardware errors should not happen, and where they do they indicate a vendor design or manufacturing flaw".  Hmmm, even given a perfect chip design it's more than a little counter-intuitive to believe that we can squeeze a few gazillion transistors onto a tiny chip and expect perfect operation from what is a physical process.  The reality is that electronics, like software, is imperfect both in design and in implementation.  Moreover it is realized in a physical medium and is subject to the laws of physics more than the laws of your data centre or desktop!  Now take all these components and imagine assembling them even into a simple system - opportunities abound!

So rather than stick your head in the sand, accept that hardware errors do happen, that they are expected, and that (whatever your hardware, from whoever) they near certainly have happened to you.  Whether or not you or anything actually noticed and, if so, did anything about it is quite another question.  We're into the domain of detection, correction, and remedial action.

Good hardware components and systems are designed to, at the very least, detect the presence of errors they have suffered (they detect the consequence of the event, say a bit flip, rather than the event itself which may have occurred some time earlier).  I'm always amused by the existence of "non-parity memory" for many PC systems (especially those sometimes quoted in the "build your own server for a few hundred bucks and run your business on it - why pay a vendor" articles).  "Non-parity" makes it sound like a feature, not an omission; like "non-fattening" foods are a good thing so "non-parity" must be good.  Lacking even parity protection means that your data in memory is completely unprotected - if it is corrupted by a bit flip the only way you'll ever know is if you notice "cheap" spelled as "nasty" in your precious document (and many would blame the application), or if higher-level application software is performing checksumming on your data for you (not common).  The system has silently corrupted your data and allowed use of that corrupted data as if it had been good, and you're none the wiser!

Of course any self-respecting system nowadays has ECC on memory (typically single-bit-error correction and dual-bit-error detection without correction).  OK, so my cheapo home PC doesn't but that's only used for private email so I'll get over it - but I wouldn't run anything important on there.  A self-respecting system also has data protection, and increasingly correction, on datapaths, ASICs, CPU caches etc.

At its simplest, system error-handling software need only do what is required by the architecture to correctly survive the error.  This may range from nothing at all (hardware detected and corrected a single bit error and, optionally, let you know about its good deed) to cache flushes, memory rewrites etc.  But we want to do a great deal more than that.  We want to log the event, look for patterns and trends, predict impending uncorrectable failures and "head them off at the pass" (apologies) through preemptive actions, classify errors (e.g., transient or not going away) and so on.

Furthermore, if we accept that "it happens" - errors and faults will occur - we should also accept that they're not always going to be neat and tidy.  For example a particle strike that upsets a single memory cell is easily handled - it's an isolated event and the overhead of handling it (correction, logging, collating in case this is a part of a pattern) is trivial - but a damaged or compromised memory chip (manufacturing defect, electrostatic handling failures, nuts and bolts loose in the system, poor memory slot insertion, coffee in the memory slot etc) may produce a "storm" of errors - do you want the system to spend all its time logging and collating those or would you prefer it also do some useful work on your application? 

While a single memory chip problem will only affect accesses that involve that memory, think what happens when such problems beset the CPU - say a "stuck" bit (always reads 0 even after you write a 1 to it, or vice-versa) is in a primary (on-chip) cache such as the data cache.  Such a failure is going to generate many many error events - how do you make sure nobody notices (i.e., no data is corrupted, everybody runs at full-speed etc)?

Of course some failures are simply not correctable.  They may not involve extremes of storms and stuck bits - just 2 flipped bits in a memory word cannot be corrected with most ECC codes currently in use.  Data may be lost (e.g., if it can't be recovered from another copy elsewhere) and process or even operating system state may be compromised.  How do you contain the damage done with least interruption to system service?

I hope I've made the beginnings of a convincing case for why error and fault handling as a first-class citizen is essential in modern systems and operating systems.  In followup posts (probably after I get some such software putback-ready) I'll continue to make the case, describe where Sun is at and where we're going in this arena, etc.  It's perhaps less glamorous working in the cesspit of things that we'd prefer did not happen, but since they do happen and will continue to with current technology it's certainly sexy when you can handle them with barely a glitch to the system or predict the occurrence and have already taken steps to contain the damage.

Tuesday Feb 01, 2005

Top Solaris 10 features for fault management

A number of Sun bloggers have posted top N lists of cool and new features in Solaris 10 (e.g., in Adam's blog). I thought I'd have a go at a Solaris 10 top 10 from the error handling and fault management and "Predictive Self-Healing" point of view, and then go into each item in a bit more detail in future entries.
  1. Sun has adopted a fault management architecture and Solaris 10 delivers the first (of many planned) offerings implementing this architecture in the fault management  daemon  (part of the svc:/system/fmd:default service).
  2. Error event handlers now propagate structured error reports to the fault manager where they are logged in perpetuity.
  3. Diagnosis Engines now exist to automate much diagnosis.
  4. Agent software can implement various policies given a diagnosis, e.g. to offline and blacklist a cpu or to retire some memory.
  5. Only diagnoses will appear on consoles etc, and they reference web-based knowledge articles.
  6. The contract filesystem ctfs provides a mechanism by which we can communicate hardware errors to groups of affected processes.
  7. The Service Management Facility is available to manage services affected by errors.
  8. Error trap handlers, now that there is a clear separation of responsibilities, are more robust.
  9. Getting error telemetry out of the kernel is dead easy now.
  10.  Fault management is no longer an afterthought!  And it is set to grow and grow.
About

I work in the Fault Management core group; this blog describes some of the work performed in that group.
