Friday Apr 01, 2011
By user12652883 on Apr 01, 2011
Something you may or may not have noticed in the Oracle Solaris 11 Express release is the ability to monitor SMF service instance transitions, such as to maintenance state or from online to disabled. This leverages the Solaris FMA notification daemons, which were introduced at the same time as this SMF support (previously SNMP notification for email was delivered as an fmd plugin, and there was no email notification).
First thing to know is that the notification daemons are packaged separately from the fault manager itself. There are separate packages for SMTP notification and for SNMP notification, and each delivers a corresponding SMF service:

    # pkg install smtp-notify
    # pkg install snmp-notify
    # svcadm enable smtp-notify
    # svcadm enable snmp-notify
    # svcs smtp-notify
    # svcs snmp-notify

You need only install the mechanism(s) you want. If the packages are absent you can still configure notification preferences with svccfg as below, but notifications will not work until the packages are added and the services are online.
Next you need to set up your notification preferences. Like many things nowadays these are stored in SMF, and configured with the svccfg command line. See the svccfg(1M) manpage for the setnotify, listnotify and delnotify subcommands, as well as smtp-notify(1M) and snmp-notify(1M). For the die-hards there is more detail in smf(5).
While you can be notified of any transition, and can configure per-service preferences in addition to global ones, most commonly it will be transitions to the maintenance state that are configured for global notification:

    # svccfg listnotify -g
    # svccfg setnotify -g to-maintenance mailto:$EMAIL
    # svccfg listnotify -g
        Event: to-maintenance (source: svc:/system/svc/global:default)
            Notification Type: smtp
                Active: true
                to: john.q.citizen-AT-elsewhere-DOT-com

That configures email notification whenever any service instance transitions to maintenance. You can configure similar global actions for other transitions.
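As an aside, listnotify output like the above is easy to post-process. Here is a small Python sketch (purely illustrative - not part of Solaris, and it assumes exactly the layout shown above) that pulls each event's type, state and recipient into a dict:

```python
import re

def parse_listnotify(text):
    """Parse `svccfg listnotify` output into a list of dicts.

    A toy parser for illustration only; it assumes the layout shown
    above (Event / Notification Type / Active / to lines).
    """
    events = []
    current = None
    for line in text.splitlines():
        line = line.strip()
        m = re.match(r"Event: (\S+) \(source: (\S+)\)", line)
        if m:
            current = {"event": m.group(1), "source": m.group(2)}
            events.append(current)
        elif line.startswith("Notification Type:") and current:
            current["type"] = line.split(":", 1)[1].strip()
        elif line.startswith("Active:") and current:
            current["active"] = line.split(":", 1)[1].strip() == "true"
        elif line.startswith("to:") and current:
            current["to"] = line.split(":", 1)[1].strip()
    return events

sample = """\
Event: to-maintenance (source: svc:/system/svc/global:default)
    Notification Type: smtp
        Active: true
        to: john.q.citizen-AT-elsewhere-DOT-com
"""
print(parse_listnotify(sample))
```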
If you only want to monitor particular services, or want different actions for different services (such as emailing different aliases), then drop the -g (global) option and name the instance you are interested in:
# svccfg -s svc:/network/http:apache22 setnotify to-offline snmp:active
Finally, you can be notified of FMA problem lifecycle events using the same mechanism:

    # svccfg setnotify problem-diagnosed mailto:$EMAIL

The manpages detail the other lifecycle events.
Tuesday Sep 16, 2008
By user12652883 on Sep 16, 2008
We recently completed the first phase of re-enabling traditional Solaris Fault Management features on x86 systems running the Solaris xVM Hypervisor; this support was integrated into Solaris Nevada build 99, and will from there find its way into a future version of the xVM Server product (post 1.0).
In such systems prior to build 99 there is essentially no hardware error handling or fault diagnosis. Our existing Solaris features in this area were designed for native ("bare metal") environments in which the Solaris OS instance has full access to and control over the underlying hardware. When a type 1 hypervisor is inserted it becomes the privileged owner of the hardware, and native error handling and fault management capabilities of a dom0 operating system such as Solaris are nullified.
The two most important sources of hardware errors for our purposes are cpu/memory and IO (PCI/PCI-E). This first phase restores cpu/memory error handling and fault diagnosis to near-native capability. A future phase will restore IO fault diagnosis.
Both AMD and Intel systems are supported equally in this implementation. The examples below are from an AMD lab system; details on an Intel system differ only trivially.
A Worked Example - A Full Fault Lifecycle Illustration
We'll follow a cpu fault through its full lifecycle. I'm going to include sample output from administrative commands only, and I'm not going to look "under the hood" - the intention is to show how simple this is and hide the under-the-hood complexities!
I will use my favourite lab system - a Sun Fire v40z server with 4 x AMD Opteron processors, one of which has a reliable fault that produces no end of correctable instruction-cache and l2-cache single-bit errors. I've used this host "parity" in a past blog entry - if you contrast the procedure and output there with those below you'll (hopefully!) see we've come a long way.
The administrative interface for Solaris Fault Management is unchanged with this new feature - you now just run fmadm on a Solaris dom0 instead of natively. The output below is from host "parity" dom0 (in an i86xpv Solaris xVM boot).
Notification Of Fault Diagnosis - (dom0) Console, SNMP
Prior to diagnosis of a fault you'll see nothing on the system console. Errors are being logged in dom0 and analysed, but we'll only trouble the console once a fault is diagnosed (console spam is bad). So after a period of uptime on this host we see the following on the console:
    SUNW-MSG-ID: AMD-8000-7U, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: Mon Sep 15 20:54:06 PDT 2008
    PLATFORM: Sun Fire V40z, CSN: XG051535088, HOSTNAME: parity
    SOURCE: eft, REV: 1.16
    EVENT-ID: 8db62dbe-a091-e222-a6b4-dd95f015b4cc
    DESC: The number of errors associated with this CPU has exceeded acceptable levels.  Refer to http://sun.com/msg/AMD-8000-7U for more information.
    AUTO-RESPONSE: An attempt will be made to remove this CPU from service.
    IMPACT: Performance of this system may be affected.
    REC-ACTION: Schedule a repair procedure to replace the affected CPU.  Use 'fmadm faulty' to identify the CPU.

If you've configured your system to raise SNMP traps on fault diagnosis you'll also be notified by whatever software has been plumbed in to that mechanism.
Why, you ask, does the console message not tell us which cpu is faulty? That's because the convoluted means by which we generate these messages (convoluted to facilitate message localization in localized versions of Solaris) restricts us to a limited amount of dynamic content - an improvement in this area is in the works.
Show Current Fault State - fmadm faulty
For now we follow the advice and run fmadm faulty on dom0:

    dom0# fmadm faulty
    --------------- ------------------------------------  -------------- ---------
    TIME            EVENT-ID                              MSG-ID         SEVERITY
    --------------- ------------------------------------  -------------- ---------
    Sep 15 20:54:06 8db62dbe-a091-e222-a6b4-dd95f015b4cc  AMD-8000-7U    Major

    Fault class : fault.cpu.amd.icachedata
    Affects     : hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=3/core=0/strand=0
                      faulted and taken out of service
    FRU         : "CPU 3" (hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=3)
                      faulty

    Description : The number of errors associated with this CPU has exceeded acceptable levels.  Refer to http://sun.com/msg/AMD-8000-7U for more information.

    Response    : An attempt will be made to remove this CPU from service.

    Impact      : Performance of this system may be affected.

    Action      : Schedule a repair procedure to replace the affected CPU.  Use 'fmadm faulty' to identify the CPU.

Note the "faulted and taken out of service" - we have diagnosed this resource as faulty and have offlined the processor at the hypervisor level. This system supports cpu and memory FRU labels, so all that hc://... mumbo-jumbo can be ignored - the thing to replace is the processor in the socket labelled "CPU 3" (on the board, or on the service sticker affixed to the chassis lid, or both - it depends on the system).
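If you wanted to script around this - say, to feed a ticketing system - the FRU label and event UUID are straightforward to scrape from fmadm faulty output. A hypothetical Python sketch, assuming the layout shown above:

```python
import re

def fru_from_fmadm(text):
    """Pull the FRU label and event UUID out of `fmadm faulty` output.

    Illustrative only; assumes the output layout shown above.
    """
    fru = re.search(r'FRU\s*:\s*"([^"]+)"', text)
    uuid = re.search(
        r'\b([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})\b', text)
    return (fru.group(1) if fru else None,
            uuid.group(1) if uuid else None)

sample = '''
Sep 15 20:54:06 8db62dbe-a091-e222-a6b4-dd95f015b4cc  AMD-8000-7U    Major
Fault class : fault.cpu.amd.icachedata
FRU         : "CPU 3"
'''
print(fru_from_fmadm(sample))
# -> ('CPU 3', '8db62dbe-a091-e222-a6b4-dd95f015b4cc')
```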
Replace The Bad CPU
Schedule downtime to replace the chip in location "CPU 3". Software can't do this for you!
Declare Your Replacement Action
This is an unfortunate requirement: x86 systems do not support cpu serial numbers (the privacy world rioted when Intel included such functionality and enabled it by default), so we cannot auto-detect the replacement of a cpu FRU. On sparc systems, and on Sun x86 systems for memory dimms, we do have FRU serial number support and this step is not required.

    dom0# fmadm replaced "CPU 3"
    fmadm: recorded replacement of CPU 3
When the replacement is declared as above (or, for FRU types that have serial numbers, at reboot following FRU replacement) the console will confirm that all aspects of this fault case have been addressed. At this point the cpu has been brought back online at the hypervisor level.
Did You See Any xVM In That?
No, you didn't! Native/xVM support is seamless. The steps and output above did not involve any xVM-specific steps or verbiage. Solaris is running as a dom0, really!
Actually there are a few differences elsewhere. Most notable is that psrinfo output will not change when the physical cpu is offlined at the hypervisor level:

    dom0# uname -a
    SunOS parity 5.11 on-fx-dev i86pc i386 i86xpv
    dom0# psrinfo
    0       on-line   since 09/09/2008 03:25:15
    1       on-line   since 09/09/2008 03:25:19
    2       on-line   since 09/09/2008 03:25:19
    3       on-line   since 09/09/2008 03:25:19

That's because psrinfo reports the state of virtual cpus, not physical cpus. We plan to integrate functionality to query and control the state of physical cpus, but that is yet to come.
Another Example - Hypervisor Panic
In this case the processor suffers an uncorrected data cache error, losing hypervisor state. The hypervisor handles the exception, dispatches telemetry towards dom0, and initiates a panic. The system was booted under Solaris xVM at the time of the error:
Note that as part of the panic flow dom0 has retrieved the fatal error telemetry from the hypervisor address space, and it dumps the error report to the console. Hopefully we won't have to interpret that output - it's only there in case the panic dump fails (e.g., no dump device). The ereport is also written to the system dump device at a well-known location, and the fault management service will retrieve this telemetry when it restarts on reboot.

    Xen panic[dom=0xffff8300e319c080/vcpu=0xffff8300e3de8080]: Hypervisor state lost due to machine check exception.
    rdi: ffff828c801e7348 rsi: ffff8300e2e09d48 rdx: ffff8300e2e09d78 rcx: ffff8300e3dfe1a0
     r8: ffff828c801dd5fa  r9: 88026e07f rax: ffff8300e2e09e38 rbx: ffff828c8026c800
    rbp: ffff8300e2e09e28 r10: ffff8300e319c080 r11: ffff8300e2e09cb8 r12: 0
    r13: a00000400 r14: ffff828c801f12e8 r15: ffff8300e2e09dd8 fsb: 0
    gsb: ffffff09231ae580  ds: 4b  es: 4b  fs: 0
     gs: 1c3  cs: e008 rfl: 282 rsp: ffff8300e2e09d30
    rip: ffff828c80158eef  ss: e010 cr0: 8005003b cr2: fffffe0307699458
    cr3: 6dfeb5000 cr4: 6f0
    Xen panic[dom=0xffff8300e319c080/vcpu=0xffff8300e3de8080]: Hypervisor state lost due to machine check exception.
    ffff8300e2e09e28 xpv:mcheck_cmn_handler+302
    ffff8300e2e09ee8 xpv:k8_machine_check+22
    ffff8300e2e09f08 xpv:machine_check_vector+21
    ffff8300e2e09f28 xpv:do_machine_check+1c
    ffff8300e2e09f48 xpv:early_page_fault+77
    ffff8300e2e0fc48 xpv:smp_call_function_interrupt+86
    ffff8300e2e0fc78 xpv:call_function_interrupt+2b
    ffff8300e2e0fd98 xpv:do_mca+9f0
    ffff8300e2e0ff18 xpv:syscall_enter+67
    ffffff003bda1b90 unix:xpv_int+46 ()
    ffffff003bda1bc0 unix:cmi_hdl_int+42 ()
    ffffff003bda1d20 specfs:spec_ioctl+86 ()
    ffffff003bda1da0 genunix:fop_ioctl+7b ()
    ffffff003bda1eb0 genunix:ioctl+174 ()
    ffffff003bda1f00 unix:brand_sys_syscall32+328 ()
    syncing file systems... done
    ereport.cpu.amd.dc.data_eccm ena=f0b88edf0810001 detector=[ version=0 scheme="hc"
      hc-list=[ hc-name="motherboard" hc-id="0" hc-name="chip" hc-id="1"
      hc-name="core" hc-id="0" hc-name="strand" hc-id="0" ] ]
      compound_errorname="DCACHEL1_DRD_ERR" disp="unconstrained,forcefatal"
      IA32_MCG_STATUS=7 machine_check_in_progress=1 ip=0 privileged=0
      bank_number=0 bank_msr_offset=400 IA32_MCi_STATUS=b600a00000000135
      overflow=0 error_uncorrected=1 error_enabled=1 processor_context_corrupt=1
      error_code=135 model_specific_error_code=0 IA32_MCi_ADDR=2e7872000
      syndrome=1 syndrome-type="E" __injected=1
    dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
    100% done
    rebooting
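The flat key=value fields in an ereport dump like the one above are also easy to pick apart mechanically. A rough Python sketch (illustrative only; it ignores the nested hc-list detector structure):

```python
import re

def ereport_fields(text):
    """Extract simple key=value pairs from a console ereport dump.

    A rough illustration; the nested hc-list structure is ignored and
    only flat numeric/hex flag fields are captured.
    """
    return dict(re.findall(r'(\w+)=([0-9a-fx]+)\b', text))

sample = ("ereport.cpu.amd.dc.data_eccm ena=f0b88edf0810001 "
          "overflow=0 error_uncorrected=1 error_enabled=1 "
          "processor_context_corrupt=1 error_code=135")
fields = ereport_fields(sample)
print(fields["error_uncorrected"], fields["error_code"])
```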
On the subsequent reboot we see:
    v3.1.4-xvm chgset 'Sun Sep 07 13:00:23 2008 -0700 15898:354d7899bf35'
    SunOS Release 5.11 Version on-fx-dev 64-bit
    Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    DEBUG enabled
    (xVM) Xen trace buffers: initialized
    Hostname: parity
    NIS domain name is scalab.sfbay.sun.com
    Reading ZFS config: done.
    Mounting ZFS filesystems: (9/9)

    parity console login: (xVM) Prepare to bring CPU1 down...
    SUNW-MSG-ID: AMD-8000-AV, TYPE: Fault, VER: 1, SEVERITY: Major
    EVENT-TIME: Mon Sep 15 23:05:21 PDT 2008
    PLATFORM: Sun Fire V40z, CSN: XG051535088, HOSTNAME: parity
    SOURCE: eft, REV: 1.16
    EVENT-ID: f107e323-9355-4f70-dd98-b73ba7881cc5
    DESC: The number of errors associated with this CPU has exceeded acceptable levels.  Refer to http://sun.com/msg/AMD-8000-AV for more information.
    AUTO-RESPONSE: An attempt will be made to remove this CPU from service.
    IMPACT: Performance of this system may be affected.
    REC-ACTION: Schedule a repair procedure to replace the affected CPU.  Use 'fmadm faulty' to identify the module.
    (xVM) CPU 1 is now offline

Pretty cool - huh? We diagnosed a terminal MCA event that caused a hypervisor panic and system reset. You'll only see the xVM-prefixed messages if you have directed hypervisor serial console output to the dom0 console. At this stage we can see the fault in fmadm:
    dom0# fmadm faulty -f
    ------------------------------------------------------------------------------
    "CPU 1" (hc://:product-id=Sun-Fire-V40z:chassis-id=XG051535088:server-id=parity/motherboard=0/chip=1) faulty
    f107e323-9355-4f70-dd98-b73ba7881cc5
      1 suspects in this FRU total certainty 100%

    Description : The number of errors associated with this CPU has exceeded acceptable levels.  Refer to http://sun.com/msg/AMD-8000-AV for more information.

    Response    : An attempt will be made to remove this CPU from service.

    Impact      : Performance of this system may be affected.

    Action      : Schedule a repair procedure to replace the affected CPU.  Use 'fmadm faulty' to identify the module.
Phase 1 Architecture
"Architecture" may be overstating it a bit - I like to call this an "initial enabling" since the architecture will have to evolve considerably as the x86 platform itself matures to offer features that will make many of the sun4v/ldoms hardware fault management features possible on x86.
The aim was to re-use as much existing Solaris error handling and fault diagnosis code as possible. That's not just for efficiency - it also allows interoperability and compatibility between native and virtualized boots for fault management. The first requirement was to expose physical cpu information to dom0 from the hypervisor: dom0 is normally presented with virtual cpu information - no good for diagnosing real physical faults. That information allows Solaris to initialize a cpu module handle for each physical (chipid, coreid, strandid) combination in the system, and to associate telemetry arising from the hypervisor with a particular cpu resource. It also allows us to enumerate the cpu resources in our topology library - we use this in our diagnosis software.
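The handle-per-physical-strand idea can be sketched abstractly. The names below are invented for illustration and only loosely echo the (chipid, coreid, strandid) keying described above - this is not the Solaris cmi API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CpuHandle:
    """Toy stand-in for a cpu module handle keyed by physical identity."""
    chipid: int
    coreid: int
    strandid: int

class HandleTable:
    """Hand back the same handle for the same physical strand every time,
    so telemetry from the hypervisor always maps to one cpu resource."""
    def __init__(self):
        self._tbl = {}

    def lookup_or_create(self, chipid, coreid, strandid):
        key = (chipid, coreid, strandid)
        if key not in self._tbl:
            self._tbl[key] = CpuHandle(chipid, coreid, strandid)
        return self._tbl[key]

tbl = HandleTable()
h1 = tbl.lookup_or_create(3, 0, 0)   # telemetry names chip 3, core 0, strand 0
h2 = tbl.lookup_or_create(3, 0, 0)   # later telemetry for the same strand
print(h1 is h2)   # True - one handle per physical strand
```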
Next we had to modify our chip topology, as reported by /usr/lib/fm/fmd/fmtopo and as used in diagnosis software, from chip/cpu (misguidedly introduced with the original Solaris AMD fault management work - see previous blogs) to a more physical representation of chip/core/strand - you can see the alignment with the cpu module handles mentioned above.
Then communication channels were introduced from the hypervisor to dom0 in order to deliver error telemetry to dom0 for logging and diagnosis:
- Error exceptions are handled in the hypervisor; it decides whether we can continue or need to panic. In both cases the available telemetry (read from MCA registers) is packaged up and forwarded to dom0.
- If the hypervisor needs to panic as the result of an error then the crashdump procedure, handled as today by dom0, extracts the in-flight terminal error telemetry and stores it at a special location on the dump device. At subsequent reboot (whether native or virtualized) we extract this telemetry and submit it for logging and diagnosis.
- The hypervisor polls MCA state for corrected errors (it's done this for ages) and forwards any telemetry it finds on to dom0 for logging and diagnosis (instead of simple printk to the hypervisor console).
- The dom0 OS has full PCI access; when error notifications arrive in dom0 we check the memory-controller device for any additional telemetry. We also poll the memory-controller device periodically from dom0.
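The stash-and-recover step in the panic flow - writing the in-flight ereport at a well-known location on the dump device and retrieving it at reboot - can be sketched in miniature. Everything here (the magic value, the record layout) is invented for illustration; the real dump-device format is not shown:

```python
import struct, io

# Hypothetical record: a magic marker, a length, then raw ereport bytes.
# This only illustrates the stash-and-recover idea from the panic flow
# described above; it is not the real dump-device layout.
MAGIC = 0x45524254  # an invented marker value

def stash_telemetry(dev, offset, payload: bytes):
    """Write the in-flight ereport at a well-known offset (panic path)."""
    dev.seek(offset)
    dev.write(struct.pack(">II", MAGIC, len(payload)) + payload)

def recover_telemetry(dev, offset):
    """Read back any pending telemetry at reboot; None if nothing stashed."""
    dev.seek(offset)
    magic, length = struct.unpack(">II", dev.read(8))
    if magic != MAGIC:
        return None
    return dev.read(length)

dev = io.BytesIO(bytes(4096))        # stand-in for the dump device
ereport = b"ereport.cpu.amd.dc.data_eccm ..."
stash_telemetry(dev, 512, ereport)   # done while panicking
print(recover_telemetry(dev, 512) == ereport)   # True - recovered at reboot
```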
The obligatory picture:
The hypervisor-to-dom0 communication channel for MCA telemetry, and the MCA hypercall itself, that we've used in our project are derived from work contributed to the Xen community by Christoph Egger of AMD. We worked with Christoph on some design requirements for that work.
Frank van der Linden did the non-MCA hypervisor work - physical cpu information for dom0, and hypervisor-level cpu offline/online.
Sean Ye designed and implemented our new /dev/fm driver; the new topology enumerator that works identically regardless of dom0 or native; topology-method based resource retire for FMA agents. Sean also performed most of the glue work in putting various pieces together as they became available.
John Cui extended our fault test harness to test all the possible error and fault scenarios under native and xVM, and performed all our regression and a great deal of unit testing on a host of systems.
I did the MCA-related hypervisor and kernel work, and the userland error injector changes.
We started this project in January with a developer in each of the United Kingdom, the Netherlands, and China. We finished in September with the same three developers, but after some continental reshuffles: the Netherlands developer moved to the US in March/April, and the UK developer to Australia in May/June. Thanks to our management for their indulgence and patience while we went off-the-map for extended periods mid-project!