ZFSSA update 2011.1.6.0 is now available

For those who have not noticed, ZFSSA version 2011.1.6.0 came out on April 29th.

Go get it.

Release notes are here:



Hey Steve,

My University has a nice 7410 cluster that has a bunch of J4400 disk arrays attached to it. When I applied this latest update to my production cluster, one head at a time, it went through and updated all the 1TB drives to new firmware. I am wondering how the drives can get firmware updates while they are actively spinning/sharing their data?

Thanks for the great blog, keep those posts coming!


Posted by guest on May 22, 2013 at 10:18 AM PDT #

Great question. Anytime someone says "Great question," that is code for "I don't know." No joke; remember that for everything in life. Now, I do know some of it: if the pool is a mirror or a parity-based RAID layout, the system can easily update the disk firmware one drive at a time, offlining each drive with no problem. I have sent a question out to the rest of my group to ask what happens in the case of a pure stripe (RAID 0) pool, where even one drive going offline will kill you. I will post the answer when I find out.
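To make the offline-one-at-a-time idea concrete, here is a rough Python sketch of the rolling logic. All names here (`can_offline_one_disk`, `rolling_firmware_update`, `flash_firmware`) are invented for illustration; this is not the appliance's actual code, just the behavior described above: a disk is only taken offline if the pool can survive without it, which is why a pure stripe blocks the update.

```python
# Illustrative sketch only -- invented names, not ZFSSA source code.

def can_offline_one_disk(layout: str) -> bool:
    """A mirror or parity-based layout tolerates one missing disk;
    a pure stripe (RAID 0) cannot survive any disk going offline."""
    return layout in ("mirror", "raidz1", "raidz2", "raidz3")

def rolling_firmware_update(disks, layout, flash_firmware):
    """Update disks one at a time, but only when redundancy allows it."""
    if not can_offline_one_disk(layout):
        raise RuntimeError(
            "insufficient redundancy: refusing to offline a disk "
            "in a pure stripe pool")
    updated = []
    for disk in disks:
        # offline -> flash -> online -> let the pool resilver,
        # then move on to the next disk
        flash_firmware(disk)
        updated.append(disk)
    return updated
```

On a mirrored pool the loop simply walks every disk in turn; on a stripe it refuses up front rather than risk the pool.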

Posted by Steve on May 22, 2013 at 10:57 AM PDT #

See below, from a recent SR where I asked this question...
This doesn't specifically say how the firmware flash is done, but it's got the theory and jibes with what you said about safely offlining disks in a mirror or RAID set. Not sure who would run the appliance in no-redundancy mode (stripe only)... sounds insane.


Hardware Firmware Updates
Following the application of a software upgrade, any hardware for which the upgrade includes newer versions of firmware will be upgraded. There are several types of devices for which firmware upgrades may be made available; each has distinct characteristics.

Disks, storage enclosures, and certain internal SAS devices will be upgraded in the background. When this is occurring, the firmware upgrade progress will be displayed in the left panel of the Maintenance/System BUI view, or in the maintenance system updates CLI context. These firmware updates are almost always hardware related, though it may briefly show some number of outstanding updates when applying certain deferred updates to components other than hardware.

Applying hardware updates is always done in a completely safe manner. This means that the system may be in a state where hardware updates cannot be applied. This is particularly important in the context of clustered configurations. During takeover and failback operations, any in-progress firmware upgrade will be completed; pending firmware upgrades will be suspended until the takeover or failback has completed, at which time the restrictions described below will be reevaluated in the context of the new cluster state and, if possible, firmware upgrades will resume.

Important: Unless absolutely necessary, takeover and failback operations should not be performed while firmware upgrades are in progress. The rolling upgrade procedure documented below meets all of these best practices and addresses the per-device-class restrictions described below. It should always be followed when performing upgrades in a clustered environment. In both clustered and standalone environments, these criteria will also be reevaluated upon any reboot or diagnostic system software restart, which may cause previously suspended or incomplete firmware upgrades to resume.
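The suspend-and-resume behavior around takeover and failback can be pictured as a small state machine. The sketch below is hypothetical (class and method names are invented; the appliance's internals are not public), but it mirrors the documented rules: an in-progress upgrade runs to completion, pending upgrades are held, and after the cluster operation finishes the restrictions are reevaluated and the queue drains if allowed.

```python
# Illustrative sketch of the documented suspend/resume behavior.
# Class and method names are invented for illustration.

class FirmwareQueue:
    def __init__(self, pending):
        self.pending = list(pending)   # upgrades not yet started
        self.suspended = False
        self.done = []

    def takeover_started(self, in_progress=None):
        """An in-progress upgrade runs to completion; pending ones wait."""
        if in_progress is not None:
            self.done.append(in_progress)   # finish the current upgrade
        self.suspended = True               # hold everything else

    def takeover_finished(self, restrictions_ok=True):
        """After takeover/failback, restrictions are reevaluated in the
        new cluster state and, if possible, pending upgrades resume."""
        self.suspended = False
        if restrictions_ok:
            while self.pending:
                self.done.append(self.pending.pop(0))
```

This is also why the docs warn against initiating takeover or failback mid-upgrade: every cluster transition forces another round of suspension and reevaluation.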

Components internal to the storage controller (such as HBAs and network devices) other than disks and certain SAS devices will generally be upgraded automatically during boot; these upgrades are not visible and will have completed by the time the management interfaces become available.

Upgrading disk or flash device firmware requires that the device be taken offline during the process. If there is insufficient redundancy in the containing storage pool to allow this operation, the firmware upgrade will not complete and may appear "stalled." Disks and flash devices that are part of a storage pool which is currently in use by the cluster peer, if any, will not be upgraded. Finally, disks and flash devices that are not part of any storage pool will not be upgraded.

Upgrading the firmware in a disk shelf requires that both back-end storage paths be active to all disks within all enclosures, and that storage be configured on all shelves to be upgraded. For clusters with at least one active pool on each controller, these restrictions mean that disk shelf firmware upgrades can be performed only by a controller that is in the "owner" state.

During the firmware upgrade process, hardware may appear to be removed and inserted, or offlined and onlined. While alerts attributed to these actions are suppressed, if you are viewing the Maintenance/Hardware screen or the Configuration/Storage screen, you may see the effects of these upgrades in the UI in the form of missing or offline devices. This is not a cause for concern; however, if a device remains offline or missing for an extended period of time (several minutes or more) even after refreshing the hardware view, this may be an indication of a problem with the device. Check the Maintenance/Problems view for any relevant faults that may have been identified. Additionally, in some cases, the controllers in the disk shelves may remain offline during firmware upgrade. If this occurs, no other controllers will be updated until this condition is fixed. If an enclosure is listed as only having a single path for an extended period of time, check the physical enclosure to determine whether the green link lights on the back of the SIM are active. If not, remove and re-insert the SIM to re-establish the connection. Verify that all enclosures are reachable by two paths.
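That final check (every enclosure reachable by two paths) is simple to express as a sketch. The input shape here, `paths_by_enclosure`, is an invented stand-in for whatever the hardware view reports; this is illustration, not an appliance API.

```python
# Illustrative sketch: flag enclosures that do not have both
# back-end SAS paths active. Input shape is invented for illustration.

def single_path_enclosures(paths_by_enclosure):
    """Return enclosures with fewer than two active paths -- the case
    where the SIM link lights should be checked and the SIM reseated."""
    return sorted(enc for enc, active_paths in paths_by_enclosure.items()
                  if active_paths < 2)
```

Any enclosure this flags for more than a few minutes is the one to inspect physically before expecting shelf firmware upgrades to proceed.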

Posted by guest on May 22, 2013 at 03:02 PM PDT #


This blog is a way for Steve to send out his tips, ideas, links, and general sarcasm. Almost all related to the Oracle 7000, code named ZFSSA, or Amber Road, or Open Storage, or Unified Storage. You are welcome to contact Steve.Tunstall@Oracle.com with any comments or questions.

