
The ZFS blog provides product announcements and technological insights

  • March 4, 2016

Recommended ZFS Configurations and Practices

Cindy Swearingen
Principal Product Manager

We recently celebrated 10 years of ZFS in Oracle Solaris, and I've spent many of those years working with people internally and externally to ensure optimal ZFS configurations and practices. You can review recommended ZFS configurations and practices in the ZFS Best Practices section of the docs.

ZFS brings modern simplicity and redundancy to the enterprise datacenter where previously we wrestled with file systems and devices.

I used to whine like the exasperated comedienne of data storage, asking incredulously: Is it a disk...a slice...a partition? Do I need a volume...submirror...plex or WHAT? VTOC...SMI...or EFI??? Not to mention the 17 steps it took to create a disk slice.

ZFS reduces those complications.

Today, I compare modern ZFS to the past and also provide a review of current ZFS best practices:

  • ZFS is both a file system and a volume manager but with one simple interface so you don't have to learn separate products just to manage your data.
  • If you are transitioning your systems running Solaris 10 to Solaris 11, your task is simplified because ZFS is basically the same on both releases.
  • ZFS supports EFI-labeled disks for both root pools and non-root (data) pools, which means whole disks can be used for your pools and file systems. This means never having to say you're sorry, or, translated another way: never having to use the format utility again.
  • Redundant ZFS storage pools are created in one step; no special formatting, slices, or partitions are required. We really are living happily ever after.
  • Redundant ZFS storage pools are highly recommended and should be part of every ZFS configuration. See the best practices for tips on creating a recommended ZFS storage configuration. 
    • Even with redundant ZFS storage pools, you should always have spares and a recent backup because disasters happen.
    • Rather than including a spare on a ZFS root pool, you can easily create a 2-, 3-, or 4-way (or more) mirror, depending on your paranoia level.
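As a sketch of the points above, creating a redundant pool really is one command, and growing a mirror is one more. The pool and device names below (tank, c1t0d0, and so on) are hypothetical placeholders; substitute your own:

```shell
# Create a mirrored pool in a single step: no format utility,
# no slices, no partitions.
zpool create tank mirror c1t0d0 c1t1d0

# Later, raise your paranoia level: attach a third disk to the
# existing mirror to make it a 3-way mirror.
zpool attach tank c1t0d0 c1t2d0

# Verify the pool layout and health.
zpool status tank
```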
  • In the past, if you were using a traditional volume management product, you could break a mirror, patch the system and if all went well, re-attach the mirror and let it sync. 
    • Today, we update a Solaris boot environment (based on a ZFS snapshot) so if anything goes wrong during the update, you can easily roll back. There is no need to detach disks to do a software update.
    • Put another way: there is no need to detach disks and compromise your redundancy.
    • Caution: If you detach a disk from a mirrored ZFS root pool, the detached disk is no longer bootable.
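The boot-environment workflow above can be sketched with the Solaris 11 beadm and pkg commands. The BE name s11-backup is a hypothetical placeholder:

```shell
# Create a backup boot environment before updating. It is based on a
# ZFS snapshot and clone, so it consumes almost no extra space.
beadm create s11-backup

# Update the system; for major updates, pkg may itself create and
# activate a new boot environment.
pkg update

# If anything goes wrong, activate the untouched environment and reboot.
# Your mirror redundancy was never compromised.
beadm activate s11-backup
init 6
```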
  • ZFS file systems are created from a ZFS storage pool, which is a collection of disks, so all file systems consume space in the pool. Disk space consumed by ZFS file systems can be managed with quotas and reservations. If the file systems need more space, just add more disks to the pool.
    • No one-to-one correspondence exists between a ZFS file system and a disk slice unless you want to simulate the traditional approach. That configuration is more granular, harder to manage when a file system fills up, and not recommended.
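A short sketch of managing shared pool space with quotas and reservations, using hypothetical dataset and device names:

```shell
# All file systems draw from the pool's shared space; a quota caps
# usage and a reservation guarantees it.
zfs create tank/home
zfs set quota=10g tank/home        # tank/home may use at most 10 GB
zfs set reservation=5g tank/home   # 5 GB is guaranteed to tank/home

# If the pool itself runs low, just add another mirrored pair of disks.
zpool add tank mirror c1t3d0 c1t4d0
```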
  • We learned a lot of hard lessons in the early days of ZFS because of poor configurations or failing hardware, and that motivated us to provide data recovery tools.
    • The best lesson is to always host important data on recommended ZFS storage pool configurations that include redundancy, and always have regular backups from the outset. You can choose between existing enterprise backup tools or something like Oracle Secure Backup. Learn more about this low-cost backup solution here.
    • Selecting hardware for hosting critical data should match the value of the data. Disks are relatively cheap compared to the value of your data, so I don't worry that mirrored configurations double the disk-space cost. This is the cost of keeping your data available, and if you consider the cost of recovery (in time and money), the mirroring decision should be a snap.
      • Expensive hardware arrays are not immune to bugs or failure so consider configuring the array in JBOD mode and presenting individual disks to ZFS so that they can be mirrored.
    • Should your ZFS pool encounter corruption due to hardware issues, see the ZFS Pool Recovery section in the ZFS Admin Guide for suggested actions for handling situations such as individual file corruption or pool metadata corruption. Testing previous pool states to see if a good state can be found is often the first step. For more complex corruption situations, please open a Service Request through My Oracle Support to speak with a ZFS specialist.
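The first recovery steps described above can be sketched as follows. The pool name tank is a hypothetical placeholder, and these commands assume a pool that fails to import cleanly:

```shell
# Identify what is damaged: with -v, zpool status lists any files
# affected by data errors.
zpool status -v tank

# After transient device problems are repaired, clear the error counters.
zpool clear tank

# For pool-wide metadata corruption, try rewinding the pool to an
# earlier consistent state. -n performs a dry run that reports whether
# recovery is possible without changing anything; drop it to recover.
zpool import -F -n tank
zpool import -F tank
```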
Life is complicated enough. Your data storage doesn't have to be.

Join the discussion

Comments ( 2 )
  • Randall Badilla Friday, March 11, 2016

    Hi Cindys:

    Thanks a lot for this post and the links.

    ZFS is a wonderful technology; avoiding big/little-endian issues when moving data from x86 to SPARC and vice versa has saved me a lot of time and work!

    And the simplicity is incredibly welcome.

    I wish every OS (including Windows and OS X) could at least use it natively.

    Will you publish more about storage alignment and the relationship between the FOSS and Oracle ZFS implementations?

    Thanks again!


