Multiple pools in 2010.Q1

Guest Author

When the Sun Storage 7000 series was first introduced, a key design decision was to allow only a single ZFS storage pool per host. This forced users to take full advantage of the ZFS pooled storage model and prevented them from adopting ludicrous schemes such as "one pool per filesystem." While RAID-Z has non-trivial performance implications for IOPS-bound workloads, the hope was that by allowing logzilla and readzilla devices to be configured per-filesystem, users could adjust relative performance and implement different qualities of service within a single pool.
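As a rough illustration of the RAID-Z trade-off mentioned above, here is a back-of-the-envelope sketch. The per-disk figure of 150 random reads/sec and the scaling rules are assumptions for illustration, not measured appliance behavior: mirrored pools can serve small reads from every disk, while each RAID-Z group behaves roughly like a single disk for small random reads.

```python
# Rule-of-thumb random-read IOPS for mirrored vs. RAID-Z layouts.
# All figures are illustrative assumptions, not appliance measurements.

DISK_IOPS = 150  # assumed random-read IOPS per disk

def mirror_read_iops(n_disks):
    # In a mirrored pool, every disk can serve independent small reads.
    return n_disks * DISK_IOPS

def raidz_read_iops(n_disks, group_width):
    # For small random reads, a RAID-Z group behaves roughly like one
    # disk, so throughput scales with the number of groups (vdevs).
    n_groups = n_disks // group_width
    return n_groups * DISK_IOPS

# 20 disks: mirrored vs. four 5-wide RAID-Z groups.
print(mirror_read_iops(20))    # 3000
print(raidz_read_iops(20, 5))  # 600
```

The gap is why some IOPS-bound workloads still want a mirrored pool even when log and cache devices are available.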

While this works for the majority of workloads, there are still some that benefit from mirrored performance even in the presence of cache and log devices. As the maximum size of Sun Storage 7000 systems increased, it became apparent that we needed a way to support pools with different RAS and performance characteristics in the same system. With this in mind, we relaxed the "one pool per system" rule¹ in the 2010.Q1 release.

The storage configuration user experience is relatively unchanged. Instead of having a single pool (or two pools in a cluster) and configuring one or the other, you simply click the '+' button and add pools as needed. When creating a pool, you can now specify a name for it. When importing a pool, you can either accept the existing name or give it a new one at the time you select the pool. Ownership of pools in a cluster is now managed exclusively through the Configuration -> Cluster screen, as with other shared resources.

When managing shares, there is a new dropdown menu at the top left of the navigation bar. This controls which shares are shown in the UI. In the CLI, the equivalent setting is the 'pool' property at the 'shares' node.

While this gives some flexibility in storage configuration, it also allows users to create poorly constructed storage topologies. The intent is to let the user create pools with different RAS and performance characteristics, not dozens of pools with identical properties. If you attempt the latter, the UI will present a warning summarizing the drawbacks of continuing:

  • Wastes system resources that could be shared in a single pool.
  • Decreases overall performance.
  • Increases administrative complexity.
  • Log and cache devices can be enabled on a per-share basis.

You can still commit the operation, but such configurations are discouraged. The exception is when configuring a second pool on one head in a cluster.

We hope this feature will allow users to continue to consolidate storage and expand use of the Sun Storage 7000 series in more complicated environments.

  1. Clever users figured out that this restriction could be circumvented in a cluster to have two pools active on the same host in an active/passive configuration.

Comments (5)
  • true religion jeans Wednesday, March 10, 2010

    Hmm that's very interessting but actually i have a hard time understanding it... wonder what others have to say..

  • Charles Soto Thursday, March 11, 2010

    Looks like the whole fishworks blog needs some spam filtering!

    Not sure we'll use this. Our 7410 I/O needs are so "generic" as to be perfectly suited to the "one big generally capable pool per shelf" model. Maybe after we have >2 shelves this will become important for us.

  • Stanislav Obidin Friday, March 12, 2010

    I have already tried this upgrade on a 7410... only one pool per expansion tray is available. What am I doing wrong?

  • Eric Schrock Friday, March 12, 2010

    Stanislav -

    This upgrade does not change the allocation options for shelves, which are dictated by SATA limitations. You will still need to allocate shelves to pools in half shelf increments. The internal disks in the 7110 and 7210 do not have this limitation and can be allocated at per-disk granularity.
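The half-shelf constraint Eric describes can be sketched in a few lines. The 24-disk shelf size is an assumption here (it matches the J4400 expansion shelf commonly attached to these systems); the point is simply that disks from a shelf can only go to a pool in half-shelf increments.

```python
# Sketch of the half-shelf allocation rule for expansion shelves.
# SHELF_DISKS = 24 is an assumption for illustration.

SHELF_DISKS = 24
HALF_SHELF = SHELF_DISKS // 2  # 12 disks

def valid_allocation(disks_requested):
    # A pool may take disks from a shelf only in half-shelf
    # (12-disk) increments.
    return disks_requested > 0 and disks_requested % HALF_SHELF == 0

print(valid_allocation(12))  # True  (half shelf)
print(valid_allocation(24))  # True  (full shelf)
print(valid_allocation(8))   # False (not a half-shelf multiple)
```

Internal disks on the 7110 and 7210, as noted above, are not subject to this rule and can be allocated per-disk.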

  • Stanislav Obidin Monday, March 15, 2010

    Hello, Eric! Thanks for your reply!
