
The ZFS blog provides product announcements and technological insights.

Recent Posts

Next Generation: Oracle ZFS Storage ZS7-2

Today, we are announcing the latest Oracle ZFS Storage Appliance ZS7-2, the sixth generation of our unified storage platform. Built on the ground-breaking ZFS file system and hybrid storage pool model, leveraging a massively scaled SMP-based OS with multi-petabyte scalability and extensive DRAM and flash caches, the Oracle ZFS Storage Appliance is a performance powerhouse. When you combine the system's advanced architecture with simplified storage management, deep analytics, storage efficiency, and a competitive price point, it's a system that's hard to beat for Oracle Database and general application storage, file sharing, support of large-scale virtualized environments, and backup and recovery.

Oracle ZFS Storage ZS7-2 runs software version OS8.8.0 (2013.1.8.0), with the latest technical advancements from the Oracle ZFS Storage engineering team. This release includes over 30 new features and performance improvements. Here are some of my favorites:

Performance and Scalability
- Only flash system with native IB support
- 64-bit address space completeness
- NAS throttle for I/O balancing

Security Enhancements
- Pool-level encryption
- SMB 3.1 encryption (AES-GCM and AES-CCM)
- Customizable login banner

Availability and Efficiency
- Schedule pool scrubs
- Data deduplication and storage pool usage
- RAIDZ performance improvements

Diagnostics and Visibility
- Average latency statistics
- Analytics overload warning

The OS8.8.0 release is required for the ZS7-2 controllers, but is also available for download for the following Oracle ZFS Storage Appliance platforms: ZS5-4, ZS5-2, ZS4-4, ZS3-4, ZS3-2, 7420, and 7320. See Oracle ZFS Storage Appliance: Software Updates (Doc ID 2021771.1), available on MOS, for more details.

The first ZFS Storage Appliance was released in January 2009. It's been over 10 years of innovation if you consider the creation of the ZFS file system! That's a blog for another day.


Join Us for an Oracle ZFS Storage WebCast

Happy Summer! Please join us for a webcast about Oracle ZFS Storage integration with Oracle Database workloads. Because we're co-engineered, we can develop optimizations that provide unmatched performance. This Thursday, Greg Drobish and I are doing a webcast on current ZFS Storage innovations, integration with Oracle Database, and recent features that yield great data reduction ratios.

3rd Thursday Tech Talk: Oracle ZFS Storage Appliance - Optimized for Oracle Database
June 21, 2018, 9 AM PT / 12 Noon ET
Register Now: https://www.oracle.com/a/ocom/docs/dc/sev100717805-na-us-on-ce1-ie1a-ev.html

Is Your Oracle Database Supported by Modern Architecture?

In modern business environments, the Oracle database supports key enterprise applications and workloads because that's where critical information lives. This vital part of your IT landscape deserves a storage solution that is architected for the modern data center, not just something generic. The Oracle ZFS Storage Appliance fills this requirement. As a high performance, enterprise-grade NAS solution, ZFS Storage provides simplified management of both storage and data services, an all-flash configuration for extreme performance, unique integration points with the Oracle Database, and tight integration with Oracle Engineered Systems.

This technical session will:
- Provide an overview of the ZFS Storage Appliance
- Discuss key product features and capabilities, and points of integration with the Oracle database
- Review the positioning within Oracle's Cloud and on-premises infrastructure offerings

Featured Speakers

Cindy Swearingen, Principal Product Manager, ZFS, Oracle Corporation
Cindy is a principal product manager on the ZFS storage product management team. She has worked with the ZFS engineering team since 2003 in a variety of roles that include evangelizing the merits of ZFS storage internally and externally.

Greg Drobish, Principal Software Engineer, Oracle Corporation
Greg is a Principal Software Engineer with the Oracle storage team. He has experience working with application integrations, solutions development, and performance characterization with Oracle ZFS storage. Areas of focus include Oracle Engineered Systems integration, database backup, and database workloads.

Agenda
12:00 p.m. – 12:05 p.m. ET: Browser & Sound Test (audio is streamed; no dial-in number)
12:05 p.m. – 12:45 p.m. ET: Oracle ZFS Storage Appliance - Optimized for Oracle Database
12:45 p.m. – 12:50 p.m. ET: Q&A


But Wait...ZFS Always Had Device Flexibility

Talking to customers this week about the Oracle Solaris ZFS device removal feature brought up a few key points that I would like to emphasize as a follow-on to this previous blog:

- Mirrored pool configurations have always had the most device flexibility because you can add vdevs, detach devices, and replace smaller devices or LUNs with larger ones. You can even build a mirrored pool with an uneven number of devices and mirror across different sized disks; when mirroring across different sized disks, the mirrored capacity is equivalent to the smallest disk.
- Devices can be detached if you need to temporarily borrow a pool device for another task, but it's risky to remove redundancy from an active pool on a running system.

Customers who have leveraged this flexibility for years are now asking: what exactly is the difference between detaching a device and removing a device? This difference is best illustrated with a mirrored storage pool. The key difference is that detaching a device from a mirrored pool does not reduce the pool capacity, while removing a mirrored vdev in the Solaris 11.4 release does reduce the pool capacity. See the example below.

Also illustrating my point that ZFS always had device flexibility was a comment from a customer a few years ago, who said about ZFS device removal, "Forget it...I don't need it any more!"
I have two mirrored pools, tank and tank-clone, with roughly equivalent amounts of data and pool capacity:

# zpool list tank tank-clone
NAME        SIZE   ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank        928G   56.6G  871G   6%  2.00x  ONLINE  -
tank-clone  928G   56.3G  872G   6%  1.00x  ONLINE  -

# zpool status tank tank-clone
  pool: tank
 state: ONLINE
  scan: resilvered 19.8M in 12s with 0 errors on Mon Apr 16 14:41:28 2018
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
            c3t4d0  ONLINE       0     0     0
errors: No known data errors

  pool: tank-clone
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank-clone  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
errors: No known data errors

First, detach devices from tank. Removing pool redundancy is not recommended, but this illustrates that detaching a device from a mirrored configuration doesn't change the pool's overall capacity:

# zpool detach tank c4t2d0
# zpool detach tank c3t4d0
# zpool list tank
NAME  SIZE   ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank  928G   56.6G  871G   6%  2.00x  ONLINE  -
# zpool status tank
  pool: tank
 state: ONLINE
  scan: resilvered 19.8M in 12s with 0 errors on Mon Apr 16 14:41:28 2018
config:
        NAME    STATE     READ WRITE CKSUM
        tank    ONLINE       0     0     0
        c3t2d0  ONLINE       0     0     0
        c4t4d0  ONLINE       0     0     0
errors: No known data errors

Next, remove a mirrored vdev from tank-clone and review the pool capacity change:

# zpool remove tank-clone mirror-1
# zpool status tank-clone
  pool: tank-clone
 state: ONLINE
  scan: resilvered 28.1G in 13m17s with 0 errors on Wed Apr 25 09:36:35 2018
config:
        NAME        STATE     READ WRITE CKSUM
        tank-clone  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
errors: No known data errors
# zpool list tank tank-clone
NAME        SIZE   ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tank        928G   56.6G  871G   6%  2.00x  ONLINE  -
tank-clone  464G   56.7G  407G  12%  1.00x  ONLINE  -

In summary, Solaris 11.4 device removal brings additional flexibility for managing your data and devices with ZFS.
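The "replace smaller devices with larger devices" flexibility mentioned at the top of this post can be sketched as follows. This is an illustrative fragment, not output from a real system; the device names are placeholders, and it assumes a Solaris release where the autoexpand pool property controls whether the extra capacity is used automatically:

```shell
# Illustrative fragment with placeholder device names.
# Grow a mirrored pool by replacing each smaller disk with a larger one.

zpool set autoexpand=on tank

# Replace one side of the mirror and wait for the resilver to finish
# before touching the second side, so redundancy is never lost.
zpool replace tank c3t2d0 c7t0d0
zpool status tank          # wait until resilvering completes

zpool replace tank c4t2d0 c7t1d0
zpool status tank

zpool list tank            # capacity grows once both disks are replaced
```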


Oracle Solaris 11.4 Data Management Features

Oracle engineering teams have a long history of working together to create great products. Solaris 11.4 is no exception. The recalibration of Solaris releases into a continuous innovation strategy means that we had a bit more time to integrate some great data management features into the Solaris OS. This effort includes collaborative work from the ZFS, NFS, SMB, replication, performance, and security teams. We also get great feature suggestions and RFEs from our customers. You know who you are. :-) My favorites are always the ones that make device administration easier. Let's review some of the data management features that are available in the Solaris 11.4 beta release:

Data Reduction
- Deduplication 2.0

Data Sharing/Migration
- NFSv4.1 server features
- SMB 2.0, 2.1, and 3.0 features
- Shadow migration improvements

I/O Control and Visibility
- Read/write I/O limits
- fsstat I/O latency
- dtrace file ops and iSCSI provider

Pool and Device Management
- Change pool GUID (reGUID)
- format fast startup
- Mirrored device selection
- zpool get options
- zpool label command
- RAIDZ space improvement

Pool/File System Performance
- Asynchronous destroy
- Meta devices (DDT SSDs)
- Reduced resilvering restarts

Send Stream/Replication Enhancements
- Deduplicated send streams
- File level copy (zcopy/reflinkat)
- Raw (compressed) send streams
- Resumable replication (zfs list -p)

Security Related
- Keychangedate/rekeydate improvements
- Sharing and monitoring sensitive data (be sure to see Glenn Faden's blogs on these topics)
- ZFS storage management authorization

This is just a high-level view of the feature set in the beta release. There are too many to list and more to come!


Life in the ZFS Lane

Oracle ZFS Storage Innovation: OS 8.7 Data Protection Highlights

Now's a great time to store data as efficiently as possible... Oracle ZFS Storage OS 8.7 deduplication 2.0 and LZ4 compression provide increased storage and replication efficiency, with data reduction ratios in the 2x-8.5x range for backup use cases. Deduplication is not just a great data reduction feature but comes with the excellent performance that is expected from Oracle ZFS Storage. Based on our testing, an Oracle ZS5-4 system (active/active cluster) running RMAN level 0 backups provides excellent performance and great efficiency:

- Sustained backup rates up to 41.4 TB/hour
- Sustained restore rates up to 55.4 TB/hour
- 273% increase in effective usable capacity

Deduplication 2.0 can help you save space and reduce your backup data footprint with no additional licenses or pesky host-side agents. Full backups yield the highest overall data reduction ratios. The new design leverages the Hybrid Storage Pool (HSP) model, which is the heart of the ZFS Storage Appliance architecture, to scale deduplication performance by placing the deduplication table (DDT) on SSDs. We have done extensive testing with deduplication 2.0 and LZ4 compression on the ZFS Storage Appliance as a backup target running OS 8.7 for the most popular backup applications, such as Veritas NetBackup, Veeam, and RMAN. Learn more about best practices for backup here on OTN.

Oracle ZFS Storage replication now leverages both deduplication and compression for better efficiency and performance, which means replication streams:

- remain compressed over the wire, reducing network and system resources
- can be deduplicated, further reducing resources
- are resumable, so that they can be restarted where they left off, thereby reducing replication time and network resources

My development colleagues, Bob Handlin, Siyu Liu, and I talk more about these features here.


Current ZFS Pool Capacity Recommendations

Happy Summer, ZFS Fans! We were discussing the recommended ZFS storage pool capacity recently, so it seems like a good time to review this information externally as well. The recommended pool capacity was changed from 80% to 90% starting in the Oracle Solaris 11 and Oracle Solaris 10 1/13 releases, which is reflected in ZFS file system version 5. However, there are some nuances to review.

Oracle Solaris ZFS is designed with a redirect-on-write architecture, so it needs some headroom for finding space to write new data. This is the reason we provide a general recommendation of keeping ZFS storage pools below 90% capacity, but the reality is that the percentage at which performance is impacted depends greatly on the workload:

- If data is mostly added (write once, remove never), then it's very easy for a redirect-on-write architecture such as ZFS to find new blocks. Here the percentage full at which performance is impacted would be higher than the generic rule of thumb, say 95%.
- Similarly, if data is made of large files/large blocks (128K or 1MB) where data is removed in bulk operations, then the rule of thumb can also be relaxed.
- The other end of the spectrum is where a large percentage of the pool (say 50% or more) is made of 8K chunks (DB files, iSCSI LUNs, or many small files) with constant rewrites (dynamic data); then the 90% rule of thumb needs to be followed strictly.
- If 100% of the data is small blocks of dynamic data, then you should monitor your pool closely, possibly starting as early as 80%. The sign to watch for is increased disk IOPS to achieve the same level of client IOPS.

Other storage vendors with ROW-based file systems often report reduced overall storage capacity in order to maintain this headroom (and hide the need for it). Our position is that it is up to you to determine how your pool space is utilized.
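The workload-dependent thresholds above can be folded into a simple monitoring check. Below is a minimal sketch of a hypothetical helper (not part of Solaris) that applies those rules of thumb to the output of a command like `zpool list -Hp -o name,size,alloc` (tab-separated names and byte counts; the exact parseable-output flags may vary by release, so the input here is passed in as a string):

```python
# Hypothetical capacity-watch helper based on the workload-dependent
# thresholds discussed above; the workload labels are illustrative.

# Warning thresholds (fraction full) per workload profile, per the post:
THRESHOLDS = {
    "append_mostly": 0.95,   # write once, remove never
    "large_blocks": 0.95,    # 128K/1MB files removed in bulk
    "mixed": 0.90,           # the general 90% rule of thumb
    "small_dynamic": 0.80,   # 8K chunks with constant rewrites
}

def pool_warnings(zpool_list_output: str, workload: str = "mixed"):
    """Parse tab-separated 'name size alloc' lines (sizes in bytes)
    and return (pool, percent_full) for pools over the threshold."""
    limit = THRESHOLDS[workload]
    warnings = []
    for line in zpool_list_output.strip().splitlines():
        name, size, alloc = line.split("\t")
        frac = int(alloc) / int(size)
        if frac >= limit:
            warnings.append((name, round(frac * 100, 1)))
    return warnings

# Example with made-up numbers: tank is 92% full, backup is 50% full.
sample = "tank\t1000000000000\t920000000000\nbackup\t1000000000000\t500000000000"
print(pool_warnings(sample, "mixed"))          # tank exceeds 90%
print(pool_warnings(sample, "append_mostly"))  # nothing exceeds 95%
```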


ZFS Data and Pool Recovery

I was talking to the ZFS engineering team recently about a comment we received through another venue around the lack of ZFS recovery tools. After reviewing mail archives, we think this comment was possibly related to a pool failure in 2006. Prompted by this comment and 10 years of working with our customers to use ZFS successfully, let's review both the problems we saw when ZFS was in its infancy, particularly around 3rd-party hardware and drivers either ignoring cache flush commands or not generating or fabricating device IDs, and the tools that we had then and now.

Let's look at the problems associated with cache flushing first:

- ZFS data integrity depends upon explicit write ordering: first data, then metadata, and finally the uberblock.
- A disk drive write cache is a small amount of memory on the drive's controller board.
- ZFS enables this cache and flushes it every time ZFS commits a transaction.
- We found that some 3rd-party disk drives ignore the "synchronize cache" command and just discard the request, which can lead to out-of-order writes. This issue might prevent pools from being imported.

The second problem is around pool device IDs:

- All file systems must rely on a close relationship with their underlying devices, and ZFS is no exception.
- ZFS tracks pool devices with internal device IDs so that if any intervening hardware component changes, ZFS can find its own pool devices.
- Third-party storage drivers do not always generate or fabricate device IDs, so if the hardware is moved or changed, ZFS can't find the pool devices and you could end up with a broken ZFS pool.
- If the pool is exported or the system is shut down before hardware is changed or moved, ZFS has a chance to reread the device information and recover, unless there was some other hardware or cabling problem.
- Oracle/Sun hardware generates device IDs that ZFS relies on, but I still don't recommend moving or changing hardware under live pools.
Ten years ago, we had the tools to recover when a pool was damaged by changing pool device IDs, but most of us didn't know the steps to recover back then. Because we saw a fondness in the ZFS community for moving hardware underneath ZFS storage pools, which had the unfortunate result of changing pool device IDs, we learned the steps to recover, using an existing tool, zdb, that integrated with ZFS (thankfully) back in 2005. A summary of the steps is to use zdb -l to identify the pool device labels (and device IDs), create symbolic links in /dev/dsk to the previous device names, and then try to import the pool with the simulated device names. Or, use dd to make a physical block copy of the pool devices and try to lofi-mount them with the original pool device names so that the data can be recovered. Many early ZFS engineers, support engineers, and community members worked tirelessly to recover broken pools, and we are all grateful. Many still do.

Then and Now

- Then (2005) - The ZFS debugger (zdb) became an invaluable tool not only for identifying and then reconstructing pool device information, but also for recovering ZFS pool data. Ben Rockwood wrote a blog in 2008 on how to use zdb to recover ZFS data. I've used it myself to verify that ZFS data is actually encrypted and also to recover data.
- 2009 - Pool recovery (zpool clear -FXn or zpool import -FXn) attempts to rewind the pool back to a previous transaction so that the damaged pool can be cleared if already imported, or imported upon reboot. This feature allows pool recovery, but must be used as a first step and carries a risk of losing a small amount of data due to rewinding to a previous transaction.
- 2011 - Read-only pool import allows a pool on broken hardware to be imported so that at least the data can be recovered.
- Now - Along with the above tools, which have been enhanced along the way, Oracle ZFS experts (many from the early days) are available to help recover ZFS pools and data.
The best way to engage is to go through MOS support.

Lessons Learned:

- System hardware recommendations: Provide ECC memory and redundant pool devices, following the latest ZFS Best Practices.
- Ensure hardware respects cache flush commands. If you are using storage arrays with battery-backed cache, make sure you monitor battery life.
- ZFS pools built on SATA disks behind some SAS expanders were problematic in that if one disk failed, other disks around the failed disk also failed (as if in sympathy).
- Don't move or change hardware under live pools. Export the pool first or shut down the system.
- Keep a listing of zdb -l output and compare it to the device information for the pool about to be imported. Be prepared to repair the pool device links if there is a mismatch.
- Maximizing for space generally means maximizing pain upon a hardware failure.
- If your pool is too big to back up, reconfigure it so it can be backed up.
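The zdb label check described above can be sketched as follows. This is an illustrative administration fragment, assuming a Solaris system and placeholder device paths, not a tested recovery procedure:

```shell
# Illustrative only: record and compare pool device labels before an import.
# Device paths are placeholders; run as root on the system hosting the pool.

# 1. Save the label (including device IDs and GUIDs) for each pool device.
zdb -l /dev/dsk/c3t2d0s0 > /var/tmp/c3t2d0.label

# 2. Later, before importing on changed or moved hardware, dump the label
#    again and compare it against the saved copy.
zdb -l /dev/dsk/c3t2d0s0 > /var/tmp/c3t2d0.label.now
diff /var/tmp/c3t2d0.label /var/tmp/c3t2d0.label.now

# 3. If the device names changed, a symbolic link can stand in for the old
#    name so the pool can be imported (the recovery step described above).
# ln -s /dev/dsk/c5t2d0s0 /dev/dsk/c3t2d0s0
zpool import tank
```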


Recommended ZFS Configurations and Practices

We recently celebrated 10 years of ZFS in Oracle Solaris, so I've spent many years working with people internally and externally to ensure optimal ZFS configurations and practices. You can review recommended ZFS configurations and practices in the ZFS Best Practices section of the docs.

ZFS brings modern simplicity and redundancy to the enterprise datacenter, where previously we wrestled with file systems and devices. I used to whine like the exasperated comedienne of data storage, asking incredulously: Is it a disk...a slice...a partition? Do I need a volume...submirror...plex, or WHAT? VTOC...SMI...or EFI??? Not to mention the 17 steps to create a disk slice. ZFS reduces those complications. Today, I compare modern ZFS to the past and also provide a review of current ZFS best practices:

- ZFS is both a file system and a volume manager, but with one simple interface, so you don't have to learn separate products just to manage your data. If you are transitioning your systems running Solaris 10 to Solaris 11, your task is simplified because ZFS is basically the same on both releases.
- ZFS supports EFI-labeled disks for both root pools and non-root (data) pools, which means whole disks can be used for your pools and file systems. This means never having to say you're sorry, or translated another way: never having to use the format utility again.
- Redundant ZFS storage pools are created in one step; no special formatting, slices, or partitions are required. We really are living happily ever after.
- Redundant ZFS storage pools are highly recommended and should be part of every ZFS configuration. See the best practices for tips on creating a recommended ZFS storage configuration. Even with redundant ZFS storage pools, you should always have spares and a recent backup, because disasters happen. Rather than including a spare on a ZFS root pool, you can easily create a 2-, 3-, or 4-way (or more) mirror, depending on your paranoia level.
- In the past, if you were using a traditional volume management product, you could break a mirror, patch the system, and if all went well, re-attach the mirror and let it sync. Today, we update a Solaris boot environment (based on a ZFS snapshot), so if anything goes wrong during the update, you can easily roll back. No need exists to detach disks to do a software update. Put another way: no need exists to detach disks and compromise your redundancy. Caution: If you detach a disk from a mirrored ZFS root pool, the detached disk is no longer bootable.
- ZFS file systems are created from a ZFS storage pool, which is a collection of disks, so all file systems consume space in the pool. Disk space consumed by ZFS file systems can be managed with quotas and reservations. If the file systems need more space, just add more disks to the pool. No one-to-one correspondence exists between a ZFS file system and a disk slice unless you want to simulate the traditional approach. That configuration is more granular, is harder to manage if a file system fills up, and is not recommended.
- We saw a lot of hard lessons in the early days of ZFS because of poor configurations or failing hardware, and that motivated us to provide data recovery tools. The best lesson is to always host important data on recommended ZFS storage pool configurations that include redundancy, and to always have regular backups from the onset. You can choose between existing enterprise backup tools or something like Oracle Secure Backup. Learn more about this low-cost backup solution here.
- Selecting hardware for hosting critical data should match the value of the data. Disks are relatively cheap compared to the value of your data, so I don't consider that mirrored configurations double the disk space cost. This is the cost of keeping your data available, and if you consider the cost of recovery (in time and money), the mirroring decision should be a snap.
- Expensive hardware arrays are not immune to bugs or failure, so consider configuring the array in JBOD mode and presenting individual disks to ZFS so that they can be mirrored.

Should your ZFS pool encounter corruption due to hardware issues, see the ZFS Pool Recovery section in the ZFS Admin Guide for suggested actions for handling situations such as individual file corruption or pool metadata corruption. Testing previous pool states to see if a good state can be found is often the first step. For more complex corruption situations, please open a Service Request through My Oracle Support to speak with a ZFS specialist.

Life is complicated enough. Your data storage doesn't have to be.
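The one-step redundant pool creation and the N-way mirror described above look roughly like this in practice. A sketch with placeholder device and dataset names, not output from a real system:

```shell
# Placeholder device and dataset names throughout; adjust for your system.

# Create a redundant (mirrored) pool in one step -- no slices or
# partitions required when whole disks are used.
zpool create tank mirror c3t0d0 c4t0d0

# Deepen an existing mirror by attaching another disk: each attach
# adds one more way to the mirror (2-way becomes 3-way, and so on).
zpool attach tank c3t0d0 c5t0d0

# Manage space with quotas and reservations instead of disk slices.
zfs create tank/home
zfs set quota=100g tank/home
zfs set reservation=20g tank/home
```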


Happy 10th Birthday, ZFS!

One Smart 10 Year Old (tagline shamelessly stolen from another great story)

Ten years ago today (October 31, 2005), the ZFS file system integrated into what would become Oracle Solaris 11. Along with many other system admin types, I believe ZFS is one of the very best things that has happened to UNIX administration, ever. I have nothing but gratitude and respect for the original ZFS developers and the teams that continue to evolve and create products around ZFS. Many, many people contributed to the success of ZFS in the early days and its continued evolution. Many have moved on, but many from the original ZFS engineering team are still here at Oracle, and that team has expanded to the largest ZFS engineering effort on the planet. The ZFS product development ecosystem supports a wide range of Oracle's data management solutions for many different appliances, engineered systems, and Oracle Cloud products that run the world's businesses efficiently and securely. This story is a whole other blog topic.

ZFS is a strong foundation for the Oracle Solaris OS:

- Hierarchical checksums and data redundancy automatically protect data
- ZFS snapshots and cloning provide Solaris boot environment provisioning and fast recovery
- Flexible data sharing over NFS, FC, iSCSI, block or object storage
- File system encryption secures critical data at every level
- ZFS supports secure live migration of VMs

Below I gathered the primary ZFS features that have integrated since 2005. This long list does not include features that are in progress for a future Oracle Solaris release but are available as part of our Solaris 12 platinum beta program. Nor does this list include separate ZFS Storage Appliance features and the whole suite of products that have evolved from that effort.

Data Management

General
- Expanded online help for commands (zfs help and zpool help)
- File system monitoring tool (fsstat)

File System Creation
- Create intermediate file systems (zfs create -p)
- Set ZFS file system properties at pool creation time
- Roll back a file system without unmounting

Data/File System Services
- Display all file system information (zfs get all)
- canmount property
- xattr property
- User-defined properties

Data Sharing, Interoperability, and Migration
- Case insensitivity
- Enhanced sharing syntax
- SMB, SMB 2.0, SMB 2.1
- Shadow migration
- iSCSI integration/support

File System Accounting and Storage Reduction
- User/group quotas
- Default user/group quotas
- ZFS quotas and reservations for file system data only (refquota and refreservation)
- Disk-space accounting properties (usedbychildren, usedbydataset, usedbyrefreservation, usedbysnapshots)
- LZJB, GZIP, and LZ4 compression
- Deduplication

Data Replication and Send Stream Management
- Ditto blocks
- Recursive ZFS snapshots
- Recursively renaming snapshots
- Identify and recursively identify ZFS snapshot differences (zfs diff)
- Received properties
- Incremental and cumulative snapshots
- List snapshots (listsnapshots, listsnap) property
- Clone promotion and improved clone destruction
- Improved snapshot destruction
- Send holds

Data Security and Protection
- Compact NFSv4 ACL format
- Data encryption
- Delegated administration
- Multilevel datasets
- System attribute support
- ACL improvements
- aclinherit passthrough; aclmode reintroduced with new properties
- Trivial ACL improvements

Performance
- L2ARC
- primarycache/secondarycache properties
- Persistent L2ARC
- logbias property
- sync property
- New scheduling class SDC (pool-name process)
- Concurrent metaslab syncing
- Improved block picking
- reARC
- New prefetch algorithm
- Better performance near file system quota
- ZIO join

Future pending
- Zero copy aggregation
- ZIL train
- VM2 / ARC better memory management
- NUMA reader/writer lock support

Pool and Device Management

Pool and File System Interoperability
- File system upgrade (zfs upgrade)
- Pool version (zpool upgrade)

Pool/Device Fault Handling
- Integration with FMA
- Clear pool errors (zpool clear)
- ZFS hotplug enhancements (REMOVED)
- Pool resilver and scrub in-progress reporting
- Pool resilver and scrub completion reporting
- zpool status interval and count

Pool Support
- Pool history (zpool history)
- RAIDZ, RAIDZ2, and RAIDZ3
- Recover destroyed pools (zpool import -D)
- cachefile and failmode properties
- Pool recovery (zpool import and zpool clear -F (rewind))
- zpool list changes (AVAIL and USED)
- Split a mirrored pool (zpool split)
- Automatic pool expansion (autoexpand)
- Pool/file system monitoring (zpool monitor)
- New resilvering algorithm

Device Handling
- Import with missing log device
- Import read-only
- Log removal
- Identify devices by physical locations (zpool status/zpool iostat -l)
- Spare support
- Improved spare handling and checking

OS Level Support (ZFS provides underlying support for many of these great features)
- Root pool support/integration
- BE management
- Solaris 10 Live Upgrade support
- Unified Archives for system cloning and recovery

Solaris Virtualization (ZFS provides underlying support for many of these great features)
- ZOSS
- ZOSS over NFS
- Immutable zones
- Secure live migration
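Two of the send stream management features in the list above, recursive snapshots and zfs diff, can be sketched briefly. Placeholder pool and dataset names; an illustrative fragment, not output from a real system:

```shell
# Placeholder pool/dataset names; a sketch of two features from the list.

# Recursive ZFS snapshots: snapshot a file system and all of its
# descendants atomically with one command.
zfs snapshot -r tank/home@monday

# ... files change during the week ...
zfs snapshot -r tank/home@friday

# zfs diff: identify what changed between two snapshots.
# Output prefixes: M (modified), + (added), - (removed), R (renamed).
zfs diff tank/home@monday tank/home@friday
```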


Today is About Gratitude...the Oracle Solaris Brand is Strong

Last week at Oracle Open World, we launched Oracle Solaris 11.3, the most advanced operating system. ZFS is a solid foundation for Oracle Solaris on many levels, for which many of us are grateful. More about this tomorrow. Today is about gratitude to our customers, who joined us at another Oracle Open World, where they shared their great Oracle Solaris on SPARC and x86 technology stories. They run the world's businesses on our products--securely, efficiently, and in many cases, with staggering performance on our new and existing hardware. When I go to a show, I don't want to hear what vendors are saying. To me, the best part of a show is hearing what products the world's businesses are using and why. The most satisfying part of my job is working with our customers. Who wouldn't be inspired? From around the world, they run automotive, banking, communication, e-commerce, financial, health care, insurance, and provide internet and media services on our products. This is why I come to work every day. Our customers, along with the Oracle Solaris engineering team, create a solid partnership for building great products. I want to personally thank Robert, Chris, Fritz, Thorsten, Marcus, Krishna, Justin, Taki, Mehmet, and Ozal for publicly sharing their Oracle Solaris stories and use cases. They inspire us, and I hope, we inspire them. Other, recent inspiring Oracle Solaris customer success stories are available here: https://www.oracle.com/search/customers/browse/_/N-1z140qr


Reducing Storage Costs with ZFS Compression

This blog describes how to use ZFS compression as a simple and flexible way to reduce the power, cooling, and floor space costs that are associated with storing data. Oracle Solaris 11.3 includes lz4 compression. You can display the supported compression algorithms on your system like this:

# zfs help compression
compression (property)
Editable, Inheritable
Accepted values: on | off | lzjb | gzip | gzip-[1-9] | zle | lz4

When ZFS compression is enabled on a file system, all newly written data is compressed. If you are storing new, compressible data, the syntax is simple:

# zfs create -o compression=on tank/data

ZFS file system properties are flexible because you can set them on an individual file system or an entire pool. For example:

# zfs set compression=on pond

The above syntax means that all data newly written to this pool is compressed. If you have compressible data but are concerned about the impact on system resources, consider trying lz4 compression. For example:

# zfs create -o compression=lz4 tank/cdata
cannot create 'tank/cdata': pool must be upgraded to set this property or value

This error means that the pool needs to be upgraded to pool version 37 to support lz4 compression.

# zpool upgrade tank
This system is currently running ZFS pool version 37.
Successfully upgraded 'tank' from version 35 to version 37

# zfs create -o compression=lz4 tank/cdata

*************************

More clarification added around root pool behavior: Currently, neither gzip nor lz4 compression is supported on root pools. If you attempt to set either gzip or lz4 compression on the current BE file system, you see an error similar to the following:

# zfs set compression=lz4 rpool/ROOT/s11u3
cannot set property for 'rpool/ROOT/s11u3': property setting is not allowed on bootable datasets

If you set either gzip or lz4 compression on a root pool, no error occurs. (I filed a bug about this.) If you then attempt to pkg update the current BE, the operation eventually fails with a message similar to this:

BootmgmtError: operation not supported on this type of pool

Use the following steps to recover:

1. Set default compression (lzjb) on the root pool:

# zfs set compression=on rpool

2. Remove the BE that was created during the (failed) pkg update process:

# beadm destroy s11u3-1

*************************

In summary, ZFS compression is simple, flexible, and can reduce your storage footprint. Solaris customers report compression ratios in the 2X to 15X range, depending on their workloads, which can translate into huge cost savings. Give it a try.
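As a rough illustration of how those compression ratios translate into savings, here is a small arithmetic sketch. The 1024MB data size and the 2.15x ratio are made-up example values, not measurements:

```shell
#!/bin/sh
# Sketch: estimate space saved for a given compression ratio.
# All numbers are hypothetical; integer math, with the ratio scaled by 100.
logical_mb=1024      # logical (uncompressed) data size in MB
ratio_x100=215       # a compressratio of 2.15x, scaled by 100
physical_mb=$(( logical_mb * 100 / ratio_x100 ))
saved_mb=$(( logical_mb - physical_mb ))
echo "physical=${physical_mb}MB saved=${saved_mb}MB"
```

On a live system, the actual ratio is reported by the compressratio property (zfs get compressratio dataset), so the same arithmetic applies to real numbers.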


Welcome to Oracle Solaris 11.3 ZFS

This blog introduces Oracle Solaris 11.3, our enterprise cloud infrastructure environment that provides many new features that you can read about here. ZFS provides a secure, simple, and efficient foundation for supporting the Oracle Solaris infrastructure, so I'm excited about the new ZFS features that are included in the Solaris 11.3 Beta release. Here are a few highlights:

Efficient and Flexible

LZ4 compression: Enabling this compression algorithm generally achieves a higher compression ratio with better system performance. If your system is well resourced, then enabling the default ZFS (lzjb) compression or LZ4 (lz4) compression should be imperceptible, other than saving heaps of disk space. If you haven't enabled ZFS compression because you are concerned about reduced system resources, then give LZ4 compression a try.

Default user and group quotas: Automatically allocate and constrain disk space in large sandbox environments with default user or group quotas.

Simplified Management

Monitoring file system/pool operations: You can use the new zpool monitor command to monitor ZFS pool or file system operations, such as zfs send and receive operations.

Automatic spare detection: ZFS determines whether an unused spare is failing or has failed, and FMA generates a fault report.

Performance and Scalability

Roch B. blogs about these features in detail, but here are some highlights that illustrate efficiency in memory operation and block allocation:

•    Hot data stays cached in memory after the system is rebooted. If the data is already compressed, it remains compressed in memory. This is persistent L2ARC.
•    The ZFS ARC is redesigned for large memory systems.
•    ZFS I/O aggregation uses one less copy.
•    Recommended pool capacity can reach 90% and above (depending on workloads) due to improved block allocation.
•    SMB 2.1 is built for high-speed networks: SMB requests scale to 1MB rather than 64KB, and clients get the performance benefit of not losing local caching when the same file is opened frequently.
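The default quota and monitoring features above can be sketched with commands like the following. The dataset and pool names are hypothetical, and the exact flags may differ slightly on your release, so check the zfs(1M) and zpool(1M) man pages:

```shell
# Hypothetical names; Solaris 11.3 syntax, requires ZFS.
zfs set defaultuserquota=30g tank/sandbox     # cap each user at 30GB by default
zfs set defaultgroupquota=120g tank/sandbox   # cap each group at 120GB by default
zpool monitor -t send tank 5                  # report zfs send progress every 5 seconds
```

A per-user quota set explicitly (userquota@name) still overrides the default, which is what makes the defaults practical for large sandbox environments.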


ZFS Across the Data Center

Previously, I described ZFS as the foundation of the Solaris OS because it is completely integrated in the core Solaris OS, providing many key functions:

Integrated Recovery and Provisioning
•    ZFS boot environments for quick failsafe recovery
•    ZFS snapshots and cloning for rapid provisioning
•    Automated system recovery and cloning

Integrated VM Scalability
•    Reduce VM storage footprints with ZFS compression
•    Share VM data over NFS, SMB, iSCSI, or FC
•    VM hosting on block or object storage

Integrated Data Security
•    Hierarchical checksums and redundant data
•    Data encryption throughout the stack
•    VM file systems can be locked down with read-only ZFS file systems

ZFS File System Compatibility

ZFS is supported on both Solaris 11 and Solaris 10 systems and across Oracle Solaris Zones and LDOMs. This means you can reduce the number of different file systems that you manage across the same hardware, whether running on bare metal, virtualization, or both. UFS or other file system data can easily be migrated across systems by using ZFS's shadow migration feature. The time and energy needed to manage file systems is reduced when you are managing the same, simple file system across your data center. File system management compatibility saves both time and money.

Complete Data Storage Solution

Oracle's ZFS Storage Appliance is based on both the ZFS file system and the Solaris OS. This world-class storage appliance provides robust data protection and all the ZFS features that you know and love in an easily managed box, with the best price performance in the industry. You can boot zones, LDOMs, or VMware images on this appliance, as well as quickly diagnose any problems with DTrace analytics. You can easily deploy and boot 10K+ virtual images on a ZS3-2 cluster. If you haven't had a chance to take a ZFS Storage Appliance for a spin, you should, and not just because it is a great storage solution at a great value.

Standardizing on ZFS across your data center simplifies and reduces data storage management. This week in San Francisco, the Oracle ZFS Storage Appliance team is announcing exciting new VMware API integrations at VMworld. For more information, stop by VMworld 2014, Booth #205.
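The shadow migration feature mentioned above can be sketched like this. The paths and dataset names are made-up examples, and the commands require a Solaris system with the shadow-migration package installed:

```shell
# Sketch: migrate data from an old file system into ZFS with shadow migration.
# Hypothetical paths; the source must be mounted read-only first.
zfs create -o shadow=file:///export/old-data tank/new-data
shadowstat    # observe migration progress until the dataset reports completion
```

Clients can access tank/new-data immediately; data is pulled over from the old location in the background and on demand as files are read.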


Archiving ZFS Root Pool Images in Solaris 11.2

Solaris 11.2 is generally available (GA) and we will be talking about many great new features for months to come. In this blog, I highlight our new root pool cloning and recovery tool called Unified Archives. Previously, you had to create ZFS root pool snapshots manually and store them for root pool recovery. Performing all the steps is rather ponderous, particularly when your #1 goal is to get the system back up and running quickly. Now we have a tool that creates a deployable root pool image automatically, and it is integrated with both the Automated Installer and our virtualization features. A Unified Archive is more than a system recovery tool because it can be used to clone and deploy OpenStack images on a system that runs Solaris 11.2. You can also use a Unified Archive image to transform between a zone, a kernel zone, and bare metal. Make sure you check it out by reviewing Jesse Butler's blog. At a minimum, use it to keep a root pool image for recovery purposes. Or, see how you can use it to transform between virtual and bare-metal environments. Rolling out a new Solaris release is a good time to talk about new features, particularly one that allows you to more easily recover a root pool. Creating a root pool recovery image is a great best practice in Solaris 11.2, and this is an excellent time to review other ZFS best practices that are available here.
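As a minimal sketch, creating and inspecting a recovery archive with archiveadm looks roughly like this. The archive path is a made-up example, and the commands require a privileged user on Solaris 11.2 or later:

```shell
# Sketch: create a recovery archive of the running system (Solaris 11.2+).
# Hypothetical output path on a shared network location.
archiveadm create --recovery /net/server/archives/host1.uar
archiveadm info /net/server/archives/host1.uar    # inspect the archive contents
```

The resulting .uar file can then be fed to the Automated Installer to redeploy the system, or used to instantiate a zone or kernel zone from the same image.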


Welcome to Oracle Solaris 11.2

This is a new blog about ZFS: I've been involved in Solaris system administration for many years and I have played a variety of roles. For the last 10 years, I've been a passionate evangelist for ZFS. I will be blogging about ZFS features in Oracle Solaris and in the larger Oracle ecosystem. This blog entry is about our announcement of the past week: those of us in Solaris Land are very excited about introducing Solaris 11.2 with integrated OpenStack features. This quote from Sean Michael Kerner at eWeek sums up our strategy: "OpenStack isn't just an overlay in Solaris; it is Solaris." (See more at: http://www.eweek.com/cloud/solaris-11.2-the-future-of-openstack.html) Because cloud, life-cycle, networking, security, storage, and virtualization features are deeply integrated into the base OS, Solaris is more than an OS; it's a complete cloud platform. The eWeek quote is similar to what I say about ZFS in Solaris. ZFS isn't just bolted on as an afterthought: "ZFS is the foundation of Solaris." More about this in another blog entry. For today, let's review my top 10 list of great features that ZFS brings to OpenStack.
Top 10 Reasons Why ZFS is Best for OpenStack

1. Reduce administration time so you can focus on your business
2. Robust data redundancy without whitebox sprawl
3. The economics of a low-cost ZFS storage solution means decision makers will love it
4. Built-in compression means using less storage, not more
5. ZFS encryption keeps client data safe in an unsafe world
6. Hardware acceleration means crazy fast ZFS encryption performance on SPARC
7. Support for a variety of protocols means data storage your way
8. Reduce tenant privileges by delegating only snapshot/cloning privileges in multitenant environments
9. Rapidly deploy VMs in development, test, and production with ZFS snapshots and cloning
10. Rapidly recover VMs in development, test, and production with ZFS rollback

For more information about giving Solaris 11.2 a spin, see the Solaris 11.2 Beta page.
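The delegation point in the list above can be sketched with zfs allow. The user and dataset names are hypothetical; the command requires a ZFS system with delegated administration enabled:

```shell
# Sketch: grant a tenant only snapshot/clone rights on their VM dataset.
# Hypothetical user (tenant1) and dataset (tank/vms) names.
zfs allow tenant1 snapshot,clone,mount tank/vms
zfs allow tank/vms    # verify the delegated permissions on the dataset
```

With only these permissions delegated, the tenant can snapshot, clone, and mount datasets under tank/vms but cannot destroy pools, change properties, or touch other tenants' data.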
