A Brilliant Argument for ZFS in Cloud Storage Environments
By marchamilton on May 18, 2010
Without actually mentioning ZFS, Henry's analysis points out exactly why ZFS's innovative approach to data integrity is required in multi-petabyte storage clouds. The key feature enabling data integrity in ZFS is the 256-bit checksum that protects your data. That checksum is what allows ZFS's self-healing feature to automatically detect and repair corrupted data. ZFS is not new: it was introduced years ago with Solaris 10, and many, many petabytes of mission-critical data are protected by ZFS today at thousands of companies around the world. When ZFS was first advertised as a future-proof file system, most people were not even dreaming about clouds, but the ZFS designers were certainly thinking about multi-petabyte file systems; that is why they created ZFS with mind-boggling 128-bit scalability.
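To see why the checksum matters, here is a toy sketch of the idea in Python. This is emphatically not real ZFS code, just an illustration of the principle: each block carries a checksum stored separately from the data, a read verifies every mirror copy against that checksum, and a copy that fails verification is silently rewritten from a good one. The `MirroredBlock` class and its methods are invented for this example.

```python
import hashlib

def checksum(data: bytes) -> str:
    """256-bit checksum of a block, analogous to ZFS's sha256 option."""
    return hashlib.sha256(data).hexdigest()

class MirroredBlock:
    """Toy two-way mirror with checksum verification and self-healing on read."""

    def __init__(self, data: bytes):
        self.copies = [bytearray(data), bytearray(data)]  # two mirror copies
        self.stored_checksum = checksum(data)  # kept apart from the data itself

    def corrupt(self, copy: int, offset: int, value: int) -> None:
        """Simulate silent bit rot in one copy."""
        self.copies[copy][offset] = value

    def read(self) -> bytes:
        # Find a copy whose contents still match the stored checksum.
        good = None
        for c in self.copies:
            if checksum(bytes(c)) == self.stored_checksum:
                good = bytes(c)
                break
        if good is None:
            raise IOError("all copies corrupt: unrecoverable")
        # Self-heal: rewrite any copy that failed verification.
        for i, c in enumerate(self.copies):
            if checksum(bytes(c)) != self.stored_checksum:
                self.copies[i] = bytearray(good)
        return good

blk = MirroredBlock(b"mission critical data")
blk.corrupt(0, 3, 0x00)  # flip a byte in the first copy
data = blk.read()        # read detects the bad copy and repairs it
```

The point of the sketch is the one Henry's analysis leads to: without the checksum, the read above would have no way to know which mirror copy was the corrupted one, and simple replication could happily serve (or re-replicate) bad data.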
So thank you, Henry, for pointing out the very real limitations of simple geographic replication in large cloud storage environments. If you don't have time to read Henry's brilliant analysis, just ask your cloud storage provider or your own internal IT staff whether they are protecting your storage with ZFS. If they ask why, tell them to go ask Henry Newman about it.