Open Storage - The (R)Evolution

Why Pay More for Less?

Do you pay incredibly high license and maintenance fees for your Network Attached Storage? Are you locked into a vendor with proprietary operating systems and protocols? Do you ask yourself why you should pay just to use NFS, CIFS or NDMP, which have been standards for years?

You might answer all of the above questions with a big and bold YES. If that is the case, keep reading this blog entry and you will see that there is another WAY or PERSPECTIVE for going into the next decade of Open, Reliable and Fairly Priced Storage Solutions!

You will recognize that there is only one vendor that fulfills all of the following criteria:
  • Open Source Software and Operating System Stack
  • No proprietary hardware and drivers
  • 128-bit, transaction-oriented file system
  • Fairly priced SAS (Serial Attached SCSI) connectivity
  • Hybrid Storage Concept
  • Usage of Solid State Technology to increase performance
And this vendor is Sun Microsystems!

I am now finished with the marketing part. Let's see how Sun Microsystems can help you optimize your Storage and Data Services!

The Open Storage Concept

As a general term, open storage refers to storage systems built with an open architecture, in which customers can select the best hardware and software components to meet their requirements. For example, a customer who needs network file services can use an open storage filer built from a standard x86 server, disk drives, and OpenSolaris technology at a fraction of the cost of a proprietary NAS appliance.
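To make this a bit more concrete, here is a minimal sketch of what such a self-built filer could look like on OpenSolaris. The pool, dataset and device names (tank, c1t0d0, ...) are hypothetical placeholders for illustration; the commands themselves are the standard zpool/zfs administration interface.

    # Create a mirrored storage pool from two plain disks
    # (device names are hypothetical examples)
    zpool create tank mirror c1t0d0 c1t1d0

    # Create a file system and export it over NFS -- sharing is
    # just a ZFS property, no extra software license needed
    zfs create tank/home
    zfs set sharenfs=on tank/home

    # Optional: enable on-disk compression
    zfs set compression=on tank/home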

Almost all modern disk arrays and NAS appliances are closed systems. All the components of a closed system must come from that specific vendor. Therefore you are locked into buying drives, controllers and proprietary software features at premium prices, and you typically cannot add your own drivers or software to improve the product's functionality.

The Open Storage Software

OpenSolaris is the cornerstone of Sun's Open Storage offerings and provides a solid foundation as an open storage platform. The origin of the OpenSolaris technology, the Solaris OS, has been in continuous production since September 1991. OpenSolaris offers the most complete open-source storage software stack in the industry. Below is a list of current and planned offerings; a short command sketch after these lists shows how some of the layers fit together:

At the storage protocol layer, OpenSolaris technology provides:
  • SCSI
  • iSCSI
  • iSNS
  • FC
  • FCoE
  • InfiniBand software
  • RDMA
  • OSD
  • SES
  • SAS
At the storage presentation layer, OpenSolaris technology offers:
  • Solaris ZFS
  • UFS
  • SVM
  • NFS
  • Parallel NFS
  • CIFS
  • MPxIO
  • Shared QFS
  • FUSE
At the storage application layer, OpenSolaris technology offers:
  • MySQL
  • Postgres
  • BerkeleyDB
  • AVS
  • SAM-FS
  • Amanda
  • Filebench
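As a small illustration of how these layers combine, the sketch below exports a ZFS volume as an iSCSI LUN and a file system over CIFS. Pool and dataset names are made up, and the shareiscsi property refers to the iSCSI target daemon available in OpenSolaris builds of that era (later builds use the COMSTAR framework instead).

    # Presentation layer: create a 50 GB ZFS volume inside an existing pool
    zfs create -V 50g tank/iscsivol

    # Protocol layer: expose the volume as an iSCSI target
    # (shareiscsi is the legacy iSCSI target property of that era)
    zfs set shareiscsi=on tank/iscsivol

    # Expose a file system to Windows clients via the in-kernel CIFS server
    # (requires the smb/server service to be enabled)
    zfs create tank/shares
    zfs set sharesmb=on tank/shares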

Solaris ZFS

One of the key cornerstones of Sun's open storage platform is the Solaris ZFS file system. Solaris ZFS can address 256 quadrillion zettabytes of storage and handle a maximum file size of 16 exabytes. Several storage services are included in ZFS:
  • Snapshots
  • Point-in-time copy
  • Volume management (no need for additional volume managers!)
  • Command line and GUI oriented file system management
  • Data integrity features based on copy-on-write and RAID
  • Hybrid Storage Model
Vendors of closed storage appliances typically charge customers extra software licensing fees for data management services such as administration, replication, and volume management. The Solaris OS with Solaris ZFS moves this functionality into the operating system, simplifying storage management and eliminating layers in the storage stack. In doing this, Solaris ZFS changes the economics of storage. A closed and expensive storage system can now be replaced by a storage server running Solaris ZFS, or a server running Solaris ZFS attached to a JBOD.
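For example, the snapshot, rollback and clone services that closed appliances typically license separately are simply built-in ZFS commands. The dataset names below are hypothetical; the commands are the standard zfs interface:

    # Take an instant, space-efficient snapshot of a file system
    zfs snapshot tank/home@before_upgrade

    # List snapshots and roll back if something went wrong
    zfs list -t snapshot
    zfs rollback tank/home@before_upgrade

    # Create a writable point-in-time copy (clone) of the snapshot
    zfs clone tank/home@before_upgrade tank/home_test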

Solaris ZFS recently won InfoWorld’s 2008 Technology of the Year award for best file system. In the InfoWorld evaluation, the reviewer stated, “Soon after I started working with ZFS (Zettabyte File System), one thing became clear: The file system of the next 10 years will either be ZFS or something extremely similar.”

ZFS Hybrid Storage Model

ZFS storage pools are extremely flexible in terms of placing data on the optimal storage devices. You can basically split a storage pool into three different sections:
  1. High-Performance Read & Write Cache Pool
     The high-performance read & write cache pool combines the system's main memory with SSDs for read caching. As you can imagine, SSDs (Solid State Disks) have a big advantage compared to RAM and traditional disks: they are NOT volatile like RAM, and they are much faster than traditional disks. Therefore you don't need to load the data into memory first to get fast access! Typically, less than 10-20% of a file system is really used often or needs high performance. Imagine that exactly this part is stored on SSD technology. The result is a crazy fast file system ;-) You can read more about how ZFS technically does this in another blog entry soon.

  2. ZFS Intent Log (ZIL) Pool
     All file-system-related system calls are logged as transaction records by the ZIL. The transaction records contain sufficient information to replay them in the event of a system crash.

     ZFS operations are always part of a DMU (Data Management Unit) transaction. When a DMU transaction is opened, a ZIL transaction is opened as well. This ZIL transaction is associated with the DMU transaction and is in most cases discarded when the DMU transaction commits. These transactions accumulate in memory until an fsync or O_DSYNC write happens, in which case they are committed to stable storage. For committed DMU transactions, the ZIL transactions are discarded (from memory or stable storage).

     As you will have figured out by now, the ZIL is critical for the performance of synchronous writes. A common application that issues synchronous writes is a database, which means that all of these writes run at the speed of the ZIL. The ZIL is already quite optimized, and ongoing efforts will optimize this code path even further. Using solid state disks for the log makes it screaming fast!

  3. High-Capacity Pool
     The biggest advantage of traditional HDDs is their price per capacity and density, which is still unbeaten for online storage today. By combining different technologies within one file system, you can choose SATA technology for the high-capacity pool without losing performance from an overall perspective. The ZFS pool manager automatically stripes across any number of high-capacity HDDs, and the ZFS I/O scheduler bundles disk I/O to optimize arm movement and sector allocation.
Again, I will post more details about ZFS and the Hybrid Storage Concept in another blog entry soon; a small command sketch below shows how such a hybrid pool could be put together.
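This is only a minimal sketch assuming hypothetical device names (c2t0d0, c3t0d0, ...): SATA disks provide the capacity pool, one SSD acts as a second-level read cache (L2ARC) and another SSD as a separate ZIL (log) device for fast synchronous writes.

    # High-capacity pool built from mirrored SATA disks
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

    # Add an SSD as a second-level read cache (L2ARC)
    zpool add tank cache c3t0d0

    # Add an SSD as a separate intent log (ZIL) device
    zpool add tank log c3t1d0

    # Verify the resulting pool layout
    zpool status tank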

Solaris DTrace

Solaris DTrace provides an advanced tracing framework and language that enables users to ask arbitrary diagnostic questions of the storage subsystem, such as “Which user is generating which I/O load?” and “Is the storage subsystem data block size optimized for the application that is using it?” These queries place minimal load on the system and can be used to resolve support issues and increase system efficiency with very little analytical effort.
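As a small illustration, a classic io-provider one-liner like the following aggregates the bytes issued per process and user, which is essentially the "who is generating which I/O load" question; the exact probes and fields available depend on the Solaris release you run.

    # Sum I/O bytes per process and user until Ctrl-C is pressed
    dtrace -n 'io:::start { @bytes[execname, uid] = sum(args[0]->b_bcount); }'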

Solaris FMA - Fault Management Architecture

Solaris Fault Management Architecture provides automatic monitoring and diagnosis of I/O subsystems and hardware faults and facilitates a simpler and more effective end-to-end experience for system administrators, reducing cost of ownership. This is achieved by isolating and disabling faulty components and then continuing the provision of service through reconfiguration of redundant paths to data, even before an administrator knows there is a problem. The Solaris OS’ reconfiguration agents are integrated with other Solaris OS features such as Solaris Zones and Solaris Resource Manager, which provide a consistent administrative experience and are transparent to applications.
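For a quick look at what the fault manager has diagnosed, the fmadm and fmdump utilities are the administrative front ends to this framework; a simple check could look like the sketch below (the output will of course differ from system to system).

    # Show components the fault manager has diagnosed as faulty
    fmadm faulty

    # Review the fault management log with verbose details
    fmdump -v

    # List the diagnosis engines and modules currently loaded
    fmadm config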

Sun StorageTek Availability Suite

Sun StorageTek Availability Suite software delivers open-source remote-mirror-copy and point-in-time-copy applications as well as a collection of supporting software and utilities. The remote-mirror-copy and point-in-time-copy software enable volumes and/or their snapshots to be replicated between physically separated servers. Replicated volumes can be used for tape and disk backup, off-host data processing, disaster recovery solutions, content distribution, and other volume-based processing tasks.

Lustre File System

Lustre is Sun’s open-source shared disk file system that is generally used for large-scale cluster computing. The Lustre file system is currently used in 15 percent of the top 500 supercomputers in the world, and six of the top 10 supercomputers. Lustre currently supports tens of thousands of nodes, petabytes of data, and billions of files. Development is underway to support one million nodes and trillions of files.

Conclusion

Today’s digital data, Internet applications, and emerging IT markets require new storage architectures that are more open and flexible, and that offer better IT economics. Open storage leverages industry-standard components and open software to build highly scalable, reliable, and affordable enterprise storage systems.

Open storage architectures are already competing with traditional storage architectures in the IT market, especially in Web 2.0 deployments and increasingly in other, more traditional storage markets. Open storage architectures won’t completely replace closed architectures in the near term, but the storage architecture mix in IT datacenters will definitely change over time.

We estimate that open storage architectures will make up just under 12 percent of the market by 2011, fueled by the industry’s need for more scalable and economic storage.
Comments:

Thanks for your post, Anatol. Could you please explain more in detail about the 'self-built' storage systems you explained?

-cut-
"For example, a customer who needs network file services can usa an open storage filer built from a standard x86 server, disk drives, and OpenSolaris technology at fraction of the cost of a proprietary NAS appliance."
-cut-

The idea of setting up and running your own storage solution built upon x86 servers combined with an open system like OpenSolaris is tempting.

On the pro side, you would have full control over what's going on. But on the con side, you might have a lot more hassle when setting it all up yourself. Think of systems engineers: they do not want to have an additional server to manage, but to have centralized control over their infrastructure and storage systems. That's what I think, at least.

So what do you recommend in terms of using open storage systems? Are there tools to make life easier like the ones developed by the proprietary vendors?

Posted by Ansgar Wollnik on July 25, 2008 at 08:25 AM CEST #

Hi Ansgar,

I see your point. If you are a bit patient, Sun will give you the answer soon. Naturally, Sun won't just release parts or guides to build your own storage server. You can certainly build your own storage server with OpenSolaris, but that's not the way enterprises will go, as they expect full support from the vendor. Again, be patient, the answer to this question will be provided soon.

You can also find more information about OpenStorage on the following link:

http://www.sun.com/storage/openstorage/

Anatol

Posted by Anatol Studler on July 26, 2008 at 06:10 AM CEST #

