What is Storage Network Engineering?
By kgibson on Mar 31, 2005
In the 1990s, new interconnects like Fibre Channel (FC) were developed that solved some of these problems. Early FC allowed much higher expandability and performance, and it enabled some amount of pooling and central management of physical storage assets. The term Storage Area Network (SAN) was used to describe these limited storage networks. One problem, though, was that operating systems still operated under the direct-attached, storage-as-peripheral model. They still tried to scan every storage device visible to them, then assumed exclusive control of that storage. OSes also assumed that the configuration couldn't change unless the server was powered down, or that the number of devices was small enough that the system administrator could list them all in a configuration file. This forced administrators to partition their SAN through zoning, giving each OS only a small view of the SAN that resembled a small direct-attached configuration. Zoning prevented multiple servers from seeing the same storage and still allowed each server to claim exclusive ownership of its storage across the FC interconnect.
Today we have entered the age of true storage networks. Data centers now have storage networks with well over a thousand nodes. These are true networks on which the disk arrays and tape libraries have become Servers, providing block storage or tape archiving services. The Compute Servers are now Data Clients that discover and use these shared storage services across a network. The old direct-attached assumptions built into operating systems no longer apply. It would take hours to boot a compute server if the OS tried to scan the network for every visible device. You can't reboot the compute server every time the network is reconfigured or a new device is added. And you can't keep configuration files describing the physical address of every storage device. Imagine if your web browser required you to maintain a file with the Ethernet MAC addresses of every web server you visited.
In Storage Network Engineering we have designed a storage networking protocol stack into the Solaris OS, making it the best Data Client on the storage network. Storage is discovered and exposed to applications as needed, so boot times are fast. There is no editing of configuration files full of physical addresses. Multipathing for redundancy, routing through the network, and failover are built in. New storage can be detected dynamically, and path changes through the network are detected and adapted to without rebooting the Solaris Data Client.
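As a rough illustration of what this looks like from the administrator's chair, here is a sketch of a Solaris session that discovers newly attached FC storage without a reboot. The controller name (c2) and the exact output will of course vary with your hardware; this assumes a Solaris host with FC HBAs and the bundled cfgadm/luxadm tools.

```shell
# List attachment points; FC fabric devices show up as
# controller instances with type fc-fabric (c2 here is an example).
cfgadm -al

# Ask the FC stack to probe for devices visible on the fabric --
# no reboot, no hand-edited config file of physical addresses.
luxadm probe

# Bring the devices on the example controller online so the
# OS exposes them to applications.
cfgadm -c configure c2

# The new LUNs now appear alongside existing disks.
format
```

The point of the example is what is absent: there is no device list to maintain by hand, and rezoning the fabric or adding an array is handled by rediscovery rather than a reboot cycle.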
For a more thorough description of the Solaris storage network stack, see the Sun BluePrints document "Increasing Storage Area Network Productivity," located here: