Why you should avoid placing SSDs in traditional Arrays!

[Image: Performance_Car.png]

Some vendors are announcing SSDs for their traditional arrays in the midrange and high-end sector.

This is quite surprising to me, as it is comparable to placing an 8-cylinder bi-turbo engine with 450 HP into an entry-level car (I'll try to avoid naming any brands ;-)).

You might ask for an explanation. Here it is:

Traditional midrange arrays are developed to handle hundreds of traditional 15k RPM hard disk drives. A traditional hard disk is capable of doing about 250 IO/s. Now if we compare this with the enterprise-class solid state disks currently available on the market, a single solid state disk can do about 50k IO/s read or 12k IO/s write. So in fact it is roughly 200x faster than a 15k RPM hard disk on reads and about 50x faster on writes.

The controller of a midrange array system can probably do about 500k IO/s against its internal cache. So in fact, if we place about ten solid state disks into such a storage system, they alone would consume the complete power of the controller. And I haven't even started talking about RAID functionality!
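As a quick back-of-the-envelope check (using the rough figures from above, which are estimates rather than measurements), a few lines of shell arithmetic make the point:

    # one enterprise SSD compared to one 15k RPM disk
    echo $(( 50000 / 250 ))      # reads:  -> 200x
    echo $(( 12000 / 250 ))      # writes: -> 48x

    # how many SSDs saturate a controller that peaks at ~500k IO/s?
    echo $(( 500000 / 50000 ))   # -> 10 SSDs

In other words, a single small shelf of SSDs is enough to turn the array controller itself into the bottleneck.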

There is another major reason that makes such solutions ridiculous: the latency you add by using FC networks. While a traditional hard disk works with a latency of about 3,125us (roughly 3.1ms), an enterprise-class solid state disk works with a latency of less than 100us (0.1ms). The switches, cable lengths and array controllers of an FC fabric add a fixed delay to every IO. For a traditional disk drive that delay might cost you about 1 IO/s, because it is tiny compared to the milliseconds spent on seek and rotation. For an SSD with a latency of less than 100us, the same delay is in the same order of magnitude as the device's own service time, so it can cost you on the order of 10'000 IO/s of read performance.
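To see why the fixed fabric delay hurts the SSD so much more, here is a small sketch. The 100us of added delay for switches, cabling and array controllers is purely an assumption for illustration; the exact value will vary, but the asymmetry will not:

    # theoretical single-threaded IOPS = 1,000,000 / service time in us
    # the 100us fabric/controller delay is an assumed, illustrative value
    awk 'BEGIN {
        hdd = 3125; ssd = 100; fabric = 100
        printf "HDD: %.0f -> %.0f IOPS (lost ~%.0f)\n", 1e6/hdd, 1e6/(hdd+fabric), 1e6/hdd - 1e6/(hdd+fabric)
        printf "SSD: %.0f -> %.0f IOPS (lost ~%.0f)\n", 1e6/ssd, 1e6/(ssd+fabric), 1e6/ssd - 1e6/(ssd+fabric)
    }'

The hard disk barely notices the extra delay because it is dwarfed by seek and rotation time, while the SSD loses around half of its IO/s; with a faster SSD or more fabric hops the loss quickly reaches the numbers mentioned above.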

So, where do I place the SSD technology?

The answer is simple!

AS CLOSE AS YOU CAN TO THE SERVER!

The best protocol and technology today is SAS (Serial Attached SCSI). The only limitation of SAS is cable length, which is limited to about 8m, but there is no additional protocol overhead as with FC!

There are two ways to implement SAS-attached SSDs:
  1. Directly in a server, as most servers use SAS-attached internal hard disks anyway.
  2. Attached via a SAS JBOD (Just a Bunch Of Disks) if you need more disks than a single server can hold.
You might also ask how to implement SSD technology in the most cost-effective way.

That's where most vendors have to stop, as they have no solution or good answer.

Sun's ZFS is exactly the product that is capable of using all the benefits of SSDs in combination with the benefits of traditional storage (DENSITY). Combining the two technologies within one file system provides performance AND density under one umbrella. The magic word is Hybrid Storage Pool.

[Image: ZFS_HSP.png (ZFS Hybrid Storage Pool)]


While the slow part of the pool (density) remains on traditional Fibre Channel storage arrays, the performance-critical parts, the ZIL (ZFS Intent Log) and the L2ARC (Level 2 Adaptive Replacement Cache), go onto SSD technology.
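As a minimal sketch of such a hybrid pool (the pool name and the c#t#d# device names below are placeholders, not a recommendation for a specific layout):

    # Bulk capacity: a mirror of LUNs presented by the traditional FC array.
    # ZIL on a write-optimized SSD, L2ARC on a read-optimized SSD.
    zpool create tank \
        mirror c2t0d0 c2t1d0 \
        log c3t0d0 \
        cache c3t1d0

ZFS then keeps synchronous writes fast through the separate log device and serves hot random reads from the cache device, while the bulk of the data stays on the dense and cheap array storage.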
Comments:

I've found the write delays of SSDs to be somewhat mitigated by running a pair of SSDs in RAID 0.

I tested this in Dec '07 on an IBM HS21 with a pair of IBM 2.5" 16 GB SSD SAS disks. The write performance was greatly enhanced, and the random read performance was unbelievable. I no longer have the performance numbers, but random reads were faster than our 64-disk array (15k, 2Gb FC).

I am sure that the technology must have improved quite a bit since that time.

The best use I thought of was running swap on blade servers, and/or even SQL/FS journals (as you said for ZFS).

Posted by Jason Woods on February 11, 2009 at 04:21 PM CET #

I think you make the common mistake of comparing the latency of SSDs to HDDs. Most systems use a RAID cache instead of JBOD, and so the comparison should be between RAID and SSD JBOD. A RAID cache uses DRAM which has the lowest latency of them all, and even the entry level RAIDs use multiple storage processors to handle very high IOPS figures. Placing SSDs closer to the server in a DAS config creates a server overhead to synchronize, scrub, and re-mirror disks which is something that is best off-loaded to a RAID. So I think putting SSDs under the RAID cache is a good design.

Another point to consider is scalability. For high performance / high capacity systems a lot of 32GB SSDs will be required, and if you are using SAS then you will hit cable length limitations. Not so if you use FC and/or have the SSDs under the RAID cache.

And I guess we should consider your concern about the overhead of the FC fabric. FC comes in 8Gbps, SAS in 3Gbps, so already FC has a fatter pipe. In terms of latency, FC does not have any protocol overhead like TCP, it's pure hardware overhead caused by switching and cabling. And this overhead is negligible compared to the delays caused by the 4K write alignment issue with SSDs, let alone the fact that a SSD DAS config has to perform host-level write mirroring too. If you want to compare read performance, having 2*8Gbps FC links can pump through more data than 4*3Gbps SAS links.

So no, I think the SSD under RAID is far better overall than SSD DAS.

Posted by Steve A on December 08, 2010 at 01:59 AM CET #

