Wednesday Feb 04, 2009

The Screaming Fast Sun Modular Storage 6000 Family!

Did you know that both the Sun StorageTek 6140 and 6540 disk arrays, which belong to our Modular Storage line, are still leading the price/performance rankings in their class? Feel free to verify this at StoragePerformance.org. The modular approach and the ability to upgrade from the smallest to the biggest system just by exchanging controllers are unique, and our customers love this investment protection!

The Uniqueness of the Sun Storage 6000 Family

Today, the 6000 modular storage portfolio looks as follows:
  • 6140-2 (up to 64 Disk Drives mixed FC and SATA)
  • 6140-4 (up to 112 Disk Drives mixed FC and SATA)
  • 6540 (up to 224 Disk Drives mixed FC and SATA)
  • 6580 (up to 256 Disk Drives mixed FC and SATA)
  • 6780 (up to 256/448* Disk Drives mixed FC and SATA)
6000_Modular_Family.png


All 6000 series arrays use an ASIC (Application Specific Integrated Circuit) for RAID operations. This results in very low latency overhead and guaranteed performance. The advertised cache is 100% dedicated to the ASIC and can't be accessed by the management CPU, which, in the case of the 6780 for example, has its own separate 2GB of RAM. Across the complete family, you have upgrade protection.

You can start with a 6140-2 and seamlessly upgrade to a 6780 by just replacing the controllers! No configuration changes or exports are necessary, as the complete RAID configuration is distributed to every single disk in the array. You can also move a complete RAID group to a different array in the family. Of course, you should make sure that both arrays are running the same firmware level. ;-)

Sun StorageTek 6780 Array

Today, Sun announced its latest and greatest midrange disk array. It completes the modular line as the high-end model of the 6000 series. The connectivity and features of the storage array are very impressive and pretty unique in the midrange segment!
  • Replaceable Host Interface Cards (two per Controller)
    • Up to 16x 4Gb or 8Gb* FC Host Channels
    • Up to 8x 10Gb* Ethernet Host Channels
  • 16x 4Gb FC Drive Channels
  • Up to 16x/28x* Drive Enclosures
  • Up to 32GB* dedicated RAID Cache
  • RAID 0, 1, 3, 5, 6, 10
  • Up to 512 Domains = up to 512 servers with dedicated LUN mapping can be attached to the array
  • Enterprise Features:
    • Snapshot
    • Data Copy
    • Data Replication
Below are some insights into the architecture of the 6780 array:

6780_Controller_Architecture.png


The internal flash storage allows the array to survive long-term power outages without losing I/O that has not yet been written to disk. As you can see, each drive chip has access to all disk drives. Everything in the controller and drive enclosure has at least a redundancy factor of two, and in some cases, like the drive chips, the redundancy factor is even higher.

6780_CSM200_Connectivity.png


The expansion trays are SBODs (Switched Bunch Of Disks) and therefore limit the impact of a drive failure. Most other vendors still use looped JBODs, where the loop is vulnerable if a drive fails. In the worst case, a complete tray could fail just because of a single failing drive. Looped BODs are also slower than switched BODs.

6780_Drive_Channel_Connectivity.png


Due to the high number of drive channels, the maximum drive count per dual 4Gb FC loop is 64 (with 448 drives); with 256 drives, you will only have 32 drives per dual 4Gb FC loop. Thanks to this, and the dedicated ASICs for RAID calculations, the 6780 array can do up to 175'000 IOPS and 6.4GB/s of throughput in disk read operations. This is for sure the top rank in the midrange segment!
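If you want to check the drives-per-loop figures yourself, here is a little back-of-the-envelope sketch in Python. It assumes 16 drives per CSM200 expansion tray and 8 dual loops (the 16 drive channels paired up), with the trays spread as evenly as possible across the loops:

    # Back-of-the-envelope sketch; the tray size and loop pairing are my
    # assumptions, not official spec-sheet values.
    DUAL_LOOPS = 16 // 2        # 16x 4Gb FC drive channels paired into 8 dual loops
    DRIVES_PER_TRAY = 16        # drives per CSM200 expansion tray

    def max_drives_per_loop(total_drives: int) -> int:
        """Worst-case drive count on a single dual loop when the trays are
        distributed as evenly as possible across the loops."""
        trays = total_drives // DRIVES_PER_TRAY
        trays_on_busiest_loop = -(-trays // DUAL_LOOPS)   # ceiling division
        return trays_on_busiest_loop * DRIVES_PER_TRAY

    print(max_drives_per_loop(256))   # 32 drives per dual loop (16 trays)
    print(max_drives_per_loop(448))   # 64 drives per dual loop (28 trays)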

Summary

By now, at the latest, you should know that Sun is NOT a me-too manufacturer in the storage business. Our modular storage family uses leading-edge technology and delivers investment protection by providing an easy upgrade path to the next higher controller level.

*Will be available after initial release.

Sunday Feb 01, 2009

Traditional Arrays vs "The Open Storage Approach"

Why should I still use a traditional Array?

You may ask yourself why you should use a traditional array if Sun is pushing towards OpenStorage. Good question! Just as there isn't a cow that provides milk, coke and beer, there isn't a storage product that does everything for you ... today. So, while our OpenStorage family is perfect today for IP-network-oriented access like CIFS, NFS and iSCSI, it doesn't yet cover the FC block-attached community. And despite all the honour that ZFS and OpenSolaris deserve, an ASIC, if you have the money and the skills to build one, will do faster RAID calculations. ASICs do not require an operating system underneath the RAID code, which results in far less latency in the calculation.

The Unanswered Question ...

There is one unanswered question that remains in the IT business: how long can companies afford to build ASICs that keep up with the performance increases in the volume business? ASICs, as the name states, are built for a specific purpose and therefore manufactured in much lower volumes. This means they are simply much more expensive than general-purpose CPUs.

Another question might give you an impression of the future: who is still programming in assembler? Every programmer knows that if you write perfect assembler code, no, but really NO C, C++ or Java program will ever run faster than your assembler program, right? But programming in assembler gets so complex that you can no longer manage your code. That's why we use abstraction layers to simplify the business ... Got a hint?

Now, there is also a huge design problem with a dedicated ASIC. You cannot extend its features by just upgrading the software, because it is hardware. An ASIC can only do what it was built for, and is therefore very limited when it comes to adding features! From a manufacturing and design perspective, this is very constraining. One little thing missing or wrong in an ASIC, and the complete product fails without any chance to fix or change it. Uhhh, you'd better make no mistake ...

Conclusion

So, depending on your requirements, you will have to choose the appropriate technology! If you can afford the no-compromise way of storage, the best solution is to have both, or maybe a combination of the two. :-)

From a long-term perspective, I only see one approach that survives: the combination of commodity hardware and software. It provides the key elements that will succeed, namely:
  • Great price/performance
  • The possibility to add features (in the best case for free) with easy upgrades
If the software used is open source, you suddenly have the ability to add features to the subsystem yourself! One example is the COMSTAR project, which turns an OpenSolaris host into a SCSI target.
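To give you an idea of what that looks like in practice, here is a minimal sketch of exporting a ZFS volume as a SCSI logical unit with COMSTAR. The pool name "tank", the volume size and the commented-out GUID are placeholders; run it as root on an OpenSolaris host with COMSTAR installed:

    # Minimal COMSTAR sketch (placeholders: pool "tank", 20g volume, LU GUID).
    import subprocess

    def run(cmd):
        print("#", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Enable the SCSI Target Mode Framework (STMF) service.
    run(["svcadm", "enable", "-r", "svc:/system/stmf:default"])

    # 2. Create a ZFS volume to be exported as a LUN.
    run(["zfs", "create", "-V", "20g", "tank/lun0"])

    # 3. Register the volume as a logical unit with COMSTAR.
    run(["sbdadm", "create-lu", "/dev/zvol/rdsk/tank/lun0"])

    # 4. Make the LU visible to initiators, using the GUID printed by sbdadm
    #    (a view without host/target groups exposes it to all initiators).
    # run(["stmfadm", "add-view", "600144F0..."])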

introducingCOMSTAR.png


So, you'd better keep Sun's OpenStorage vision in mind.

Monday Aug 25, 2008

Why you should avoid placing SSDs in traditional Arrays!

Performance_Car.png

Some vendors are announcing SSDs for their traditional arrays in the midrange and high end sector.

This is quite surprising to me, as it is comparable to placing an 8-cylinder bi-turbo engine with 450HP into an entry-level car (I'll try to avoid naming any brands ;-)).

You might ask for an explanation? Here it is:

Traditional midrange arrays are designed to handle hundreds of traditional (15k RPM) hard disk drives. A traditional hard disk is capable of about 250 IO/s. If we compare this with the current enterprise-class solid state disks available on the market, a single SSD can do about 50k IO/s read or 12k IO/s write, so it is in fact roughly 100x faster than a 15k RPM hard disk.

The controller of a midrange array can probably do about 500k IO/s against its internal cache. So if we place about 10 solid state disks into such a storage system, they would simply consume the complete power of the controller. And I haven't even started talking about RAID functionality!
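A few lines of Python make the point; the controller and disk figures are the rough numbers from above, not vendor specs:

    # Rough saturation estimate with the assumed round numbers from the text.
    CONTROLLER_IOPS = 500_000     # assumed midrange controller limit against cache
    HDD_IOPS        = 250         # 15k RPM hard disk
    SSD_READ_IOPS   = 50_000      # enterprise-class SSD, reads

    print(CONTROLLER_IOPS // HDD_IOPS)       # ~2000 hard disks to saturate the controller
    print(CONTROLLER_IOPS // SSD_READ_IOPS)  # ~10 SSDs are already enough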

There is another major reason that makes such solutions ridiculous: the latency you add by using FC networks. While a traditional hard disk works with a latency of about 3,125us (3.1ms), an enterprise-class solid state disk works with a latency of less than 100us (0.1ms). With a traditional disk drive, the overhead of switches, cable length and array controllers might cost you about 1 IO/s, because the added latency is tiny compared to the disk's own service time. With an SSD and a latency of less than 100us, that same overhead is a large fraction of the service time and can end up costing 10'000 IO/s of read performance.
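Here is the same effect as a small sketch. The 100us per-IO fabric overhead is purely an assumption for illustration; the exact figures will vary, but the disproportionate relative loss on the SSD is what matters:

    # Why added fabric latency hurts SSDs much more than disks (per-IO model,
    # one outstanding IO; the 100us overhead is an assumed value).
    def iops(service_time_us: float) -> float:
        return 1_000_000 / service_time_us

    FABRIC_OVERHEAD_US = 100   # assumed switch/cable/controller overhead per IO

    for name, t in [("15k RPM disk", 3125), ("enterprise SSD", 100)]:
        before = iops(t)
        after = iops(t + FABRIC_OVERHEAD_US)
        print(f"{name}: {before:7.0f} -> {after:7.0f} IO/s (lost {before - after:6.0f})")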

So, where do I place the SSD technology?

The answer is simple!

AS CLOSE AS YOU CAN TO THE SERVER!

The best protocol and technology for this today is SAS (Serial Attached SCSI). The only limitation of SAS is the cable length, which is limited to about 8m, but there is no additional protocol overhead as with FC!

There are two ways to implement SAS-attached SSDs.
  1. Directly in a server, as most servers use SAS-attached internal hard disks anyway.
  2. Attached via a SAS JBOD (Just a Bunch Of Disks) if you need more disks than a server can hold.
You might also ask how to implement SSD technology in the most cost-effective way.

That's where most vendors have to stop, as they have no solution or good answer.

Sun's ZFS is exactly the product that is capable of combining all the benefits of SSDs with the benefits of traditional storage (DENSITY). Combining the two technologies within one file system provides performance AND density under one umbrella. The magic word is Hybrid Storage Pool.

ZFS_HSP.png


While the slow part of ZFS (density) remains on traditional fibre channel storage arrays, the performance-critical parts like the ZIL (ZFS Intent Log) and the L2ARC (Level 2 Adaptive Replacement Cache) live on SSD technology.
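For the curious, building such a Hybrid Storage Pool is just a few commands. Here is a minimal sketch; the device names are placeholders for your FC LUNs and SSDs, and it needs to run as root:

    # Minimal Hybrid Storage Pool sketch (device names are placeholders).
    import subprocess

    def run(cmd):
        print("#", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Capacity comes from LUNs on a traditional FC array ...
    run(["zpool", "create", "tank", "c2t0d0", "c2t1d0"])

    # ... the ZIL goes to a mirrored SSD log device ...
    run(["zpool", "add", "tank", "log", "mirror", "c3t0d0", "c3t1d0"])

    # ... and another SSD serves as the L2ARC read cache.
    run(["zpool", "add", "tank", "cache", "c3t2d0"])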

About

In this blog you can find interesting content about the solutions and technologies that Sun is developing.

The blog is customer-oriented and provides information for architects and chief technologists as well as engineers.
