Tuesday Apr 21, 2009

Video: Top 5 Cool Features of the Sun Storage 7000 Unified Storage Systems

A couple of weeks ago, Marc (our producer from the HELDENFunk Podcast) and I sat down and put together a video about the top 5 reasons why the new Sun Storage 7000 systems are so cool. We even "invited" Brendan Gregg to show us his latest trick:

For the next video, I'll try to learn more phrases by heart and look less at the prompter screen for a more natural feel. I apologize for my German accent (some people say it adds credibility :) ). Still, people seem to like the video; it has already been viewed about 200 times.

There's a lot of discussion around the Sun Storage 7000, and most of it is very positive. In Germany, we like to complain a lot, so of course we also hear a fair amount of constructive criticism. Most of the comments I hear fall into one of the following two categories:

  1. The Storage 7000 systems are cool, but I know ZFS/OpenSolaris can do "X" and I really want this to be in the Storage 7000 GUI as well!
    Yes, we know that there are still many features we'd like to see in the Storage 7000 and we're working on making them available. Make sure your Sun contact knows about your wishlist, so she can forward it to our engineers. Please remember that the Storage 7000 systems are meant to be easy-to-use appliances: Taking your "X" feature from ZFS/OpenSolaris and building a GUI around it is a hard thing to do, especially if you want it to work reliably and if you want it to be self-explanatory and self-serviceable. Please be patient, we're most probably working on your favourite features already.

  2. The Storage 7000 systems are cool, but I want more control. I want to change the hardware/hack them/take them apart/add more functionality/get them to do exactly what I want, etc.
    Sure, that feature is called "OpenSolaris". Please go to OpenSolaris.org, download the CD, install it on your favourite hardware and off you go!
    But, can I have the GUI, too, maybe as an SDK of some sort?
    No. The Storage 7000 systems are not "just a GUI". They are full-blown appliances, which means they're more than just the hardware plus a GUI. A big part of the ease of use, stability, performance and predictability of these products lies in the way configuration options are selected, tested and, yes, limited, as well as in the careful consideration of which features to implement, when to implement them, and which to leave out. Only then comes the GUI on top, tailored to the overall product as a whole. In other words: You wouldn't go to BMW and ask them to give you their dashboard, radio and lights so you can bolt them onto a Volkswagen, would you?

You see, either you build your own storage machine out of the building blocks you have and get all the functionality and flexibility you want at the expense of some configuration effort, or you buy the car as one nice, round, sweet package, so you don't have to worry about configuration, implementation details, complexity, etc. Asking for anything in between will get you into trouble: Either you'll spend more effort than you want, or you won't get the kind of control you want.

If you understand German, there's some discussion of this topic as well as a great overview of the MySQL future plus a primer on SSDs in the latest episode of the HELDENFunk podcast.

And if you like the Sun Storage 7000 Unified Storage Systems as much as I do, here are the slides in StarOffice format, as well as in PDF format, so you can spread the word to your colleagues and friends.

Tuesday Aug 12, 2008

ZFS saved my data. Right now.

As you know, I have a server at home that I use for storing all my photos, music, backups and more using the Solaris ZFS filesystem. You could say that I store my life on my server.

For storage, I use Western Digital's MyBook Essential Edition USB drives because they are the cheapest ones I could find from a well-known brand. The packaging says "Put your life on it!". How fitting.

Last week, I had a team meeting and a colleague introduced us to some performance tuning techniques. When we started playing with iostat(1M), I logged into my server to do some stress tests. That was when my server said something like this:

constant@condorito:~$ zpool status

(data from other pools omitted)

  pool: santiago
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 16h28m with 0 errors on Fri Aug  8 11:19:37 2008
config:

	NAME         STATE     READ WRITE CKSUM
	santiago     DEGRADED     0     0     0
	  mirror     DEGRADED     0     0     0
	    c10t0d0  DEGRADED     0     0   135  too many errors
	    c9t0d0   DEGRADED     0     0    20  too many errors
	  mirror     ONLINE       0     0     0
	    c8t0d0   ONLINE       0     0     0
	    c7t0d0   ONLINE       0     0     0

errors: No known data errors

This tells us 3 important things:

  • Two of my disks (c10t0d0 and c9t0d0) are happily giving me garbage back instead of my data. Without knowing it.
    Thanks to ZFS' checksumming, we can detect this, even though the drive thinks everything is ok.
    No other storage device, RAID array, NAS or file system I know of can do this. Not even the increasingly hyped (and admittedly cool-looking) Drobo [1].
  • Because both drives are configured as a mirror, bad data from one device can be corrected by reading good data from the other device. This is the "applications are unaffected" and "no known data errors" part.
    Again, it's the checksums that enable ZFS to distinguish good data blocks from bad ones, thereby enabling self-healing while the system is reading stuff from disk.
    As a result, even though neither disk is functioning properly, my data is still safe, because the erroneous blocks don't overlap in terms of which pieces of data they store (luckily, with millions of blocks per disk, statistics is on my side here).
    Again, no other storage technology can do this. RAID arrays only kick in when a disk drive as a whole becomes inaccessible or when a drive diagnoses itself as broken. They do nothing against silent data corruption, which is what we see here, and which everyone on this planet who doesn't use ZFS (yet) can't see (yet). Until it's too late.
  • Data hygiene is a good thing. Do a "zpool scrub <poolname>" once in a while. Use cron(1M) to do this, for example every other week for all pools (see the crontab sketch below).
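
A minimal crontab sketch for this, assuming a pool named santiago (adjust the pool name and schedule to taste):

# Add via "crontab -e" as root: scrub the pool every Sunday at 3 am
0 3 * * 0 /usr/sbin/zpool scrub santiago

zpool scrub returns immediately and does its work in the background; a later zpool status will show its progress and results.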

Over the weekend, I ordered myself a new disk (sheesh, it dropped EUR 5 in price after just 5 days...) and after a "zpool replace santiago c10t0d0 c11t0d0" on Monday, my pool started resilvering:

constant@condorito:~$ zpool status

(data from other pools omitted)

  pool: santiago
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: resilver in progress for 1h13m, 6.23% done, 18h23m to go
config:

        NAME           STATE     READ WRITE CKSUM
        santiago       DEGRADED     0     0     0
          mirror       DEGRADED     0     0     0
            replacing  DEGRADED     0     0     0
              c10t0d0  DEGRADED     0     0   135  too many errors
              c11t0d0  ONLINE       0     0     0
            c9t0d0     DEGRADED     0     0    20  too many errors
          mirror       ONLINE       0     0     0
            c8t0d0     ONLINE       0     0     0
            c7t0d0     ONLINE       0     0     0

errors: No known data errors

The next step for me is to send the c10t0d0 drive back and ask for a replacement under warranty (it's only a couple of months old). After receiving c10's replacement, I'll consider sending in c9 for replacement (depending on how the next scrub goes).

Which makes me wonder: How will drive manufacturers react to a new wave of warranty cases based on drive errors that were not easily detectable before?

[1] To the guys at Drobo: Of course you're invited to implement ZFS into the next revision of your products. It's open source. In fact, Drobo and ZFS would make a perfect team!

Thursday Jun 26, 2008

ZFS and Mac OS X Time Machine: The Perfect Team

A few months ago, I wrote about "X4500 + Solaris ZFS + iSCSI = Perfect Video Editing Storage". And thanks to you, my readers, it became one of my most popular blog entries. Then I wrote about "VirtualBox and ZFS: The Perfect Team", which turned out to be another very popular blog article. Well, I'm glad to introduce you to another perfect team now: Solaris ZFS and Mac OS X Time Machine.

Actually, it began a long time ago: In December '06, Ben Rockwood wrote about the beauty of ZFS and iSCSI integration, and immediately I thought: "That's the perfect solution to back up my Mac OS X PowerBook!" No more strings attached, just back up over WLAN to a really good storage device that lives on Solaris ZFS, while still keeping all of Mac OS X's native file system features. But Apple didn't have an iSCSI initiator yet (they still don't have one now) and the only free iSCSI initiator I could find was buggy, unstable and didn't like Solaris targets at all.

Then, Apple announced their Time Machine technology. Many people thought this was related to them supporting ZFS, and in fact, it's easy to believe that Time Machine's travels back in time are powered by ZFS snapshots. But they aren't. In reality, it's just a clever use of hardlinks. And not a very efficient one, either: Whenever a file changes, the whole file gets backed up again, even if you only changed a little bit of it.

Last week, a colleague of mine told me that Studio Network Solutions had updated their globalSAN iSCSI Initiator software for Mac OS X and that it now works well with Solaris iSCSI targets. I decided to give it another try. So, here are two ZFS ZVOLs sitting on my OpenSolaris 2008.05 home server:

Sun Microsystems Inc.   SunOS 5.11      snv_86  January 2008
-bash-3.2$ zfs list -rt volume 
NAME                             USED  AVAIL  REFER  MOUNTPOINT
santiago/volumes/aperturevault  6.50G   631G  6.50G  -
santiago/volumes/mbptm           193G   631G   193G  -
-bash-3.2$

They have both been shared as iSCSI targets through a single zfs set shareiscsi=on santiago/volumes command, thanks to ZFS' attribute inheritance:

-bash-3.2$ zfs get shareiscsi santiago/volumes
NAME              PROPERTY    VALUE             SOURCE
santiago/volumes  shareiscsi  on                local
-bash-3.2$ zfs get shareiscsi santiago/volumes/aperturevault
NAME                            PROPERTY    VALUE                           SOURCE
santiago/volumes/aperturevault  shareiscsi  on                              inherited from santiago/volumes
-bash-3.2$ zfs get shareiscsi santiago/volumes/mbptm
NAME                    PROPERTY    VALUE                   SOURCE
santiago/volumes/mbptm  shareiscsi  on                      inherited from santiago/volumes
-bash-3.2$ 
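
In case you're wondering how these got there in the first place, here's a minimal sketch (the volume sizes below are made up; pick whatever fits your backup needs):

zfs create santiago/volumes                        # container filesystem
zfs create -V 200g santiago/volumes/mbptm          # ZVOL for Time Machine
zfs create -V 100g santiago/volumes/aperturevault  # ZVOL for the Aperture vault
zfs set shareiscsi=on santiago/volumes             # share everything below as iSCSI targets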

On the Mac side, they show up in the globalSAN GUI just nicely:

globalSAN iSCSI GUI

And Disk Utility can format them perfectly as if they were real disks:

Disk Utility with 2 iSCSI disks 

Time Machine happily accepted one of the iSCSI disks and synced more than 190GB to it just fine. As I type these lines, Aperture is busy syncing more than 40GB of photos to the other iSCSI disk (it wouldn't accept a network share). Sometimes, they're busy working simultaneously :).

Of course, iSCSI performance heavily depends on network performance, so for larger transfers, a cable connection is mandatory. But the occasional Time Machine or Aperture sync in the background runs just fine over WLAN.

So finally, Solaris and Mac fans can have a Time Machine based on ZFS, with real data integrity, redundancy, robustness, two different ways of travelling through time (ZVOLs can be snapshotted just like regular ZFS file systems) and much more.

Many thanks to Christiano for letting me know and to the guys at Studio Network Solutions for making this possible. And of course to the ZFS team for a wonderful piece of open storage software!

Monday Oct 01, 2007

New HELDENFunk Podcast Episode Featuring 3 Interviews (2 in English)

HELDENFunk Episode 3 pictures 

Today, the 3rd episode of the HELDENFunk podcast went live. And we now have a jingle, too! I'm glad we reached this milestone: If we can bring out three regular episodes of this podcast, we can do 10, then maybe 100...

Even though this podcast is mostly in German, there are two very interesting interviews in English:

Of course, there's much more, albeit in German: Ulrich Gräf, OS Ambassador, talks about Solaris 10 8/07 (Update 4), we discuss Sun's newest servers based on Intel CPUs, the CFS acquisition, a nice case mod where one of our customers put a Solaris 10 server into his hand luggage, Solaris xVM and Project eTude, and much, much more.

In fact, from episode 1 to 3, this podcast has grown in length with every episode. Maybe it's time to move to a bi-weekly schedule soon...

P.S.: If you understand German, make sure to participate in our sweepstakes!

Monday Sep 17, 2007

The Importance of Archiving (For the Rest of us)

A few weeks ago I was on the road with Dave Cavena. He works as an SE in Hollywood and helps our customers there understand the importance of digitally archiving their movies. The issue here is a simple one: Today, movies are being archived by storing rolls of film on shelves in gigantic warehouses and hoping they'll survive for a few years to come. "Few" could be tens or maybe hundreds of years, but nobody really knows how long they'll really survive and how good the movies will look after a couple of decades of archiving. Will the colours look natural? Will there be scratches? Will the film material degrade so that the movie rips right in the middle of the most important scene? Or will it spontaneously decompose into a heap of dust when someone opens the door after 150 years to see what the heck people were keeping in that warehouse anyway?

Digital data, on the other hand, can be kept indefinitely and can stay perfect through eternity, if people store it right. "Right" means things like keeping redundant copies in geographically distant places (so that the movie survives a warehouse fire or an earthquake), periodic integrity checking and fault healing based on those redundant copies (so silent data corruption can be detected and corrected), and periodic copy and conversion cycles so the data can survive format and media evolution. Try playing "Dragon's Lair", a classic arcade game from the '80s that was originally produced for Laserdisc-based arcade machines. I loved the game back then and I was glad I found it on DVD. Now look it up on Amazon: You'll find Blu-ray and HD-DVD versions as well!

Dave and some other very bright people have written an interesting white paper on "Archiving Movies in a Digital World". It is a great read: It shows why archiving movies the digital way is so important (so they can't get lost), how to do it and why this is actually cheaper than keeping rolls of film in warehouses (Hint: Archiving bits takes up less real estate and looks a lot cooler if you use one of these. Your archives may even become smart by using one of these, too!).

That got me thinking: What will happen to all those photos that people are taking with their private digital cameras? Parties, vacations, babies, families, etc.? Yes, if people don't start thinking about a good archiving strategy soon, they will all be lost within the next couple of decades. By the time my little daughter gets married, I might have lost all of her baby pictures if I don't do something real quick (as in: this decade).

Storing them on a file server running a serious, enterprise-class OS with ZFS on a set of redundant storage media (could be disks, could be more disks, iSCSI devices, could be USB sticks, it really doesn't matter) is a good start, because it can provide 100% data integrity and self-heal any damages before they become permanent. But this is still not enough. They need to be stored in multiple locations and they need to be periodically copied into more recent media.

I'll figure out the multiple locations part someday (probably a second server that replicates the first file server's ZFS file systems through a couple of zfs send/receive scripts), and the periodic copying means I'll still have an interesting hobby when my daughter has children of her own. Meanwhile, just to be sure, I've started to copy all of my photos to a popular photo-sharing service on the net called Flickr. Yes, all of them. This means I can still decide who can look at my pictures and who can't, I get access to all of my pictures from anywhere with a web browser, and I can store as many photos as I want for just a small annual fee. And they'll still be there should my basement catch fire or should all of my disks die for some strange statistical reason or when I start taking 3D photographs of my grandchildren. What more could anybody want?
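
Just to give you an idea, such a replication script could be as minimal as this sketch (the host and dataset names are made up, and a real script would add error handling):

#!/bin/sh
# One-shot replication of the photo filesystem to a second server.
SNAP=`date +%Y%m%d`
zfs snapshot santiago/photos@$SNAP
zfs send santiago/photos@$SNAP | ssh backupserver /usr/sbin/zfs receive backuppool/photos

After the first full copy, subsequent runs would use zfs send -i to transfer only the changes since the previous snapshot.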

Now what if the movie industry found out about this and started archiving their movies on Flickr as well, image after image, all of them, for just the same small annual fee?

Update (Sep. 18th, 2007): Thanks to Jesse's comment, I've now tried SmugMug and I love it! They offer a 50% discount for Flickr refugees during the first year and there's a nice tool called SmuggLr that makes migration a snap. Thank you Jesse!

Thursday Sep 06, 2007

7 Easy Tips for ZFS Starters

So you're now curious about ZFS. Maybe you read Jonathan's latest blog entry on ZFS or you've followed some other buzz on the Solaris ZFS file system or maybe you saw a friend using it. Now it's time for you to try it out yourself. It's easy and here are seven tips to get you started quickly and effortlessly:

1. Check out what Solaris ZFS can do for you

First, try to form a picture of what the Solaris ZFS filesystem is, what features it has and how it can work to your advantage. Check out the CSI:Munich video for a fun demo on how Solaris ZFS can turn 12 cheap USB memory sticks into highly available, enterprise-class, robust storage. Of course, what works with USB sticks also works with your own harddisks or any other storage device. Also, there are great ZFS screencasts that show you some more powerful features in an easy to follow way. Finally, there's a nice writeup on "What is ZFS?" at the OpenSolaris ZFS Community's homepage.

2. Read some (easy) documentation

It's easy to configure Solaris ZFS. Really. You just need to know two commands: zpool (1M) and zfs (1M). That's it. So, get your hands onto a Solaris system (or download and install it for free) and take a look at those manpages. If you still want more, then there's of course the ZFS Administration Guide with detailed planning, configuration and troubleshooting steps. If you want to learn even more, check out the OpenSolaris ZFS Community Links page. German-speaking readers are invited to read my German white paper on ZFS or listen to episode #006 of the POFACS podcast.

3. Dive into the pool

Solaris ZFS manages your storage devices in pools. Pools are a convenient way of abstracting storage hardware and turning it into a repository of blocks to store your data in. Each pool takes a number of devices and applies an availability scheme (or none) to it. Pools can then be easily expanded by adding more disks to them. Use pools to manage your hardware and its availability properties. You could create a mirrored pool for data that should be protected against disk failure and needs fast access. Then, you could add another pool using RAID-Z (which is similar to, but better than, RAID-5) for data that needs to be protected but where performance is not the first priority. For scratch, test or demo data, a pool without any RAID scheme is OK, too. Pools are easily created:

zpool create mypool mirror c0d0 c1d0

will create a mirror out of the two disk devices c0d0 and c1d0. Similarly, you can easily create a RAID-Z pool by saying:

zpool create mypool raidz c0d0 c1d0 c2d0

The easiest way to turn a disk into a pool is:

zpool create mypool c0d0

It's that easy. All the complexity of finding, sanity-checking, labeling, formatting and managing disks is hidden behind this simple command.

If you don't have any spare disks to try this out with, then you can just create yourself some files, then use them as if they were block devices:

# mkfile 128m /export/stuff/disk1
# mkfile 128m /export/stuff/disk2
# zpool create testpool mirror /export/stuff/disk1 /export/stuff/disk2
# zpool status testpool
  pool: testpool
 state: ONLINE
 scrub: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        testpool                 ONLINE       0     0     0
          mirror                 ONLINE       0     0     0
            /export/stuff/disk1  ONLINE       0     0     0
            /export/stuff/disk2  ONLINE       0     0     0

errors: No known data errors

The cool thing about this procedure is that you can create as many virtual disks as you like and then test ZFS's features such as data integrity, self-healing, hot spares, RAID-Z and RAID-Z2 etc. without having to find any free disks.
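
For example, trying out RAID-Z2 with a hot spare takes just a minute (the file names below are made up):

# mkfile 128m /export/stuff/disk3 /export/stuff/disk4 /export/stuff/disk5 /export/stuff/disk6 /export/stuff/disk7
# zpool create testpool2 raidz2 /export/stuff/disk3 /export/stuff/disk4 /export/stuff/disk5 /export/stuff/disk6 spare /export/stuff/disk7

The spare device can then take over whenever one of the RAID-Z2 members fails.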

When creating a pool for production data, think about redundancy. There are three basic properties to storage: availability, performance and space. And it's a good idea to prioritize them in that order: Make sure you have redundancy (mirroring, RAID-Z, RAID-Z2) so ZFS can self-heal data when stuff goes wrong at the hardware level. Then decide how much performance you want. Generally, mirroring is faster and more flexible than RAID-Z/Z2, especially if the pool is degraded and ZFS needs to reconstruct data. Space is the cheapest of all three, so don't be greedy and try to give priority to the other two. Richard Elling has some great recommendations on RAID, space and MTTDL. Roch has also posted a great article on mirroring vs. RAID-Z.

4. The power to give

Once you have set up your basic pool, you can already access your new ZFS file system: Your pool has been automatically mounted for you in the root directory. If you followed the examples above, then you can just cd to /mypool and start using ZFS!

But there's more: Creating additional ZFS file systems that use your pool's resources is very easy, just say something like:

zfs create mypool/home
zfs create mypool/home/johndoe
zfs create mypool/home/janedoe

Each of these commands takes only seconds to complete, and each time you get a complete new file system, already set up and mounted for you to start using immediately. Notice that you can manage your ZFS filesystems hierarchically as seen above. Use pools to manage storage properties at the hardware level, use filesystems to present storage to your users and applications. Filesystems have properties (compression, quotas, reservations, etc.) that you can easily administer using zfs set and that are inherited across the hierarchy. Check out Chris Gerhard's blog for more thoughts on file system organization.
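
For example, setting a property is a one-liner, and properties set on a parent filesystem are inherited by its children (the values below are just illustrations):

zfs set compression=on mypool/home           # inherited by johndoe and janedoe
zfs set quota=10g mypool/home/johndoe        # John gets at most 10 GB
zfs set reservation=5g mypool/home/janedoe   # Jane is guaranteed 5 GB
zfs get -r compression,quota mypool/home     # verify the settings recursively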

5. Snapshot early, snapshot often

ZFS snapshots are quick, easy and cheap. Much cheaper than the horrible experience of realizing that you just deleted a very important file that hasn't been backed up yet! So, use snapshots whenever you can. If you're wondering whether to snapshot or not, just do it. I recently spent only about $220 on two 320 GB USB disks for my home server to expand my pool with. At these prices, the time you spend pondering whether to snapshot or not may be worth more than just buying additional disks.

Again, Chris has some wisdom on this topic in his ZFS snapshot massacre blog entry. He once had over 60000 snapshots, and he's snapshotting filesystems by the minute! Since snapshots in ZFS “just work”, and since they only take up the space that actually changes between snapshots, there's really no reason not to snapshot all the time. Maybe once per minute is a little exaggerated, but once a week, once a day or once an hour per active filesystem is definitely good advice.
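
Taking a snapshot is a single command, and getting data back out of one is just a copy out of the filesystem's hidden .zfs directory (the names below are made up):

zfs snapshot mypool/home/johndoe@before-cleanup   # take the snapshot
zfs list -t snapshot                              # see what snapshots exist
cp /mypool/home/johndoe/.zfs/snapshot/before-cleanup/thesis.txt /mypool/home/johndoe/
zfs rollback mypool/home/johndoe@before-cleanup   # or turn back the whole filesystem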

Instead of time-based snapshots, Chris came up with the idea of snapshotting a file system shared with Samba whenever the Samba user logs in!

6. See the Synergy

ZFS by itself is very powerful. But the full beauty of it can be unleashed by combining ZFS with other great Solaris 10 features. Here are some examples:

  • Tim Foster has written a great SMF service that will snapshot your ZFS filesystems on a regular basis. It's fully automatic, configurable and integrated with SMF in a beautiful way.

  • ZFS can create block devices, too. They are called zvols. Since Nevada build 54, they are fully integrated into the Solaris iSCSI infrastructure. See Ben Rockwood's blog entry on the beauty of iSCSI with ZFS.

  • A couple of people are now elevating this concept even further: Take two Thumpers, create big zvols inside them, export them through iSCSI and mirror over them with ZFS on a server. You'll get a huge, distributed storage subsystem that can be easily exported and imported on a regular network. A poor man's SAN and a powerful shared storage for future HA clusters thanks to ZFS, iSCSI and Thumper! Jörg Möllenkamp is taking this concept a bit further by thinking about ZFS, iSCSI, Thumper and SAM-FS.

  • Check out some cool Sun StorageTek Availability Suite and ZFS demos here.

  • ZFS and boot support is still in the works, but if you're brave, you can try it out with the newer Solaris Nevada distributions on x64 systems. Think about the possibilities together with Solaris Live Upgrade! Create a new boot environment in seconds while not needing to find or dedicate a new partition, thanks to snapshots, while saving most of the needed disk space!

And that's only the beginning. As ZFS becomes more and more adopted, we'll see many more creative uses of ZFS with other Solaris 10 technologies and other OSes.

7. Beam me up, ZFS!

One of the most amazing features of ZFS is zfs send/receive. zfs send will turn a ZFS filesystem into a bitstream that you can save to a file, pipe through bzip2 for compression or send through ssh to a distant server for archiving or for remote replication through the corresponding zfs receive command. It also supports incremental sending and receiving out of subsequent snapshots through the -i modifier.

This is a powerful feature with a lot of uses:

  • Create your Solaris zone as a ZFS filesystem, complete with applications, configuration, automation scripts, users etc., zfs send | bzip2 >zone_archive.zfs.bz2 it for later use. Then, unpack and create hundreds of cloned zones out of this master copy.

  • Easily migrate ZFS filesystems between pools on the same machine or on distant machines (through ssh) with zfs send/receive.

  • Create a crontab entry that takes a snapshot every minute, then zfs send -i it over ssh to a second machine where it is piped into zfs receive (see the sketch after this list). Tadah! You'll get free, finely-grained, online remote replication of your precious data.

  • Easily create efficient full or incremental backups of home directories (each in their own ZFS filesystems) through ZFS send. Again, you can compress them and treat them like you would, say, treat a tar archive.
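
Here's a minimal sketch of the remote replication idea above (the host and pool names are made up, and a real script would keep track of snapshot names and handle errors):

zfs snapshot mypool/home/johndoe@1
zfs send mypool/home/johndoe@1 | ssh otherhost /usr/sbin/zfs receive otherpool/johndoe
# ...some time and some changes later, send only the difference:
zfs snapshot mypool/home/johndoe@2
zfs send -i mypool/home/johndoe@1 mypool/home/johndoe@2 | ssh otherhost /usr/sbin/zfs receive otherpool/johndoe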

See? It is easy, isn't it? I hope this guide helps you find your way around the world of ZFS. If you want more, drop by the OpenSolaris ZFS Community; we have a mailing list/forum where bright and friendly people hang out who will be glad to help you.

Friday Mar 09, 2007

CSI:Munich - How to save the world with ZFS and 12 USB Sticks

Here's a fun video that shows how cool the Sun Fire X4500 (codename: Thumper) is and how you can create your own Thumper experience on your laptop using Solaris ZFS and 12 USB sticks:

This is finally the English-dubbed version of a German video that a couple of colleagues and I produced some weeks ago. If you don't mind the German language, you might enjoy the original German version, too. (It turns out that English has a lot less redundancy than German, so please forgive the occasional soundless lip movements.)

If you liked the video(s), let us know; we'll be glad to answer any questions, receive any leftover Oscars or accept any new ideas for future episodes.

Here are a few more details, in case you really want to try this at home:

The first hurdle to overcome is to teach Solaris how to accept more than 10 USB storage devices. On a plain vanilla Solaris 10 system, it turns out that there is a limitation: Connecting more than 10 USB sticks through 3 USB-powered hubs yields a "Connecting device on port n failed" error. Thanks to a colleague from engineering, the fix is to set ehci:ehci_qh_pool_size = 120 in /etc/system.
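
In other words, add these lines to /etc/system and reboot (in /etc/system, comment lines start with an asterisk):

* Allow more than 10 USB storage devices by enlarging
* the EHCI driver's queue head pool
set ehci:ehci_qh_pool_size = 120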

The second issue is briefly explained in the video itself: Not all USB sticks (particularly the cheap ones) are created equal. Small variations in the components create small variations in their storage space. So, when creating a zpool, you need to use -f to tell zpool to ignore differing device sizes.
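
So creating the stick pool looks something like this sketch (the device names are made up, and the actual pool layout in the video may differ):

zpool create -f stickpool raidz c2t0d0p0 c3t0d0p0 c4t0d0p0 c5t0d0p0

Without -f, zpool would refuse to combine the sticks because of their slightly differing sizes.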

If you pay close attention to the video, you'll notice around 7:20 that pulling a hub wasn't so harmless after all: "errors: 8 data errors, use '-v' for a list" can be seen at the bottom of the terminal window. In fact, zpool status reports 6 checksum errors in c21t0d0p0. Well, using cheap USB sticks means that block errors can occur in practice, and once you don't have enough redundancy (like after unplugging a USB hub for show effect) they may hurt you. Fortunately, they didn't hurt our particular demo: on one hand, ZFS' prefetch algorithm had most of the video in memory anyway; on the other hand, zpool scrub fixed any broken blocks after re-plugging the USB hub. So, the cheaper the storage, the more redundancy one should add. In this case, RAID-Z2 would have been better. Perhaps we can get some more USB sticks and hubs from a sponsor?

Finally, it took us a couple of retries until the remove-sticks-mix-then-replug stunt worked, because it turned out that the laptop's USB implementation wasn't as reliable as we needed it to be. And yes, it does help to wait until the sticks have finished blinking before removing any of them :).

All in all, it was great fun producing this video, and thanks to the tireless efforts of Marc, our beloved but invisible video editor, we can now proudly present an English version. Actually, we were quite surprised by this video's success: We published it in early February, and just a day later it got noticed by a couple of Solaris engineering people. Now, we have more than 9000 views of the German version (counting the Google Video and YouTube editions together) and are still counting. Hopefully, we can cross the 10,000 views barrier with the English version, now that we have increased the potential audience :).

After watching the video, feel free to try out Solaris ZFS for yourself. There's nothing like building your own pool, then watching ZFS take care of your data. At home, ZFS keeps my photos, music and TV videos nice and tidy, including weekly snapshots thanks to Tim Foster's automatic snapshot SMF service. Just this Tuesday, my weekly zpool scrub cron job told me it had fixed a broken block on one of my disks. One I'd never have found out about with any other storage system.

To get started, get OpenSolaris here or download it here. All you need to do is check out the docs, though real system heroes only need two man pages: zpool(1M) and zfs(1M).

P.S.: CSI of course stands for "Computer Systems Integration". Any similarities to the popular TV show are purely coincidental. Really. Hmm, but maybe having a dead body or two in one of the next episodes might spice things up a little...

P.P.S.: The cool rock music at the beginning is from XING, a great rock band in which one of our colleagues plays drums. Go XING!

Update: Here is a much higher quality version, in case you want to show this video around on your laptop.
