Thursday Sep 18, 2008

time for new challenges

It's been a great eight years at Sun, but a new opportunity has piqued my interest. So as of today, i'm moving on.

Where to?

Well i'm going to be helping out some friends at a startup:
http://www.lumosity.com/

Best to team ZFS (including those outside of Sun).

eric

Wednesday Apr 23, 2008

zones and ZFS file systems

Starting off with a freshly created pool, let's see the steps to create a zone based on a ZFS file system. Here we see our new pool with only one file system:

fsh-sole# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
kwame   160K  7.63G    18K  /kwame
fsh-sole#
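
For reference, the pool itself was created with a plain 'zpool create' on a single slice - you can see the exact command in the 'zpool history' output at the end of this post:

fsh-sole# zpool create -f kwame c1d0s3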

Now, we'll create and configure a local zone "ejkzone". Note, we set the zonepath within the path of the ZFS pool:

fsh-sole# zonecfg -z ejkzone
ejkzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:ejkzone> create
zonecfg:ejkzone> set zonepath=/kwame/kilpatrick
zonecfg:ejkzone> commit
zonecfg:ejkzone> exit
fsh-sole#

Now we install zone "ejkzone" and notice that the installation tells us that it will automatically create a ZFS file system for us:

fsh-sole# zoneadm -z ejkzone install
A ZFS file system has been created for this zone.
Preparing to install zone <ejkzone>.
Creating list of files to copy from the global zone.
Copying <10116> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1198> packages on the zone.
Initialized <1198> packages on zone.
Zone <ejkzone> is initialized.
The file  contains a log of the zone installation.
fsh-sole#

Now we can boot the zone to use it, and can also see that the file system kwame/kilpatrick was automatically created for us:

fsh-sole# zoneadm -z ejkzone boot   
fsh-sole# zoneadm list
global
ejkzone
fsh-sole# zoneadm -z ejkzone list -v
  ID NAME             STATUS     PATH                           BRAND    IP    
   3 ejkzone          running    /kwame/kilpatrick              native   shared
fsh-sole# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
kwame              517M  7.12G    20K  /kwame
kwame/kilpatrick   517M  7.12G   517M  /kwame/kilpatrick
fsh-sole# 

Now if we log in to the zone via 'zlogin -C ejkzone', we notice that the local zone cannot see any ZFS file systems (only the global zone can):

ejkzone# zfs list
no datasets available
ejkzone# 

If we then want to create and delegate some ZFS file systems to the local zone "ejkzone" so that "ejkzone" has administrative control over the file systems, we can do that. From the global zone, we do:

fsh-sole# zfs create kwame/textme
fsh-sole# zonecfg -z ejkzone
zonecfg:ejkzone> add dataset
zonecfg:ejkzone:dataset> set name=kwame/textme
zonecfg:ejkzone:dataset> end
zonecfg:ejkzone> exit
fsh-sole#

Now, we can get the "zoned" property of the newly created file system:

fsh-sole# zfs get zoned kwame/textme 
NAME          PROPERTY  VALUE         SOURCE
kwame/textme  zoned     off           default
fsh-sole# 

Huh, it says "off". But we delegated it to a local zone. Why is that? Well, in order for this to take effect, we have to reboot the local zone.
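
A quick way to do that from the global zone is zoneadm's reboot subcommand:

fsh-sole# zoneadm -z ejkzone reboot

After the reboot, we can see from the global zone: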

fsh-sole# zfs get zoned kwame/textme
NAME          PROPERTY  VALUE         SOURCE
kwame/textme  zoned     on            local
fsh-sole# 

And from the local zone "ejkzone":

ejkzone# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
kwame          595M  7.05G    20K  /kwame
kwame/textme    18K  7.05G    18K  /kwame/textme
ejkzone# 

And now we have administrative control over the file system via the local zone:

ejkzone# zfs get copies kwame/textme 
NAME          PROPERTY  VALUE         SOURCE
kwame/textme  copies    1             default
ejkzone# zfs set copies=2 kwame/textme
ejkzone# zfs get copies kwame/textme  
NAME          PROPERTY  VALUE         SOURCE
kwame/textme  copies    2             local
ejkzone# 

Double checking on the global zone:

fsh-sole# zfs get copies kwame/textme
NAME          PROPERTY  VALUE         SOURCE
kwame/textme  copies    2             local
fsh-sole# zpool history -l
History for 'kwame':
2008-04-23.16:01:17 zpool create -f kwame c1d0s3 [user root on fsh-sole:global]
2008-04-23.16:29:42 zfs create kwame/textme [user root on fsh-sole:global]
2008-04-23.16:36:45 zfs set copies=2 kwame/textme [user root on fsh-sole:ejkzone]

fsh-sole# 

Happy zoning

Wednesday Mar 19, 2008

how dedupalicious is your pool?

With the putback of 6656655 "zdb should be able to display blkptr signatures", we can now get the "signature" of the block pointers in a pool. To see an example, let's first put some content into an empty pool:

heavy# zpool create bigIO c0t0d0 c0t1d0
heavy# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
bigIO   928G  95.5K   928G     0%  ONLINE  -
heavy# mkfile 1m /bigIO/1m.txt
heavy# echo "dedup me" > /bigIO/ejk.txt
heavy# cp /bigIO/ejk.txt /bigIO/ejk2.txt
heavy# echo "no dedup" > /bigIO/nope.txt
heavy# cp /bigIO/ejk.txt /bigIO/ejk3.txt

Now let's run zdb with the new option "-S". We pass in "user:all", where "user" tells zdb that we only want user data blocks (as opposed to both user and metadata) and "all" tells zdb to print out all blocks (skipping any checksum algorithm strength comparisons).

heavy# zdb -L -S user:all bigIO
0       131072  1       ZFS plain file  fletcher2       uncompressed    0:0:0:0
0       131072  1       ZFS plain file  fletcher2       uncompressed    0:0:0:0
0       131072  1       ZFS plain file  fletcher2       uncompressed    0:0:0:0
0       131072  1       ZFS plain file  fletcher2       uncompressed    0:0:0:0
0       131072  1       ZFS plain file  fletcher2       uncompressed    0:0:0:0
0       131072  1       ZFS plain file  fletcher2       uncompressed    0:0:0:0
0       131072  1       ZFS plain file  fletcher2       uncompressed    0:0:0:0
0       131072  1       ZFS plain file  fletcher2       uncompressed    0:0:0:0
0       512     1       ZFS plain file  fletcher2       uncompressed    656d207075646564:a:ada40e0eac8cac80:140
0       512     1       ZFS plain file  fletcher2       uncompressed    656d207075646564:a:ada40e0eac8cac80:140
0       512     1       ZFS plain file  fletcher2       uncompressed    7075646564206f6e:a:eac8cac840dedc0:140
0       512     1       ZFS plain file  fletcher2       uncompressed    656d207075646564:a:ada40e0eac8cac80:140
heavy# 

This displays the signature of each block pointer, where the columns are: level, physical size, number of DVAs, object type, checksum type, compression type, and finally the actual checksum of the block.
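
For example, breaking down the last line of the output above:

level:       0
psize:       512 (bytes)
ndvas:       1
type:        ZFS plain file
checksum:    fletcher2
compression: uncompressed
cksum:       656d207075646564:a:ada40e0eac8cac80:140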

So this is interesting, but what could we do with this information? Well, one thing we can do is figure out how much your pool could take advantage of dedup. Let's assume that the dedup implementation does its matching based on the actual checksum, and that any checksum algorithm is strong enough (in reality, we'd need sha256 or stronger). So starting with the above pool and using a simple perl script 'line_by_line_process.pl' (shown at the end of this blog), we find:

heavy# zdb -L -S user:all bigIO > /tmp/zdb_out.txt
heavy# sort -k 7 -t "`/bin/echo '\t'`" /tmp/zdb_out.txt > /tmp/zdb_out_sorted.txt
heavy# ./line_by_line_process.pl /tmp/zdb_out_sorted.txt 
total PSIZE:               0t1050624
total unique PSIZE:        0t132096
total that can be duped:   0t918528
percent that can be duped  87.4269005847953%
heavy#   

In our trivial case, we can see that we could get a huge win - 87% of the pool can be dedup'd! Upon closer examination, though, we notice that mkfile writes out blocks of all zeros, so all eight 128K blocks of the 1MB file are identical (and if you had compression enabled, there wouldn't be any actual blocks for this file at all). So let's look at a case where just the "ejk.txt" contents are getting dedup'd:

heavy# zpool destroy bigIO
heavy# zpool create bigIO c0t0d0 c0t1d0
heavy# dd if=/dev/random of=/bigIO/1m.txt bs=1024 count=5
5+0 records in
5+0 records out
heavy# echo "dedup me" > /bigIO/ejk.txt
heavy# cp /bigIO/ejk.txt /bigIO/ejk2.txt
heavy# echo "no dedup" > /bigIO/nope.txt
heavy# cp /bigIO/ejk.txt /bigIO/ejk3.txt
heavy# zdb -L -S user:all bigIO > /tmp/zdb_out.txt
heavy# sort -k 7 -t "`/bin/echo '\t'`" /tmp/zdb_out.txt > /tmp/zdb_out_sorted.txt
heavy# ./line_by_line_process.pl /tmp/zdb_out_sorted.txt       
total PSIZE:               0t7168
total unique PSIZE:        0t6144
total that can be duped:   0t1024
percent that can be duped  14.2857142857143%
heavy# 

Ok, in this different setup we see that ~14% of the capacity can actually be dedup'd - still a nice savings.
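
By the way, if you'd rather not keep a perl script around, the same estimate can be computed with sort and nawk - a rough sketch (the tab-as-separator trick is the same one used above; adjust to taste):

heavy# zdb -L -S user:all bigIO | sort -k 7 -t "`/bin/echo '\t'`" | \
    nawk -F'\t' '{ total += $2; if ($7 != last) unique += $2; last = $7 }
    END { printf("percent that can be duped  %f%%\n", (total - unique) / total * 100) }'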

So the question becomes - how dedupalicious is your pool?




ps: here is the simple perl script 'line_by_line_process.pl':

#!/usr/bin/perl

# Sum up how much of a pool's user data could be dedup'd, given the
# checksum-sorted output of 'zdb -L -S user:all <pool>'.  Run this script as:
#   % ./line_by_line_process.pl /tmp/zdb_out_sorted.txt

# total PSIZE
$totalps = 0;

# total unique PSIZE
$totalups = 0;

$last_cksum = -1;

$path = $ARGV[0];
open(FH, $path) or die "Can't open $path: $!";

while (<FH>) {
        my $line = $_;
        # columns: level, psize, ndvas, type, checksum alg, compression, checksum
        ($level, $psize, $ndvas, $type, $cksum_alg, $compress, $cksum) = split /\t/, $line, 7;
        # the input is sorted by checksum, so a block is unique only if its
        # checksum differs from the previous line's
        if ($cksum ne $last_cksum) {
                $totalups += $psize;
        }
        $last_cksum = $cksum;
        $totalps += $psize;
}
close(FH);

print "total PSIZE:               0t".$totalps."\n";
print "total unique PSIZE:        0t".$totalups."\n";
print "total that can be duped:   0t".($totalps - $totalups)."\n";
print "percent that can be duped  ".(($totalps - $totalups) / $totalps * 100)."%\n";

Thursday Oct 04, 2007

FileBench : a New Era in FS Performance

I'm happy to report that FileBench has gone through a significant overhaul, and we're happy to release the updated version. Bits will be posted to sourceforge tonight. I'm also happy to report that FileBench is now included in OpenSolaris. You can find it under the new "/usr/benchmarks" path.

Ok, great, just what the industry needed - FileBench is just another simple benchmark, right? Nope.

First let me give you, dear reader, a taste of what we get internally and externally here at the ZFS team:

"I ran bonnie, dd, and tar'd up myfav_linuxkernel_tarball.tar.  Your file system sucks."

Though sometimes i'm happy to note we get:

"I ran bonnie, dd, and tar'd up myfav_linuxkernel_tarball.tar.  Your file system rulz!."

What is FileBench?

It is nice to hear that your file system does in fact "rule", but the problem with the above is that bonnie, dd, and tar are (obviously) not a comprehensive set of applications that can completely measure a file system. IOzone is quite nice, but it only tests basic I/O patterns. And there are many other file system benchmarks (mkfile, fsstress, fsrandom, mongo, iometer, etc.). The problem with all of these benchmarks is that they only measure a specific workload (or a small set of workloads). None of them actually lets you measure what a real application does. Yes, part of what Oracle does is random aligned reads/writes (which many of the aforementioned benchmarks can measure), but the key is how the random reads/writes interact with each other *and* how they interact with the other parts of what Oracle does (the log writer being a major example). None of the aforementioned benchmarks can do that.

Enter FileBench.

So how does FileBench differ? FileBench is a framework of file system workloads for measuring and comparing file system performance. The key is in the workloads. FileBench has a simple .f language that allows you to describe and build workloads to simulate applications. You can create workloads to replace all the pre-mentioned benchmarks. But more importantly, you can create workloads to simulate complex applications such as a database. For instance, i didn't have to buy an Oracle license nor figure out how to install it on my system to find out if my changes to the vdev cache for ZFS helped database performance or not. I just used FileBench and its 'oltp.f' workload.

How do i Start using FileBench?

The best place to start is the quick start guide. You can find out lots more at our wiki - lots of good information and a great place to contribute. For troubleshooting, see the gotchas section.
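
To give you a flavor, a first run looks roughly like this - a sketch only, since binary names, paths, and workload variables can differ between builds (trust the quick start guide over this), and the $dir setting here is just an example directory to test against:

# cd /usr/benchmarks/filebench
# bin/go_filebench
filebench> load oltp
filebench> set $dir=/swim
filebench> run 60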

How do i Contribute to FileBench?

If you'd like to write your own workloads, check out the very nice documentation Drew Wilson wrote up. This is actually where we (and by "we" i mean the FileBench community, not just us Sun people) would love the most contribution. We would really like to have people verify the workloads we have today and build the workloads for tomorrow. This is a great opportunity for industry types and academics. We very much plan to incorporate new workloads into FileBench.

If you would like to help with the actual code of the filebench binaries, find us at perf-discuss@opensolaris.org.

FileBench is done, right?

Um, nope. There's actually lots of interesting work up ahead for FileBench. Besides building new workloads, the two other major focuses that need help are: multi-client support and plug-ins. The first is pretty obvious - we need to have support for multiple clients to benchmark NFS and CIFS. And we need that to work on multiple platforms (OpenSolaris, linux, *BSD, OSX, etc). The second is where experts in specific communities can help out. Currently, FileBench goes through whatever client/initiator implementation you have on your machine. But if you wanted to just do a comparison of server/target implementations, then you need a plug-in built into FileBench that both systems utilize (even if they are different operating systems). We started prototyping a plug-in for NFSv3. We've also thought about NFSv4, CIFS, and iSCSI. A community member suggested XAM. This is a very interesting space to explore.

So What does this all Mean?

If you need to do file system benchmarking, try out FileBench. Let us know what you like and what needs some love.

If you're thinking about developing a new file system benchmark, consider creating a new workload for FileBench instead. If that works out for you, please share your work. If for some reason it doesn't, please let us know why.

We really believe in the architecture of FileBench and really want it to succeed industry-wide (file systems, and hence storage). We know it works quite well on OpenSolaris and would love other developers to make sure it works just as well on their platforms (linux, *BSD, OSX, etc.).

Happy Benchmarking and long live RMC!

Tuesday Jun 12, 2007

ZFS on a laptop?

Sun is known for servers, not laptops. So a filesystem designed by Sun would surely be too powerful and too "heavy" for laptops - the features of a "datacenter" filesystem just wouldn't fit on a laptop. Right? Actually... no. As it turns out, ZFS is a great match for laptops.

Backup

One of the most important things a user needs to do on a laptop is back up their data. Copying your data to DVD or an external drive is one way. ZFS snapshots with 'zfs send' and 'zfs recv' are a better way. Due to its architecture, snapshots in ZFS are very fast and only take up as much space as the data that has changed. For a typical user, taking a snapshot every day, for example, will only take up a small amount of capacity.

So let's start off with a ZFS pool called 'swim' and two filesystems: 'Music' and 'Pictures':

fsh-mullet# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
swim            157K  9.60G    21K  /swim
swim/Music       18K  9.60G    18K  /swim/Music
swim/Pictures    19K  9.60G    19K  /swim/Pictures
fsh-mullet# ls /swim/Pictures 
bday.jpg        good_times.jpg

Taking a snapshot 'today' of Pictures is this easy:

fsh-mullet# zfs snapshot swim/Pictures@today

And now we can see the contents of snapshot 'today' via the '.zfs/snapshot' directory:

fsh-mullet# ls /swim/Pictures/.zfs/snapshot/today 
bday.jpg        good_times.jpg
fsh-mullet# 

If you want to take a snapshot of all your filesystems, then you can do:

fsh-mullet# zfs snapshot -r swim@today      
fsh-mullet# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
swim                  100M  9.50G    21K  /swim
swim@today               0      -    21K  -
swim/Music            100M  9.50G   100M  /swim/Music
swim/Music@today         0      -   100M  -
swim/Pictures          19K  9.50G    19K  /swim/Pictures
swim/Pictures@today      0      -    19K  -
fsh-mullet# 
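
To make the daily snapshot automatic, a crontab entry along these lines would do the trick (a sketch - the schedule and snapshot name are just examples, and note that '%' has to be escaped inside a crontab entry):

0 3 * * * /usr/sbin/zfs snapshot -r swim@`date +\%Y-\%m-\%d`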

Now that you have snapshots, you can use the built-in 'zfs send' and 'zfs recv' to back up your data - even to another machine.

fsh-mullet# zfs send swim/Pictures@today | ssh host2 zfs recv -d backupswim

After you've sent over the first snapshot via 'zfs send', you can take a newer snapshot (say, swim/Pictures@tomorrow) and send just the changes with an incremental 'zfs send':

fsh-mullet# zfs send -i swim/Pictures@today swim/Pictures@tomorrow | ssh host2 zfs recv -d backupswim

Now let's look at the backup ZFS pool 'backupswim' on host 'host2':

host2# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
backupswim                  100M  9.50G    21K  /backupswim
backupswim/Music            100M  9.50G   100M  /backupswim/Music
backupswim/Music@today         0      -   100M  -
backupswim/Pictures          18K  9.50G    18K  /backupswim/Pictures
backupswim/Pictures@today      0      -    18K  -

What's really nice about using ZFS's snapshots is that you only need to send over (and store) the differences between snapshots. So if you're doing video editing on your laptop, and have a giant 10GB file, but only change, say, 1KB of data on this day, with ZFS you only have to send over 1KB of data - not the entire 10GB of the file. This also means you don't have to store multiple 10GB versions (one per snapshot) of the file on your backup device.

You can also back up to an external hard drive: create a backup pool on the second drive, and just 'zfs send/recv' your nightly snapshots, as sketched below.
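
Something like this, assuming the external drive shows up as c8t0d0 (the device name here is just an example):

fsh-mullet# zpool create backupswim c8t0d0
fsh-mullet# zfs send swim/Pictures@today | zfs recv -d backupswim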

Reliability

Since laptops (typically) only have one disk, handling disk errors is very important. Bill introduced ditto blocks to handle partial disk failures. With typical filesystems, if the part of the disk that stores your metadata is corrupted or failing, you're screwed - there's no way to access the data associated with the inaccessible metadata short of restoring from backup. With ditto blocks, ZFS stores multiple copies of the metadata in the pool. In the single-disk case, we strategically store those copies at different locations on the disk (such as at the front and the back of the disk). A subtle partial disk failure can render other filesystems useless, whereas ZFS can survive it.

Matt took ditto blocks one step further and allowed the user to apply them to any filesystem's data. What this means is that you can make your more important data more reliable by stashing away multiple copies of your precious data (without muddying your namespace). Here's how you store two copies of your pictures:

fsh-mullet# zfs set copies=2 swim/Pictures
fsh-mullet# zfs get copies swim/Pictures
NAME           PROPERTY  VALUE          SOURCE
swim/Pictures  copies    2              local
fsh-mullet# 

Note: the copies property only affects future writes (not existing data), so i recommend setting it at filesystem creation time:

fsh-mullet# zfs create -o copies=2 swim/Music
fsh-mullet# zfs get copies swim/Music
NAME        PROPERTY  VALUE       SOURCE
swim/Music  copies    2           local
fsh-mullet# 

Built-in Compression

With ZFS, compression comes built-in. The current algorithms are lzjb (based on Lempel-Ziv) and gzip. Now it's true that your jpegs and mp4s are already compressed quite nicely, but if you want to save capacity on other filesystems, all you have to do is:

fsh-mullet# zfs set compression=on swim/Documents
fsh-mullet# zfs get compression swim/Documents
NAME            PROPERTY     VALUE           SOURCE
swim/Documents  compression  on              local
fsh-mullet# 

The default compression algorithm is lzjb. If you want to use gzip, then do:

fsh-mullet# zfs set compression=gzip swim/Documents
fsh-mullet# zfs get compression swim/Documents     
NAME            PROPERTY     VALUE           SOURCE
swim/Documents  compression  gzip            local
fsh-mullet# 

That single disk stickiness

A major problem with laptops today is the single point of failure: the single disk. It makes complete sense today that laptops are designed this way, given the physical space and power constraints. But looking forward, as, say, flash gets cheaper and more reliable, it becomes more and more feasible to replace the single disk in laptops. And since flash saves physical space, you could actually fit more than one flash device in a laptop. Wouldn't it be really cool if you could then build RAID on top of the multiple devices? Introducing a hardware RAID controller doesn't make any sense - but software RAID does.

ZFS allows you to do mirroring as well as RAID-Z (ZFS's unique form of RAID-5) - in software.

Creating a mirrored pool is easy:

diskmonster# zpool create swim mirror c7t0d0 c7t1d0
diskmonster# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        swim        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0

errors: No known data errors
diskmonster# 

Similarly, creating a RAID-Z is also easy:

diskmonster# zpool create swim raidz c7t0d0 c7t1d0 c7t2d0 c7t5d0
diskmonster# zpool status
  pool: swim
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        swim        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0

errors: No known data errors
diskmonster# 

With either of these configurations, your laptop can now handle a whole device failure.

ZFS on a laptop - a perfect fit.

Tuesday May 29, 2007

NCQ performance analysis


Tuesday Nov 21, 2006

zil_disable

People are finding that setting 'zil_disable' seems to increase their performance - especially NFS/ZFS performance. But what does setting 'zil_disable' to 1 really do? It completely disables the ZIL. Ok fine, what does that mean?

Disabling the ZIL causes ZFS to not immediately write synchronous operations to disk/storage. With the ZIL disabled, synchronous operations (such as fsync(), O_DSYNC, OP_COMMIT for NFS, etc.) are still written to disk, just with the same guarantees as asynchronous operations. That means success can be returned to applications/NFS clients before the data has been committed to stable storage. In the event of a server crash, any data that hasn't been written out to the storage is lost forever.

With the ZIL disabled, no ZIL log records are written.

Note: disabling the ZIL does NOT compromise filesystem integrity. Disabling the ZIL does NOT cause corruption in ZFS.

Disabling the ZIL is definitely frowned upon and can cause your applications much confusion. It can cause corruption for NFS clients in the case where a reply is sent to the client before the server crashes, and the server crashes before the data is committed to stable storage. If you can't live with this, then don't turn off the ZIL.

The 'zil_disable' tunable will go away once 6280630 "zil synchronicity" is put back.

Hmm, so all of this sounds shady - so why did we add 'zil_disable' to the code base? Not for people to use, but as an easy way to do performance measurements (to isolate areas outside the ZIL).
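
For completeness (and strictly for throwaway performance experiments), the tunable is typically flipped either on the fly with mdb, or persistently via /etc/system - a sketch; note that the setting may only take hold for filesystems mounted after the change:

# echo zil_disable/W0t1 | mdb -kw

or, across reboots, in /etc/system:

set zfs:zil_disable = 1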

If you'd like more information on how the ZIL works, check out Neil's blog and Neelakanth's blog.
