Exploring ZFS options for storing crash dumps

Systems fail, for a variety of reasons. That's what keeps me in a job. When they do, you need to store data about the way they failed if you want to be able to diagnose what happened. This is what we call a crash dump. But this data consumes a large amount of space on disk, so I thought I'd explore which ZFS technologies can help reduce the overhead of storing it.

The approach I took was to make a system (an x4600 M2 with 256GB of memory) moderately active using the filebench benchmarking utility, available from the Solaris 11 package repository, then take a system panic using reboot -d, and then repeat the process twice more, taking live crash dumps using savecore -L. This generates three separate crash dumps, which will contain some unique and some duplicated data.
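For reference, the dump-generating steps themselves are short. The filebench package name below is my recollection of the IPS name, and I'm assuming savecore is enabled in dumpadm(1M) so the panic dump is extracted from the dump device automatically after the reboot:

# pkg install benchmark/filebench
# reboot -d
# savecore -L

reboot -d forces the panic that produces the first dump; running savecore -L after further filebench activity captures a live dump of the running system, and doing that twice gives the second and third dumps.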

There are a number of technologies available to us:

  • savecore compressed crash dumps
    • not a ZFS technology, but the default behavior in Oracle Solaris 11
    • Have a read of Steve Sistare's great blog on the subject
  • ZFS deduplication
    • Works on a block level
    • Should store only one copy of a block if it's repeated among multiple crash dumps
  • ZFS snapshots
    • If we are modifying a file, we should only save the changes
      • To make this viable, I had to modify the savecore program to create a snapshot of a filesystem on the fly, and reopen the existing crash dump and modify the file rather than create a new one
  • ZFS compression
    • either in addition to or instead of savecore compression
    • Multiple different levels of compression
      • I tried LZJB and GZIP (at level 9)

All of these can be applied to both compressed (vmdump) and non-compressed (vmcore) crash dumps. So I created multiple zfs data sets with these properties and repeated the crash dump creation, adjusting the savecore configuration using dumpadm(1M) to save to the various data sets, either using savecore compression or not.
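As a sketch of that setup (the data set names are my own, and "space" is the pool that appears in the zdb output further down):

# zfs create -o compression=gzip-9 space/gz9compress
# zfs create -o compression=lzjb space/lzjbcompress
# zfs create -o dedup=on space/dedup
# dumpadm -s /space/gz9compress
# dumpadm -z on

dumpadm -s points savecore at the data set to write to, and dumpadm -z on|off controls whether it writes a compressed vmdump or an uncompressed vmcore.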

Remember also that one of the motivations for saving a crash dump compressed is to reduce the time it takes to get it from the dump device onto a file system, so you can send it to Oracle support for analysis.

So what do we get?

Let's look at the default case: no compression on the file system, but with the level of compression achieved by savecore (which uses the same compression as the panic process, either LZJB or BZIP2). Here we have three crash dumps, totaling 8.86GB. If these same dumps are stored uncompressed we get 36.4GB of crash dumps, so savecore compression is already saving us a lot of space.

Interestingly, the use of dedup seems to give us no benefit at all. I wouldn't have expected it to help on vmdump-format compressed dumps, as the act of compression is likely to make many more blocks unique, but I was surprised that the vmcore-format uncompressed dumps showed no benefit either. It's hard to see how dedup is behaving, because from a ZFS layer perspective the data is still full size, but zdb(1M) can show us the dedup table:

# zdb -D space
DDT-sha256-zap-duplicate: 37869 entries, size 329 on disk, 179 in core
DDT-sha256-zap-unique: 574627 entries, size 323 on disk, 191 in core

dedup = 1.03, compress = 1.00, copies = 1.00, dedup * compress / copies = 1.03

The extra 0.03 of dedup only came about when I started using the same pool for building Solaris kernel code.
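If you just want the headline figure rather than digging through the dedup table, the same ratio is also exposed as a pool property, which here reports roughly the same 1.03x:

# zpool get dedupratio space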

I believe the lack of benefit is due to the fact that dedup works at a block level, and as such even the change of a single pointer in a single data structure in a block of the crash dump results in that block being unique and not being deduplicated.
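A quick way to convince yourself of this, independent of crash dumps: two record-sized files that differ in a single byte produce completely unrelated checksums, so ZFS will never treat their blocks as duplicates. The file names below are purely illustrative (and this assumes the first random byte wasn't already zero):

# dd if=/dev/urandom of=/tmp/a bs=128k count=1
# cp /tmp/a /tmp/b
# dd if=/dev/zero of=/tmp/b bs=1 count=1 conv=notrunc
# digest -a sha256 /tmp/a /tmp/b

The two hashes share nothing, which is exactly what happens when a single kernel pointer differs between otherwise similar blocks in two crash dumps.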

In light of this, the fact that my modified savecore code using snapshots didn't show any benefit is not really a surprise.

So that leaves compression, and this is where we get some real benefits. By enabling both savecore and ZFS compression we get between a 20% and 50% saving in disk space. On uncompressed dumps, ZFS compression brings the data size down to between 4.63GB and 8.03GB, i.e. comparable to using savecore compression alone.
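Unlike dedup, the effect of compression is visible directly at the ZFS layer, so a quick look at the properties of the data sets sketched earlier is enough to see it (data set name again just an example):

# zfs get compressratio,used space/gz9compress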

The table below shows the usage in each case.

"> Name  Savecore compression
 ZFS DEDUP
 ZFS Snapshot
 ZFS compression
 Size (GB)
 % of default
 Default  Yes  No  No  No  8.86  100
 DEDUP  Yes  Yes  No  No  8.86  100
 Snapshot  Yes
 No  Yes  No  14.6
 165
 GZ9compress  Yes  No  No
 GZIP level 9
 4.46  50.3
 LZJBcompress  Yes
 No
 No  LZJB  7.2  81.2
 Expanded  No  No  No  No  36.4  410
 ExpandedDedup  No  Yes  No  No  36.4  410
 ExpandedSnapshot
 No  No  Yes  No  37.7  425
 ExpandedGZ9
 No  No No GZIP level 9  4.63
 52.4
 ExpandedLZJB
 No  No No LZJB  8.03
 91

The anomaly here is the snapshot of savecore-compressed data. I can only explain that by saying that, though I repeated the same process, the crash dumps created were larger in that particular case: roughly 5GB each, instead of one of 5GB and two lots of 2GB.

So what does this tell us? Well, fundamentally, the Oracle Solaris 11 default does a pretty good job of getting a crash dump file off the dump device and storing it in an efficient way. Block-level optimisations (dedup and snapshots) don't help in minimizing the data size. And compression helps reduce the data size (big surprise there, not!).

If disk space is an issue for you, consider creating a compressed zfs data set to store your crash dumps in.

If you want to analyse crash dumps in situ, then consider using uncompressed crash dumps, but written to a compressed zfs data set.

Personally, as I tend to want to look at the dumps as quickly as possible, I'll be setting my lab machines to create uncompressed crash dumps using

# dumpadm -z off

but will create a compressed zfs data set, and use

# dumpadm -s /path/to/compressed/data/set

to make sure I can analyse the crash dump, but still not waste storage space.
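Put together, that recipe is only a few lines. The data set name is just an example, and gzip-9 is chosen because it gave the best ratio in the table above (lzjb is the cheaper-on-CPU alternative):

# zfs create -o compression=gzip-9 rpool/crash
# dumpadm -z off
# dumpadm -s /rpool/crash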



