Understanding the Space Used by ZFS

Until recently, I was confused and frustrated by zfs list output whenever I tried to free up space on my hard drive.

Take this example using a 1 GB zpool:

bleonard@os200906:~# mkfile 1G /dev/dsk/disk1
bleonard@os200906:~# zpool create tank disk1
bleonard@os200906:~# zpool list tank
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank  1016M    73K  1016M     0%  ONLINE  -
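
If you're following along on a system without mkfile (Linux, for example), an ordinary file referenced by its full path also works as a backing store for a test pool. A hypothetical equivalent:

# truncate -s 1G /var/tmp/disk1
# zpool create tank /var/tmp/disk1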

Now let's create some files and snapshots:

bleonard@os200906:~# mkfile 100M /tank/file1
bleonard@os200906:~# zfs snapshot tank@snap1
bleonard@os200906:~# mkfile 100M /tank/file2
bleonard@os200906:~# zfs snapshot tank@snap2
bleonard@os200906:~# zfs list -t all -r tank
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         200M   784M   200M  /tank
tank@snap1    17K      -   100M  -
tank@snap2      0      -   200M  -

The output here looks as I'd expect. I have used 200 MB of disk space, almost none of which is charged to the snapshots. snap1 refers to 100 MB of data (file1) and snap2 refers to 200 MB of data (file1 and file2).
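
The used and referenced properties behind those columns can also be queried directly; a sketch, showing the values from the listing above:

# zfs get used,referenced tank@snap1
NAME        PROPERTY    VALUE  SOURCE
tank@snap1  used        17K    -
tank@snap1  referenced  100M   -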

Now let's delete file1 and look at our zfs list output again:

bleonard@os200906:~# rm /tank/file1
bleonard@os200906:~# zpool scrub tank
bleonard@os200906:~# zfs list -t all -r tank
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         200M   784M   100M  /tank
tank@snap1    17K      -   100M  -
tank@snap2    17K      -   200M  -

Only one thing of note has changed: tank now refers to just 100 MB of data. file1 has been deleted and is referenced only by the snapshots. So why don't the snapshots reflect this in their USED column? You might expect snap1 to show 100 MB used; however, that would be misleading, because deleting snap1 would have no effect on the data used by the tank file system. Deleting snap1 would only free up 17K of disk space. We'll come back to this test case in a moment.
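
Newer ZFS releases can answer "what would deleting this snapshot free?" directly with a dry run; a sketch, assuming your release supports the -n (no-op) and -v (verbose) flags to zfs destroy:

# zfs destroy -nv tank@snap1
would destroy tank@snap1
would reclaim 17K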

There is an option to get more detail on the space consumed by snapshots. Although you can easily deduce from the example above that the snapshots are using 100 MB, the -o space option to zfs list saves you from doing the math:

bleonard@os200906:~# zfs list -t all -o space -r tank
NAME        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank         784M   200M      100M    100M              0       154K
tank@snap1      -    17K         -       -              -          -
tank@snap2      -    17K         -       -              -          -

Here we can clearly see that of the 200 MB used by our file system, 100 MB is used by snapshots (file1) and 100 MB is used by the dataset itself (file2). Of course, there are other factors that can affect the total amount used - see the zfs man page for details.
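
The same breakdown is available as individual properties, which is handy in scripts; a sketch using the property names from the zfs man page, with the values from the listing above:

# zfs get usedbysnapshots,usedbydataset,usedbychildren,usedbyrefreservation tank
NAME  PROPERTY              VALUE  SOURCE
tank  usedbysnapshots       100M   -
tank  usedbydataset         100M   -
tank  usedbychildren        154K   -
tank  usedbyrefreservation  0      -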

Now, if we were to delete snap1 (we know this is safe, because it's using almost no space):

bleonard@os200906:~# zfs destroy tank@snap1
bleonard@os200906:~# zfs list -t all -r tank
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         200M   784M   100M  /tank
tank@snap2   100M      -   200M  -

We can see that snap2 now shows 100 MB used. If I were to delete snap2, I would be deleting 100 MB of data (and reclaiming 100 MB of space).
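
When the rounded sizes leave any doubt, zfs get can report exact byte counts; a sketch (-H drops the header, -p prints raw, parseable numbers, on releases that support it):

# zfs get -Hp used tank@snap2

This prints the used value as an exact number of bytes rather than the rounded 100M shown above.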

Now let's look at a more realistic example - my home directory where I have Time Slider running:

bleonard@opensolaris:~$ zfs list -t all -r -o space rpool/export/home
NAME                                                       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool/export/home                                          25.4G  35.2G     17.9G   17.3G              0          0
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30       -   166M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-12-15:30                  -  5.06M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-12-15:56                  -  5.15M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-31-14:12                  -  54.6M         -       -              -          -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00       -  53.8M         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00        -  95.8M         -       -              -          -
rpool/export/home@zfs-backup-2010-09-09-09:04                  -  53.9M         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00        -  2.06G         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00        -  89.7M         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:00      -  18.3M         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15      -   293K         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30      -   293K         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45      -  1.18M         -       -              -          -

My snapshots are consuming almost 18 GB of space (the USEDSNAP value). However, summing the snapshots' individual USED values suggests I could reclaim only about 2.5 GB by deleting them all. In reality, the remaining 15.5 GB is referenced by two or more snapshots, so it isn't charged to any single snapshot's USED value.
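
You can check this shared-space arithmetic from the shell; a sketch, assuming a release where zfs list and zfs get accept -p for exact (parseable) byte values:

# zfs get -Hpo value usedbysnapshots rpool/export/home
# zfs list -Hp -t snapshot -r -o used rpool/export/home | awk '{sum += $1} END {print sum}'

The first command prints what destroying all snapshots would free; the second sums the snapshots' individual USED values. The gap between the two is the space referenced by more than one snapshot.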

I can get a better idea of which snapshots might reclaim the most space by dropping the -o space option so the REFER column shows up in the output:

bleonard@opensolaris:~$ zfs list -t all -r rpool/export/home
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
rpool/export/home                                          35.2G  25.4G  17.3G  /export/home
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30    166M      -  15.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:30              5.06M      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:56              5.15M      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-31-14:12              54.6M      -  15.5G  -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   53.8M      -  15.5G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    95.8M      -  15.5G  -
rpool/export/home@zfs-backup-2010-09-09-09:04              53.9M      -  17.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    2.06G      -  19.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    89.7M      -  15.5G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  1.18M      -  17.3G  -
rpool/export/home@zfs-auto-snap:hourly-2010-09-28-12:00        0      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-12:00      0      -  17.3G  -

In the above output, I can see that two snapshots, taken 26 minutes apart, are each referring to 28.5 GB of disk space. If I were to delete one of those snapshots and check the zfs list output again:

bleonard@opensolaris:~$ pfexec zfs destroy rpool/export/home@zfs-backup-2010-08-12-15:30
bleonard@opensolaris:~$ zfs list -t all -r rpool/export/home
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
rpool/export/home                                          35.2G  25.4G  17.3G  /export/home
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30    166M      -  15.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:56              12.5G      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-31-14:12              54.6M      -  15.5G  -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   53.8M      -  15.5G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    95.8M      -  15.5G  -
rpool/export/home@zfs-backup-2010-09-09-09:04              53.9M      -  17.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    2.06G      -  19.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    89.7M      -  15.5G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  1.18M      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-12:00   537K      -  17.3G  -
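
If I had wanted to see what actually changed between those two backup snapshots before destroying one, later ZFS releases offer zfs diff (an assumption about your release; it isn't in every build from this era):

# zfs diff rpool/export/home@zfs-backup-2010-08-12-15:30 rpool/export/home@zfs-backup-2010-08-12-15:56

Each output line is prefixed with M (modified), + (added), - (removed), or R (renamed).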

The zfs list output above also makes clear that the remaining snapshot is using 12.5 GB of space, and deleting it would reclaim much-needed space on my laptop:

bleonard@opensolaris:~$ zpool list rpool
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   149G   120G  28.5G    80%  ONLINE  -
bleonard@opensolaris:~$ pfexec zfs destroy rpool/export/home@zfs-backup-2010-08-12-15:56
bleonard@opensolaris:~$ zpool list rpool
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool   149G   108G  41.0G    72%  ONLINE  -

And that should be enough to keep Time Slider humming along smoothly and to prevent the warning dialog from appearing (lucky you if you haven't seen it yet).
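
If you do this kind of cleanup often, it helps to sort snapshots by the space they would individually free; a sketch using zfs list's -s sort flag:

# zfs list -t snapshot -r -o name,used,referenced -s used rpool/export/home

This lists snapshots in ascending order of USED, so the biggest single-snapshot wins appear at the bottom. Keep in mind the caveat from earlier: space shared between snapshots is not charged to any single snapshot's USED value.
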
Comments:

Thanks for the post.
I think the following makes things clearer:

# mkfile 110m file1
# zfs snapshot tank@snap1
# mkfile 120m file2
# zfs snapshot tank@snap2
# mkfile 150m file3
# zfs snapshot tank@snap3
# zfs list -t all -o name,volsize,used,referenced,reservation,refreservation,usedbyrefreservation,usedbysnapshots | egrep "NAME|tank"
NAME        VOLSIZE   USED  REFER  RESERV  REFRESERV  USEDREFRESERV  USEDSNAP
tank              -   381M   380M    none       none              0       38K
tank@snap1        -    19K   110M       -          -              -         -
tank@snap2        -    19K   230M       -          -              -         -
tank@snap3        -      0   380M       -          -              -         -

# rm file1
# zfs list -t all -o name,volsize,used,referenced,reservation,refreservation,usedbyrefreservation,usedbysnapshots | egrep "NAME|tank"
NAME        VOLSIZE   USED  REFER  RESERV  REFRESERV  USEDREFRESERV  USEDSNAP
tank              -   380M   270M    none       none              0      110M
tank@snap1        -    19K   110M       -          -              -         -
tank@snap2        -    19K   230M       -          -              -         -
tank@snap3        -    20K   380M       -          -              -         -

file1 is part of USEDSNAP and is shared by all the snapshots. If we delete one snapshot, we do not free anything from the pool (its USED column is small).

# rm file2
# zfs list -t all -o name,volsize,used,referenced,reservation,refreservation,usedbyrefreservation,usedbysnapshots | egrep "NAME|tank"
NAME        VOLSIZE   USED  REFER  RESERV  REFRESERV  USEDREFRESERV  USEDSNAP
tank              -   380M   150M    none       none              0      230M
tank@snap1        -    19K   110M       -          -              -         -
tank@snap2        -    19K   230M       -          -              -         -
tank@snap3        -    20K   380M       -          -              -         -

# zfs destroy tank@snap2
# zfs list -t all -o name,volsize,used,referenced,reservation,refreservation,usedbyrefreservation,usedbysnapshots | egrep "NAME|tank"
NAME        VOLSIZE   USED  REFER  RESERV  REFRESERV  USEDREFRESERV  USEDSNAP
tank              -   380M   150M    none       none              0      230M
tank@snap1        -    19K   110M       -          -              -         -
tank@snap3        -   120M   380M       -          -              -         -

tank = file3 (the live file system now refers to 150M)
tank@snap1 = file1
tank@snap3 = file1 + file2 + file3, and is the only dataset referencing file2, hence USED = 120M

If we delete tank@snap3, we will free some space from the pool (its USED column = 120M).
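
A dry run (on releases that support the -n and -v flags to zfs destroy, as sketched earlier in the post) would confirm this before committing:

# zfs destroy -nv tank@snap3
would destroy tank@snap3
would reclaim 120M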

This matches what is written in the ZFS Administration Guide:

USEDSNAP: Identifies the amount of disk space that is consumed by snapshots of this dataset, which would be freed if all of this dataset's snapshots were destroyed. Note that this is not the sum of the snapshots' used properties, because disk space can be shared by multiple snapshots.

Cheers
fred

Posted by guest on October 11, 2013 at 12:35 PM GMT
