Destroying some unneeded ZFS snapshots

Last night I found that I had a number of unused ZFS snapshots and decided to destroy some of them. The first destroy locked up the system; Nagios reported everything was dead, so I remotely power-cycled the box. After it came back, I noticed that a scrub was in progress. The scrub had not been triggered by the power cycle: it was the weekly automatic scrub, which had started some hours before my destroy attempt.

I decided to let the scrub complete before proceeding.
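
If you would rather not wait, a running scrub can be cancelled and kicked off again later. I chose to let it finish, but for reference, something along these lines should work (assuming the pool is named storage, as below):

$ zpool status storage        # progress shows up in the "scan:" line
# zpool scrub -s storage      # as root: stop the scrub that is in progress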

Here is what I started with:

$ zpool status
  pool: storage
 state: ONLINE
 scan: scrub repaired 0 in 20h24m with 0 errors on Wed Oct  3 23:38:44 2012
config:

        NAME                 STATE     READ WRITE CKSUM
        storage              ONLINE       0     0     0
          raidz2-0           ONLINE       0     0     0
            gpt/disk01-live  ONLINE       0     0     0
            gpt/disk02-live  ONLINE       0     0     0
            gpt/disk03-live  ONLINE       0     0     0
            gpt/disk04-live  ONLINE       0     0     0
            gpt/disk05-live  ONLINE       0     0     0
            gpt/disk06-live  ONLINE       0     0     0
            gpt/disk07-live  ONLINE       0     0     0

errors: No known data errors

$ zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
storage                               7.99T   911G  14.5G  /storage
storage/compressed                    7.96T   911G  53.3K  /storage/compressed
storage/compressed/bacula             7.59T   911G  4.66T  /storage/compressed/bacula
storage/compressed/bacula@2011.03.21  2.93T      -  3.46T  -
storage/compressed/home               2.75G   911G  2.75G  /storage/compressed/home
storage/compressed/jails               373G   911G   373G  /storage/compressed/jails
storage/pgsql                         13.8G   911G  13.8G  /storage/pgsql

$ zfs list -t snapshot
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
storage@2011.03.21                        93.8K      -  14.5G  -
storage/compressed@2011.03.21             38.4K      -  51.2K  -
storage/compressed/bacula@2010.10.19      8.90G      -  2.29T  -
storage/compressed/bacula@2010.10.20       159K      -  2.29T  -
storage/compressed/bacula@2010.10.20a     78.9K      -  2.29T  -
storage/compressed/bacula@2010.10.20b     78.9K      -  2.29T  -
storage/compressed/bacula@pre.pool.merge  1.12M      -  2.58T  -
storage/compressed/bacula@2011.03.21       360G      -  3.46T  -
storage/compressed/home@2011.03.21         122K      -  2.75G  -
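
To see exactly how old each snapshot is, you can also ask zfs list to show and sort by the creation property (a quick aside, not something I needed here):

$ zfs list -t snapshot -o name,used,creation -s creation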

As you can see, I have some snapshots which are nearly two years old. I don’t need them any more. Let’s let them go.

# zfs destroy storage/compressed/bacula@2010.10.20
# zfs destroy storage/compressed/bacula@2010.10.20a
# zfs destroy storage/compressed/bacula@2010.10.20b
# zfs destroy storage/compressed/bacula@pre.pool.merge
# zfs destroy storage/compressed/bacula@2010.10.19
# zfs destroy storage@2011.03.21
# zfs destroy storage/compressed@2011.03.21
# zfs destroy storage/compressed/home@2011.03.21
# zfs destroy storage/compressed/bacula@2011.03.21
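
Typing each destroy by hand is fine for a handful of snapshots. If there were many more, a one-liner could do it; this is only a sketch, and the grep pattern (2010.10) is just an example of matching the snapshots you intend to remove, so review the list before piping it into destroy:

# zfs list -H -t snapshot -o name | grep '@2010\.10' | xargs -n 1 zfs destroy

Newer ZFS versions also accept -n and -v on zfs destroy for a dry run that reports what would be freed; if your version has them, they make a good sanity check first.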

Now I have:

$ zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
storage                    5.06T  3.82T  14.5G  /storage
storage/compressed         5.03T  3.82T  53.3K  /storage/compressed
storage/compressed/bacula  4.66T  3.82T  4.66T  /storage/compressed/bacula
storage/compressed/home    2.75G  3.82T  2.75G  /storage/compressed/home
storage/compressed/jails    373G  3.82T   373G  /storage/compressed/jails
storage/pgsql              13.8G  3.82T  13.8G  /storage/pgsql

I’ve gone from 8T to 5T in use, which means I have about 3T of room to grow. That’s enough for now. :)
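
To confirm where the space went, ZFS can break usage down per dataset. Something like this shows how much of a dataset is consumed by snapshots versus live data (the -o space shortcut should be available on any reasonably recent ZFS; usedbysnapshots is the underlying property):

$ zfs list -o space storage/compressed/bacula
$ zfs get usedbysnapshots storage/compressed/bacula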

Here is a graph from munin which shows the change in disk consumption resulting from my Bacula and ZFS efforts over the past two days.
