I recently discovered that you can delete all snapshots from a ZFS filesystem with a single command. It came to me via fortune:
You can delete a range of ZFS snapshots (a-z) in multiple ways.
The following will delete d and all earlier snapshots:
zfs destroy mypool/data@%d
To delete d and all later snapshots:
zfs destroy mypool/data@d%
To delete all dataset snapshots:
zfs destroy mypool/data@%
Make sure to let ZFS perform a dry run (-n option) first and display (-v) what
it would do to confirm that the delete operation is removing exactly what you
intended.
— Benedict Reuschling
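Following that advice, a dry run of the first example would look something like this (mypool/data and d are just the placeholder names from the quote):

# -n does a dry run (nothing is destroyed); -v lists each snapshot that would go
zfs destroy -nv mypool/data@%d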
Today, I decided to retire a few iocage jails which have been powered off for a while. I will use my newly found knowledge and combine it with some zfs rename and sanoid configuration to archive these filesystems for a bit longer.
In this post:
- FreeBSD 12.2
- iocage 1.2_6
- sanoid 2.0.3
I have used sanoid for a while to manage my snapshots.
The goal
I will create a new filesystem within the same zpool. This will allow me to rename the existing filesystems, effectively moving them out from under system/iocage/jails and into a new location system/trash. I’m calling it trash, but this is not a trash can as commonly used in desktop user interfaces. I’m just giving it that name. I could have called it archive or ngatoto and the result would be the same.
I will instruct sanoid to not snapshot the newly-relocated filesystems.
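In rough outline, the plan boils down to these two commands (a sketch only; <jailname> stands in for each retired jail, and the commands I actually ran are shown below):

# create the holding area, then move each retired jail's filesystem into it
zfs create system/trash
zfs rename system/iocage/jails/<jailname> system/trash/<jailname>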
The old iocage jails
These are the jails in question:
[dan@slocum:~] $ iocage list -l | grep off
+-----+--------------------+------+-------+------+-----------------+----------------+-----+---------------+----------+
| JID | NAME               | BOOT | STATE | TYPE | RELEASE         | IP4            | IP6 | TEMPLATE      | BASEJAIL |
+=====+====================+======+=======+======+=================+================+=====+===============+==========+
| -   | dev-ingress01      | off  | down  | jail | 12.2-RELEASE-p4 | [redacted]     | -   | -             | no       |
| -   | dev-nginx01        | off  | down  | jail | 12.2-RELEASE-p4 | [redacted]     | -   | -             | no       |
| -   | devgit-ingress01   | off  | down  | jail | 12.2-RELEASE-p4 | ix2|[redacted] | -   | -             | no       |
| -   | devgit-nginx01     | off  | down  | jail | 12.2-RELEASE-p4 | ix2|[redacted] | -   | -             | no       |
| -   | mobile-nginx01     | off  | down  | jail | 12.2-RELEASE-p4 | ix2|[redacted] | -   | -             | no       |
| -   | stage-ingress01    | off  | down  | jail | 12.2-RELEASE-p4 | ix2|[redacted] | -   | -             | no       |
| -   | stage-nginx01      | off  | down  | jail | 12.2-RELEASE-p4 | ix2|[redacted] | -   | -             | no       |
| -   | test-ingress01     | off  | down  | jail | 12.2-RELEASE-p4 | [redacted]     | -   | -             | no       |
| -   | test-nginx01       | off  | down  | jail | 12.2-RELEASE-p4 | [redacted]     | -   | stage-nginx01 | no       |
+-----+--------------------+------+-------+------+-----------------+----------------+-----+---------------+----------+
| JID | NAME               | BOOT | STATE | TYPE | RELEASE         | IP4            | IP6 | TEMPLATE      | BASEJAIL |
+=====+====================+======+=======+======+=================+================+=====+===============+==========+
[dan@slocum:~] $
All of those powered-off jails have been converted to vanilla jails:
[dan@slocum:~] $ zfs list -r system/jails
NAME                              USED  AVAIL  REFER  MOUNTPOINT
system/jails                      240G  12.4T   304K  /jails
system/jails/dev-ingress01       15.1G  12.4T  13.2G  /jails/dev-ingress01
system/jails/dev-nginx01         6.42G  12.4T  6.04G  /jails/dev-nginx01
system/jails/devgit-ingress01    10.4G  12.4T  9.38G  /jails/devgit-ingress01
system/jails/devgit-nginx01      8.68G  12.4T  7.17G  /jails/devgit-nginx01
system/jails/mx-ingress-test     1.30G  12.4T  1.30G  /jails/mx-ingress-test
system/jails/mx-ingress01        2.10G  12.4T  1.88G  /jails/mx-ingress01
system/jails/stage-ingress01      127G  12.4T   125G  /jails/stage-ingress01
system/jails/stage-nginx01       6.21G  12.4T  5.83G  /jails/stage-nginx01
system/jails/stagegit-ingress01  23.7G  12.4T  15.0G  /jails/stagegit-ingress01
system/jails/stagegit-nginx01    2.60G  12.4T  2.32G  /jails/stagegit-nginx01
system/jails/test-ingress01      7.89G  12.4T  7.50G  /jails/test-ingress01
system/jails/test-nginx01        6.15G  12.4T  5.79G  /jails/test-nginx01
system/jails/testgit-ingress01   19.6G  12.4T  11.5G  /jails/testgit-ingress01
system/jails/testgit-nginx01     3.58G  12.4T  2.47G  /jails/testgit-nginx01
[dan@slocum:~] $
The corresponding system/iocage/* filesystems are no longer required. I want to retain them for a while longer, but I don’t want sanoid to take any snapshots.
First, tell sanoid to ignore
This is what I added to /usr/local/etc/sanoid/sanoid.conf:
# we make no snapshots of this
[system/trash]
        use_template = ignore
        recursive = yes
I need that recursive declaration to keep other directives for system from being applied to the filesystems under this point. You might be able to get away without it, but I find that when I want something to apply to this thing and everything under it, I specify recursive – it is both clear and documents my intentions.
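To illustrate what I mean, a parent stanza along these lines (purely hypothetical, not what my sanoid.conf actually says for system) is the sort of thing that would otherwise cascade down into system/trash:

# hypothetical parent stanza - with recursive set, its template would apply
# to everything below system, including the filesystems I just moved to system/trash
[system]
        use_template = production
        recursive = yes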
This template was already in that file:
[template_ignore]
        autoprune = no
        autosnap = no
        monitor = no
Creating the destination
This will be used in the upcoming zfs rename commands.
[dan@slocum:~] $ sudo zfs create system/trash
Read below for how I should / could have created this.
Renaming the filesystems
I did this via copy/paste, and I'm showing you my command history:
[dan@slocum:~] $ history | grep rename
  495  sudo zfs rename system/iocage/jails/dev-ingress01    system/trash/dev-ingress01
  496  sudo zfs rename system/iocage/jails/dev-nginx01      system/trash/dev-nginx01
  497  sudo zfs rename system/iocage/jails/devgit-ingress01 system/trash/devgit-ingress01
  498  sudo zfs rename system/iocage/jails/devgit-nginx01   system/trash/devgit-nginx01
  500  sudo zfs rename system/iocage/jails/mobile-nginx01   system/trash/mobile-nginx01
  501  sudo zfs rename system/iocage/jails/stage-ingress01  system/trash/stage-ingress01
  503  sudo zfs rename system/iocage/jails/test-ingress01   system/test-ingress01
  504  sudo zfs rename system/iocage/jails/test-nginx01     system/
  505  sudo zfs rename system/iocage/jails/test-nginx01     system/test-nginx01
  507  sudo zfs rename system/test-nginx01                  system/trash/test-nginx01
  508  history | grep rename
I have also column-aligned the results to make them easier to follow. You can see my error at 504 and 505 and the correction at 507.
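If I were doing this again, a small loop would avoid that sort of typo. Something along these lines would do it (a sketch, not what I actually ran; the jail names are the ones from the listing above):

# rename each retired jail's filesystem into system/trash
for jail in dev-ingress01 dev-nginx01 devgit-ingress01 devgit-nginx01 \
            mobile-nginx01 stage-ingress01 stage-nginx01 test-ingress01 test-nginx01; do
        sudo zfs rename "system/iocage/jails/${jail}" "system/trash/${jail}"
done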
What do I have there? This is the output after I deleted all the snapshots (in the next section):
[dan@slocum:~] $ zfs list -r system/trash
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
system/trash                        69.8G  12.4T   288K  /trash
system/trash/dev-ingress01          7.74G  12.4T   208K  /trash/dev-ingress01
system/trash/dev-ingress01/root     7.74G  12.4T  7.74G  /trash/dev-ingress01/root
system/trash/dev-nginx01            5.01G  12.4T   208K  /trash/dev-nginx01
system/trash/dev-nginx01/root       5.01G  12.4T  5.01G  /trash/dev-nginx01/root
system/trash/devgit-ingress01       5.66G  12.4T   208K  /trash/devgit-ingress01
system/trash/devgit-ingress01/root  5.66G  12.4T  5.66G  /trash/devgit-ingress01/root
system/trash/devgit-nginx01         5.35G  12.4T   208K  /trash/devgit-nginx01
system/trash/devgit-nginx01/root    5.35G  12.4T  5.35G  /trash/devgit-nginx01/root
system/trash/mobile-nginx01         3.63G  12.4T   200K  /trash/mobile-nginx01
system/trash/mobile-nginx01/root    3.63G  12.4T  3.63G  /trash/mobile-nginx01/root
system/trash/stage-ingress01        37.8G  12.4T   208K  /trash/stage-ingress01
system/trash/stage-ingress01/root   37.8G  12.4T  37.8G  /trash/stage-ingress01/root
system/trash/stage-nginx01          4.58G  12.4T   208K  /trash/stage-nginx01
system/trash/stage-nginx01/root     4.58G  12.4T  4.58G  /trash/stage-nginx01/root
[dan@slocum:~] $
Hmmm, I don’t want this stuff mounted.
[dan@slocum:~] $ sudo zfs set canmount=off system/trash
[dan@slocum:~] $ sudo zfs set mountpoint=none system/trash
[dan@slocum:~] $ zfs list -r system/trash
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
system/trash                        69.8G  12.4T   288K  none
system/trash/dev-ingress01          7.74G  12.4T   208K  none
system/trash/dev-ingress01/root     7.74G  12.4T  7.74G  none
system/trash/dev-nginx01            5.01G  12.4T   208K  none
system/trash/dev-nginx01/root       5.01G  12.4T  5.01G  none
system/trash/devgit-ingress01       5.66G  12.4T   208K  none
system/trash/devgit-ingress01/root  5.66G  12.4T  5.66G  none
system/trash/devgit-nginx01         5.35G  12.4T   208K  none
system/trash/devgit-nginx01/root    5.35G  12.4T  5.35G  none
system/trash/mobile-nginx01         3.63G  12.4T   200K  none
system/trash/mobile-nginx01/root    3.63G  12.4T  3.63G  none
system/trash/stage-ingress01        37.8G  12.4T   208K  none
system/trash/stage-ingress01/root   37.8G  12.4T  37.8G  none
system/trash/stage-nginx01          4.58G  12.4T   208K  none
system/trash/stage-nginx01/root     4.58G  12.4T  4.58G  none
[dan@slocum:~] $
There, better. I could have done this when I created the filesystem:
[dan@slocum:~] $ sudo zfs create -o canmount=off -o mountpoint=none system/trash
Destroying the snapshots
I am always cautious around zfs destroy, even when working on snapshots. An accidental tap of the ENTER key and you've lost the real data, not the snapshot.
I like using the -nv options on zfs destroy – that does a dry run (destroys nothing) and shows you what it would have done. For example:
[dan@slocum:~] $ sudo zfs destroy -nv -R system/trash@%
would destroy system/trash/test-nginx01@1
would destroy system/trash/test-nginx01@before.bacula.restore
would destroy system/trash/test-nginx01@ioc_update_12.2-RELEASE-p2_2021-02-24_15-06-06
would destroy system/trash/test-nginx01@autosnap_2021-03-15_00:00:24_daily
would destroy system/trash/test-nginx01@autosnap_2021-03-16_00:01:06_daily
would destroy system/trash/test-nginx01@autosnap_2021-03-17_00:00:43_daily
....
would destroy system/trash/dev-nginx01/root@autosnap_2021-03-24_14:00:19_frequently
would destroy system/trash/dev-nginx01/root@autosnap_2021-03-24_14:15:50_frequently
would destroy system/trash/dev-nginx01/root@autosnap_2021-03-24_14:30:41_frequently
would destroy system/trash/dev-nginx01/root@autosnap_2021-03-24_14:45:51_frequently
would reclaim 46.2G
[dan@slocum:~] $
Please read up on the differences between -R and -r: on a snapshot, -r also destroys snapshots of the same name in child datasets, while -R destroys all dependents, including any clones.
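If you want to check for clones before reaching for -R, the origin property shows where a cloned dataset came from. Something along these lines should do it (a sketch; adjust the pool name to suit):

# print name and origin for every filesystem/volume in the pool, keeping only
# rows whose origin (second column) points at something under system/trash
zfs get -r -t filesystem,volume -H -o name,value origin system | awk '$2 ~ "^system/trash"'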
How many snapshots is that?
[dan@slocum:~] $ zfs list -r -t snapshot system/trash | wc -l
    1053
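That count includes the header line that zfs list prints; adding -H (scripting mode, no header) counts only the snapshots themselves:

# -H suppresses the header and -o name keeps the output to one snapshot per line
zfs list -H -r -t snapshot -o name system/trash | wc -l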
I also did this just to see if anything outside this tree depended on these snapshots:
[dan@slocum:~] $ sudo zfs destroy -nv -R system/trash@% | grep -v system/trash
would reclaim 46.2G
OK, good to go. Here was the real command:
[dan@slocum:~] $ sudo zfs destroy -R system/trash@%
[dan@slocum:~] $
Done.
Still no snapshots
After I wrote the above, I went back to see if there were any snapshots:
[dan@slocum:~] $ zfs list -r -t snapshot system/trash
no datasets available
[dan@slocum:~] $
No, nothing. Just as intended.
Hope this helps.