Now that I’ve filled up data01, I bought some more SSDs and created another zpool. Today, on this snow-is-anticipated Saturday winter morning, I’m going to move some ZFS filesystems/datasets around.
Side note: I like the term dataset better than filesystem. man zfs talks about datasets, which can be a file system, a volume, a snapshot, or a bookmark. However, when I say filesystem, I often mean dataset. It’s a fine point, and the difference is usually not relevant to what I’m doing.
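For example, filesystems and snapshots are both datasets; listing by type makes the distinction visible. A quick illustration, using one of this host’s datasets:

# filesystems only
zfs list -t filesystem -r data01/acme-certs
# snapshots are datasets too
zfs list -t snapshot -r data01/acme-certs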
In this post:
- FreeBSD 13.1
- syncoid (installed as part of sanoid-2.1.0)
The top level datasets
These are the top-level datasets on r730-01, including those which I am going to move to the new zpool, data03.
[r730-01 dvl ~] % zfs list -d 1 -r data01
NAME                      USED  AVAIL  REFER  MOUNTPOINT
data01                   3.03T  1005G   205K  none
data01/acme-certs        30.2M  1005G   188K  none
data01/bacula            1.97T  1005G   188K  /jails/bacula-sd-03/usr/local/bacula
data01/certs-rsync       7.01G  1005G  4.42G  none
data01/jail-testing      1.85G  1005G   205K  none
data01/jail_within_jail   956M  1005G   205K  none
data01/librenms-rrd      54.2G  1005G  4.51G  /jails/besser/var/db/librenms/rrd
data01/pg03              27.8G  1005G   188K  none
data01/poudriere          271G  1005G   239K  /usr/local/poudriere
data01/repos             8.11G  1005G   188K  none
data01/samba              700G  1005G   222K  /jails/fileserver/usr/local/samba
data01/syncthing         14.0G  1005G  13.1G  /jails/fileserver/usr/local/syncthing
In short, everything but bacula and samba is moving out of this zpool.
Stop the jails
First, I stop the jails which use the data I am going to transfer:
[r730-01 dvl ~] % zfs list -r data01/acme-certs
NAME                                USED  AVAIL  REFER  MOUNTPOINT
data01/acme-certs                  30.2M  1005G   188K  none
data01/acme-certs/certs            16.4M  1005G  13.6M  /jails/certs/var/db/acme/certs
data01/acme-certs/certs-for-rsync  13.6M  1005G  8.29M  /jails/certs/var/db/certs-for-rsync
[r730-01 dvl ~] % jls | grep certs
     5  10.55.0.54      certs-rsync.int.unixathome.or /jails/certs-rsync
     6  10.55.0.112     certs.int.unixathome.org      /jails/certs
[r730-01 dvl ~] % sudo service jail stop certs-rsync certs
Stopping jails: certs-rsync certs.
[r730-01 dvl ~] %
syncoid over the data
This is the command I used:
[r730-01 dvl ~] % sudo syncoid data01/acme-certs data03/acme-certs
INFO: Sending oldest full snapshot data01/acme-certs@autosnap_2023-02-10_00:02:37_daily (~ 39 KB) to new target filesystem:
46.0KiB 0:00:00 [ 392KiB/s] [================================================================================] 117%
INFO: Updating new target filesystem with incremental data01/acme-certs@autosnap_2023-02-10_00:02:37_daily ... syncoid_r730-01.int.unixathome.org_2023-02-25:13:29:41-GMT00:00 (~ 42 KB):
43.0KiB 0:00:02 [16.1KiB/s] [================================================================================] 102%
[r730-01 dvl ~] % zfs list -r data03
NAME                USED  AVAIL  REFER  MOUNTPOINT
data03             2.72M  7.12T    96K  /data03
data03/acme-certs    88K  7.12T    88K  /data03/acme-certs
Ahh, I’m missing stuff because I didn’t use -r. Let’s try again:
[r730-01 dvl ~] % sudo syncoid -r data01/acme-certs data03/acme-certs
Sending incremental data01/acme-certs@syncoid_r730-01.int.unixathome.org_2023-02-25:13:29:41-GMT00:00 ... syncoid_r730-01.int.unixathome.org_2023-02-25:13:31:55-GMT00:00 (~ 4 KB):
1.52KiB 0:00:00 [16.3KiB/s] [=============================>                                                  ] 38%
INFO: Sending oldest full snapshot data01/acme-certs/certs@autosnap_2023-02-10_00:02:19_daily (~ 3.3 MB) to new target filesystem:
4.31MiB 0:00:00 [34.4MiB/s] [================================================================================] 130%
INFO: Updating new target filesystem with incremental data01/acme-certs/certs@autosnap_2023-02-10_00:02:19_daily ... syncoid_r730-01.int.unixathome.org_2023-02-25:13:31:56-GMT00:00 (~ 833 KB):
1.26MiB 0:00:02 [ 551KiB/s] [================================================================================] 155%
INFO: Sending oldest full snapshot data01/acme-certs/certs-for-rsync@autosnap_2023-02-10_00:01:11_daily (~ 2.7 MB) to new target filesystem:
3.49MiB 0:00:00 [30.2MiB/s] [================================================================================] 128%
INFO: Updating new target filesystem with incremental data01/acme-certs/certs-for-rsync@autosnap_2023-02-10_00:01:11_daily ... syncoid_r730-01.int.unixathome.org_2023-02-25:13:31:58-GMT00:00 (~ 1.3 MB):
3.91MiB 0:00:02 [1.60MiB/s] [================================================================================] 300%
[r730-01 dvl ~] % zfs list -r data03
NAME                                USED  AVAIL  REFER  MOUNTPOINT
data03                             20.8M  7.12T    96K  /data03
data03/acme-certs                  15.2M  7.12T    96K  /data03/acme-certs
data03/acme-certs/certs            8.39M  7.12T  6.95M  /data03/acme-certs/certs
data03/acme-certs/certs-for-rsync  6.62M  7.12T  4.13M  /data03/acme-certs/certs-for-rsync
[r730-01 dvl ~] %
One other thing I should have checked beforehand is recordsize. In this case, though, the source and destination have the same value:
[r730-01 dvl ~] % zfs get -t filesystem -r recordsize data01/acme-certs
NAME                               PROPERTY    VALUE  SOURCE
data01/acme-certs                  recordsize  128K   default
data01/acme-certs/certs            recordsize  128K   default
data01/acme-certs/certs-for-rsync  recordsize  128K   default
[r730-01 dvl ~] % zfs get -t filesystem -r recordsize data03
NAME                               PROPERTY    VALUE  SOURCE
data03                             recordsize  128K   default
data03/acme-certs                  recordsize  128K   default
data03/acme-certs/certs            recordsize  128K   default
data03/acme-certs/certs-for-rsync  recordsize  128K   default
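Had the values differed, I would have wanted to set recordsize on the destination before any data was written. A hypothetical sketch (the 16K value and the data03/pg03 target are just examples, e.g. for a PostgreSQL dataset):

# hypothetical example: a dataset that wants 16K records
sudo zfs set recordsize=16K data03/pg03
zfs get recordsize data03/pg03
# caveat: recordsize only affects blocks written after the change;
# blocks arriving via zfs send/receive keep their original size,
# so set it before loading data where possible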
Adjust the old mountpoints
This unmounts the old filesystems:
[r730-01 dvl ~] % sudo zfs set mountpoint=none data01/acme-certs/certs
[r730-01 dvl ~] % sudo zfs set mountpoint=none data01/acme-certs/certs-for-rsync
[r730-01 dvl ~] % zfs get -t filesystem -r mounted data01/acme-certs
NAME                               PROPERTY  VALUE  SOURCE
data01/acme-certs                  mounted   no     -
data01/acme-certs/certs            mounted   no     -
data01/acme-certs/certs-for-rsync  mounted   no     -
[r730-01 dvl ~] %
Mount the new filesystems
This mounts the new filesystems:
[r730-01 dvl ~] % sudo zfs set mountpoint=/jails/certs/var/db/acme/certs data03/acme-certs/certs
[r730-01 dvl ~] % sudo zfs set mountpoint=/jails/certs/var/db/certs-for-rsync data03/acme-certs/certs-for-rsync
[r730-01 dvl ~] % zfs list -r data03/acme-certs
NAME                                USED  AVAIL  REFER  MOUNTPOINT
data03/acme-certs                  15.2M  7.12T    96K  /data03/acme-certs
data03/acme-certs/certs            8.39M  7.12T  6.95M  /jails/certs/var/db/acme/certs
data03/acme-certs/certs-for-rsync  6.62M  7.12T  4.13M  /jails/certs/var/db/certs-for-rsync
[r730-01 dvl ~] % zfs get -t filesystem -r mounted data03/acme-certs
NAME                               PROPERTY  VALUE  SOURCE
data03/acme-certs                  mounted   yes    -
data03/acme-certs/certs            mounted   yes    -
data03/acme-certs/certs-for-rsync  mounted   yes    -
[r730-01 dvl ~] %
Oh, wait, I don’t want data03/acme-certs mounted:
[r730-01 dvl ~] % sudo zfs set mountpoint=none data03/acme-certs
[r730-01 dvl ~] % zfs list -r data03/acme-certs
NAME                                USED  AVAIL  REFER  MOUNTPOINT
data03/acme-certs                  15.2M  7.12T    96K  none
data03/acme-certs/certs            8.39M  7.12T  6.95M  /jails/certs/var/db/acme/certs
data03/acme-certs/certs-for-rsync  6.62M  7.12T  4.13M  /jails/certs/var/db/certs-for-rsync
There, that’s better.
Start the jails
Next, I started the jails I had stopped at the start of this process.
[r730-01 dvl ~] % sudo service jail start certs-rsync certs
Starting jails: certs-rsync certs.
I ssh’d into both jails and everything looked right to me. For example:
[certs-rsync dan ~] % ls -l /var/db/certs-for-rsync
total 18
drwxr-xr-x    2 anvil  anvil    6 2022.08.04 15:57 cert-fingerprints/
drwxr-xr-x  180 anvil  anvil  180 2023.01.25 14:55 certs/
drwxr-xr-x    2 anvil  anvil    2 2023.02.22 17:39 tmp/
Done. Now on to the rest. When I’m satisfied all is well, I’ll run zfs destroy -nrv data01/acme-certs before actually destroying that dataset.
The transfers
This was taken while the data was copying. It is the output of gstat -p:
dT: 1.002s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0    0.0| nvd0
    0      0      0      0    0.0      0      0    0.0    0.0| ada0
    0      0      0      0    0.0      0      0    0.0    0.0| ada1
    0   1878   1878  41589    0.1      0      0    0.0   12.2| da0
    0   1893   1893  41553    0.1      0      0    0.0   12.1| da1
    0   1879   1879  41749    0.1      0      0    0.0   12.5| da2
    0   1906   1906  41489    0.1      0      0    0.0   12.5| da3
    0   1897   1897  41533    0.1      0      0    0.0   12.2| da4
    0   1884   1884  41389    0.1      0      0    0.0   12.1| da5
    0   1895   1895  41681    0.1      0      0    0.0   12.2| da6
    0   1891   1891  41613    0.1      0      0    0.0   12.0| da7
   10   1006      0      0    0.0   1004 118096    9.8   99.4| da8
   10   1699      0      0    0.0   1697 207981    5.2   88.7| da9
   10   1738      0      0    0.0   1736 212739    5.0   86.9| da10
   10    796      0      0    0.0    794  91258    9.2   89.5| da11
    0     18      0      0    0.0     12    511    0.2    0.1| da12
    0     18      0      0    0.0     12    511    0.2    0.1| da13
    0      0      0      0    0.0      0      0    0.0    0.0| cd0
da0..da7 are the devices in the old zpool (data01), busy reading.
da8..da10 are devices in the new zpool (data03), busy writing.
I recommend not basing any conclusions on this one-second snapshot of input/output.
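If you want to confirm which devices belong to which pool, zpool status lists the member vdevs; something like:

zpool status data01 | grep -E 'da[0-9]+'
zpool status data03 | grep -E 'da[0-9]+'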
Destroying the old datasets
I started by doing a zpool checkpoint data01. This would allow me to reverse course if I accidentally destroyed the wrong thing. I should have done that on the data03 zpool as well. Hmm. Be aware, though: if you used that checkpoint to recover, you’d lose any changes made in other filesystems in that pool since the checkpoint; i.e. you might be better off doing this in single-user mode, just in case.
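For reference, this is roughly the checkpoint lifecycle (see zpool-checkpoint(8)); the rewind requires an export/import cycle, which is part of why single-user mode appeals:

# take the checkpoint before doing anything destructive
sudo zpool checkpoint data01
# to roll the whole pool back to that point:
sudo zpool export data01
sudo zpool import --rewind-to-checkpoint data01
# once satisfied, discard the checkpoint to free the space it holds
sudo zpool checkpoint -d data01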
One pro-tip to pass on: zfs destroy -nrv data02/poudriere will list what would be destroyed. It runs as non-root, and therefore cannot possibly affect anything (unless your user has been granted special permissions). The real safety comes from the -n option, a ‘dry-run (“No-op”) deletion’, as found in man 8 zfs-destroy.
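Put together, the destroy step looks something like this, using the dataset from earlier as an example:

# dry run: -n shows what would go, -r recurses, -v is verbose
zfs destroy -nrv data01/acme-certs
# only after reviewing that output, the real thing:
sudo zfs destroy -rv data01/acme-certs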
Lather. Rinse. Repeat.
I repeated this process, more or less, for all the datasets I wanted to copy (a condensed sketch of the routine follows the pool listing below). At the end, I had this:
[r730-01 dvl ~] % zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data01  5.81T  3.88T  1.93T        -         -     3%    66%  1.00x    ONLINE  -
data02  1.73T   363G  1.38T        -         -     0%    20%  1.00x    ONLINE  -
data03  7.25T   513G  6.75T        -         -     0%     6%  1.00x    ONLINE  -
zroot    424G  2.26G   422G        -         -     1%     0%  1.00x    ONLINE  -
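As promised, here is a condensed sketch of that per-dataset routine (the dataset, jail, and mountpoint names are placeholders; each dataset had its own):

sudo service jail stop somejail
sudo syncoid -r data01/example data03/example
sudo zfs set mountpoint=none data01/example
sudo zfs set mountpoint=/jails/somejail/some/path data03/example
sudo service jail start somejail
# later, after verification and a zpool checkpoint:
zfs destroy -nrv data01/example
sudo zfs destroy -rv data01/example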
Thanks for reading. Hope it helps.