Earlier today, I moved two file systems from one zpool to another. I use sysutils/sanoid to take and manage snapshots of the more important data. In this post, I’ll update that configuration.
- Migrating ZFS filesystems from one zpool to another – same host
- Migrating data02/freshports/dev-ingress01 to another zpool
These are the items which need updates:
[20:04 r730-01 dvl ~] % sudo cp -p /usr/local/etc/sanoid/sanoid.conf /usr/local/etc/sanoid/sanoid.conf.backup.2026.04.19
[20:08 r730-01 dvl ~] % sudoedit /usr/local/etc/sanoid/sanoid.conf
[20:03 r730-01 dvl ~] % grep vm /usr/local/etc/sanoid/sanoid.conf
[data02/vm]
use_template = vm
[template_vm]
[20:03 r730-01 dvl ~] % grep data02/freshports/dev-ingress01 /usr/local/etc/sanoid/sanoid.conf
#[data02/freshports/dev-ingress01]
[data02/freshports/dev-ingress01/ingress]
[data02/freshports/dev-ingress01/freshports/message-queues]
[data02/freshports/dev-ingress01/modules]
[data02/freshports/dev-ingress01/scripts]
[data02/freshports/dev-ingress01/dvl-src]
In this post:
- FreeBSD 15.0
- sanoid-2.3.0
Fortunately, this is easy.
Here’s what I did:
[20:08 r730-01 dvl ~] % diff -ruN /usr/local/etc/sanoid/sanoid.conf.backup.2026.04.19 /usr/local/etc/sanoid/sanoid.conf
--- /usr/local/etc/sanoid/sanoid.conf.backup.2026.04.19 2025-12-10 15:03:51.312393000 +0000
+++ /usr/local/etc/sanoid/sanoid.conf 2026-04-19 20:08:48.625320000 +0000
@@ -5,15 +5,15 @@
# dev.freshports
#
-#[data02/freshports/dev-ingress01]
+#[data04/freshports/dev-ingress01]
# use_template = ignore
# recursive = yes
-[data02/freshports/dev-ingress01/ingress]
+[data04/freshports/dev-ingress01/ingress]
recursive = yes
use_template = freshports
-[data02/freshports/dev-ingress01/freshports/message-queues]
+[data04/freshports/dev-ingress01/freshports/message-queues]
use_template = freshports
# this has to be above other children, such as data02/freshports/jailed/dev-nginx01
@@ -24,13 +24,13 @@
# development areas within dev.freshports:
-[data02/freshports/dev-ingress01/modules]
+[data04/freshports/dev-ingress01/modules]
use_template = development
-[data02/freshports/dev-ingress01/scripts]
+[data04/freshports/dev-ingress01/scripts]
use_template = development
-[data02/freshports/dev-ingress01/dvl-src]
+[data04/freshports/dev-ingress01/dvl-src]
use_template = development
@@ -122,7 +122,7 @@
recursive = yes
use_template = ignore
-[data02/vm]
+[data04/vm]
use_template = vm
recursive = zfs
[20:09 r730-01 dvl ~] %
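In hindsight, the hand edit could have been scripted. Here is a minimal sed sketch (the sample config is abbreviated and illustrative, not the real sanoid.conf), with the patterns anchored so other data02 entries, such as data02/freshports/dev-nginx01, are left alone:

```shell
# Rewrite only the dataset paths that moved; a scratch copy stands in
# for /usr/local/etc/sanoid/sanoid.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[data02/freshports/dev-ingress01/ingress]
[data02/freshports/dev-nginx01/www/freshports]
[data02/vm]
EOF

# Only the migrated sections change; the dev-nginx01 entry survives.
sed -e 's|\[data02/freshports/dev-ingress01|[data04/freshports/dev-ingress01|' \
    -e 's|\[data02/vm\]|[data04/vm]|' \
    "$conf"

rm "$conf"
```

On the real file, I'd write the sed output to a new file and diff it against the original before copying it into place, much like the diff shown above.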
Warnings?
I initially thought: I’ll get warnings from sanoid about missing snapshots. Well, for the hourly snapshots perhaps, because it’s been more than an hour since I copied that stuff over.
I’ll wait and see.
But wait, there’s more!
The above is the sanoid configuration. I also use syncoid (sanoid's companion tool for sending snapshots to other hosts).
It took a bit of hunting, but I found it: this crontab is on the destination (pulling) host.
[20:16 r730-03 dvl /usr/local/etc/cron.d] % cat syncoid
# mail any output to `dan', no matter whose crontab this is
MAILTO=dan@langille.org
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

# that's run as the user syncoid, the command syncoid
#04 * * * * syncoid /usr/local/bin/syncoid --sshkey=~syncoid/.ssh/id_ed25519 -r --no-sync-snap --compress=lzo --quiet syncoid@slocum.int.unixathome.org:system/vm/hass system/backups-by-snapshot/syncoid/homeassistant
18 * * * * syncoid /usr/bin/lockf -t 0 /tmp/.syncoid-homeassistant-r730-01 /usr/local/bin/syncoid --sshkey=~syncoid/.ssh/id_ed25519 -r --no-privilege-elevation --no-sync-snap --quiet --compress=lzo syncoid@r730-01.int.unixathome.org:data02/vm/hass data01/snapshots/homeassistant-r730-01
[20:16 r730-03 dvl /usr/local/etc/cron.d] %
I changed the data02 to data04 and saved the file.
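After an edit like this, it's worth sweeping for any other files that still name the old pool. A sketch using a hypothetical find_old_refs helper, demonstrated here on a scratch directory standing in for /usr/local/etc/cron.d:

```shell
# List files that still reference the migrated datasets.
find_old_refs() {
    grep -rl -e 'data02/vm' -e 'data02/freshports/dev-ingress01' "$@"
}

# Demonstration on a scratch directory; on the real hosts you'd point
# this at /usr/local/etc/cron.d, /usr/local/etc/sanoid, and so on.
dir=$(mktemp -d)
printf '%s\n' '18 * * * * syncoid ... data02/vm/hass ...' > "$dir/syncoid"
printf '%s\n' 'MAILTO=dan@langille.org' > "$dir/other"
find_old_refs "$dir"     # prints only $dir/syncoid
rm -r "$dir"
```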
For extra comfort
For extra comfort in knowing that nothing else is using the old filesystems, I renamed them:
[20:19 r730-01 dvl ~] % sudo zfs rename data02/vm data02/vm.DELETE.ME.after.2026.05.15
[20:19 r730-01 dvl ~] % sudo zfs rename data02/freshports/dev-ingress01 data02/freshports/dev-ingress01.DELETE.ME.after.2026.05.15
[20:20 r730-01 dvl ~] %
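Generating the rename commands rather than typing them reduces the chance of a suffix typo. A small sketch:

```shell
# Print the rename commands with a review-by date baked into the new
# name; pipe the output through sudo sh once it looks right.
suffix="DELETE.ME.after.2026.05.15"
for ds in data02/vm data02/freshports/dev-ingress01; do
    echo "zfs rename $ds $ds.$suffix"
done
```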
Now I wait, and see what comes.
Permissions, that's what comes.
Permissions
This cronjob output arrived via email recently:
cannot hold: permission denied
cannot send 'data04/vm/hass': permission denied
cannot receive: failed to read from stream
CRITICAL ERROR: ssh -i ~syncoid/.ssh/id_ed25519 -S /tmp/syncoid-syncoidr730-01intunixathomeorg-1776633480-80710-7579 syncoid@r730-01.int.unixathome.org ' zfs send -I '"'"'data04/vm/hass'"'"'@'"'"'send.to.data04.2'"'"' '"'"'data04/vm/hass'"'"'@'"'"'autosnap_2026-04-19_21:15:14_frequently'"'"' | lzop | mbuffer -q -s 128k -m 16M' | mbuffer -q -s 128k -m 16M | lzop -dfc | zfs receive -s -F 'data01/snapshots/homeassistant-r730-01' 2>&1 failed: 256
I went looking and fixed:
[21:29 r730-01 dvl ~] % zfs allow -l data02/vm.DELETE.ME.after.2026.05.15/hass
---- Permissions on data02/vm.DELETE.ME.after.2026.05.15/hass --------
Local+Descendent permissions:
	user syncoid hold,send,snapshot
[21:29 r730-01 dvl ~] % zfs allow -l data04/vm/hass
[21:30 r730-01 dvl ~] % sudo zfs allow -u syncoid hold,send,snapshot data04/vm/hass
[21:30 r730-01 dvl ~] %
More waiting followed.
Bacula
Next, Bacula told me I'd messed up. This is exactly why I renamed the datasets: anything still using them by name would error out. Case in point:
20-Apr 03:05 r730-01-fd JobId 391910: shell command: run ClientBeforeJob "/usr/local/sbin/snapshots-for-backup.sh create"
20-Apr 03:05 r730-01-fd JobId 391910: ClientBeforeJob: cannot open 'data02/freshports/dev-ingress01/dvl-src': dataset does not exist
These are what I missed:
[12:27 r730-01 dvl ~] % grep data02/freshports/dev-ingress01 /usr/local/sbin/snapshots-for-backup.sh
data02/freshports/dev-ingress01/dvl-src
data02/freshports/dev-ingress01/freshports/message-queues/archive
data02/freshports/dev-ingress01/ingress/latest_commits
data02/freshports/dev-ingress01/modules
data02/freshports/dev-ingress01/scripts
Here’s how I fixed that:
[12:27 r730-01 dvl ~] % sudo cp -p /usr/local/sbin/snapshots-for-backup.sh /usr/local/sbin/snapshots-for-backup.sh.before.moving.data02-freshports-dev-ingress01.to.data04
[12:30 r730-01 dvl ~] % sudoedit /usr/local/sbin/snapshots-for-backup.sh
[12:31 r730-01 dvl ~] % grep data02/freshports/dev-ingress01 /usr/local/sbin/snapshots-for-backup.sh
[12:31 r730-01 dvl ~] % grep freshports/dev-ingress01 /usr/local/sbin/snapshots-for-backup.sh
data04/freshports/dev-ingress01/dvl-src
data04/freshports/dev-ingress01/freshports/message-queues/archive
data04/freshports/dev-ingress01/ingress/latest_commits
data04/freshports/dev-ingress01/modules
data04/freshports/dev-ingress01/scripts
[12:32 r730-01 dvl ~] % diff -ruN /usr/local/sbin/snapshots-for-backup.sh.before.moving.data02-freshports-dev-ingress01.to.data04 /usr/local/sbin/snapshots-for-backup.sh
--- /usr/local/sbin/snapshots-for-backup.sh.before.moving.data02-freshports-dev-ingress01.to.data04	2024-02-21 21:56:26.175139000 +0000
+++ /usr/local/sbin/snapshots-for-backup.sh	2026-04-20 12:31:03.603858000 +0000
@@ -7,11 +7,11 @@
 SNAPNAME="snapshot-for-backup"
 
 DATASETS="
-data02/freshports/dev-ingress01/dvl-src
-data02/freshports/dev-ingress01/freshports/message-queues/archive
-data02/freshports/dev-ingress01/ingress/latest_commits
-data02/freshports/dev-ingress01/modules
-data02/freshports/dev-ingress01/scripts
+data04/freshports/dev-ingress01/dvl-src
+data04/freshports/dev-ingress01/freshports/message-queues/archive
+data04/freshports/dev-ingress01/ingress/latest_commits
+data04/freshports/dev-ingress01/modules
+data04/freshports/dev-ingress01/scripts
 data02/freshports/dev-nginx01/www/freshports
 data02/freshports/dev-nginx01/www/freshsource
 data02/freshports/dvl-ingress01/dvl-src
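The script itself isn't shown above, but its shape is implied by the DATASETS list and the create/destroy arguments Bacula passes it. A minimal sketch, with a ZFS variable added here (not in the original) so the loop can be dry-run anywhere; set ZFS=zfs on the real host:

```shell
#!/bin/sh
# Sketch of snapshots-for-backup.sh: create or destroy a fixed-name
# snapshot on each dataset Bacula reads via .zfs/snapshot paths.
# The dataset list is abbreviated; the real script lists many more.
SNAPNAME="snapshot-for-backup"
DATASETS="
data04/freshports/dev-ingress01/dvl-src
data04/freshports/dev-ingress01/scripts
"
ZFS=${ZFS:-echo}    # echo = safe dry run; use ZFS=zfs in production

snapshots() {
    for ds in $DATASETS; do
        case "$1" in
        create)  $ZFS snapshot "$ds@$SNAPNAME" ;;
        destroy) $ZFS destroy  "$ds@$SNAPNAME" ;;
        esac
    done
}

snapshots create    # dry run prints one "snapshot <dataset>@..." per dataset
```

Because Bacula backs up the files under each dataset's .zfs/snapshot/snapshot-for-backup directory, a stale dataset name in this list fails loudly in ClientBeforeJob, which is what the job log above shows.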
Then I reran the backup job:
*run
A job name must be specified.
The defined Job resources are:
1: RestoreFiles
2: Verify Test Job
3: bacula basic
4: bacula home
5: crey basic
6: CopyToTape-Inc-Just-One
7: dbclone basic
8: dbclone home
9: dbclone databases
10: BackupCatalog
11: fileserver basic
12: gw01 basic
13: gw01 test compression
14: jester
15: mailjail snapshot
16: mydev basic
17: mydev home dir
18: r720-01 jail snapshots
19: r720-02 basic
20: r720-02 jail snapshots
21: r730-01 basic
22: r730-01 jail snapshots
23: r730-01 snapshots
24: repo-svn-snapshots
25: repo-git-snapshots
26: unifi
27: r730-03 basic
28: r730-03 jail snapshots
29: ansible
30: slocum basic
31: slocum home
32: slocum jail snapshots
33: slocum dev data snapshots
34: svn basic
35: svn everything
36: tallboy basic
37: tallboy home
38: tallboy jail snapshots
39: tallboy Papers Jail
40: tallboy Papers Jail PostgreSQL Configuration
41: tape02 basic
42: webserver basic
43: x8dtu basic
44: x8dtu jail snapshots
45: x8dtu message archive snapshots
46: zuul basic
47: zuul jail snapshots
48: CopyToTape-Inc
49: InitiateCopyToSD03-Inc
50: CopyToSD03-Inc
51: CopyToTape-Diff
52: InitiateCopyToBaculaSD03-Diff
53: CopyToSD03-Diff
54: CopyToTape-Full
55: CopyToTape-Full-LTO4
56: CopyToTape-Full-Just-One-tape-01
57: CopyToTape-Full-specific-jobs
58: InitiateCopyToBaculaSD03-Full
59: CopyToSD03-Full
60: CopyToTape-KeepThreeMonths
61: CopyToSD04
62: CopyToSD04-testing-deleteme
Select Job resource (1-62): 23
Run Backup job
JobName: r730-01 snapshots
Level: Incremental
Client: r730-01-fd
FileSet: r730-01 snapshots
Pool: FullFile-04 (From Job resource)
Storage: bacula-sd-04-FullFile (From Pool resource)
When: 2026-04-20 12:33:31
Priority: 10
OK to run? (Yes/mod/no): yes
Job queued. JobId=391927
Which ran without issue:
*m
20-Apr 12:33 bacula-dir JobId 391927: Start Backup JobId 391927, Job=r730-01_snapshots.2026-04-20_12.33.33_58
20-Apr 12:33 bacula-dir JobId 391927: Connected to Storage "bacula-sd-04-IncrFile" at bacula-sd-04.int.unixathome.org:9103 with TLS
20-Apr 12:33 bacula-dir JobId 391927: There are no more Jobs associated with Volume "IncrAuto-04-14564". Marking it purged.
20-Apr 12:33 bacula-dir JobId 391927: All records pruned from Volume "IncrAuto-04-14564"; marking it "Purged"
20-Apr 12:33 bacula-dir JobId 391927: Recycled volume "IncrAuto-04-14564"
20-Apr 12:33 bacula-dir JobId 391927: Using Device "vDrive-IncrFile-9" to write.
20-Apr 12:33 bacula-dir JobId 391927: Connected to Client "r730-01-fd" at r730-01.int.unixathome.org:9102 with TLS
20-Apr 12:33 r730-01-fd JobId 391927: Connected to Storage at bacula-sd-04.int.unixathome.org:9103 with TLS
20-Apr 12:33 r730-01-fd JobId 391927: shell command: run ClientBeforeJob "/usr/local/sbin/snapshots-for-backup.sh create"
20-Apr 12:33 bacula-sd-04 JobId 391927: Recycled volume "IncrAuto-04-14564" on File device "vDrive-IncrFile-9" (/usr/local/bacula/volumes/IncrFile), all previous data lost.
20-Apr 12:33 bacula-dir JobId 391927: Max Volume jobs=1 exceeded. Marking Volume "IncrAuto-04-14564" as Used.
20-Apr 12:33 r730-01-fd JobId 391927: shell command: run ClientAfterJob "/usr/local/sbin/snapshots-for-backup.sh destroy"
20-Apr 12:33 bacula-sd-04 JobId 391927: Elapsed time=00:00:09, Transfer rate=1.779 K Bytes/second
20-Apr 12:33 bacula-sd-04 JobId 391927: Sending spooled attrs to the Director. Despooling 1,262 bytes ...
20-Apr 12:33 bacula-dir JobId 391927: Bacula bacula-dir 15.0.3 (25Mar25):
  Build OS:               amd64-portbld-freebsd15.0 freebsd 15.0-RELEASE
  JobId:                  391927
  Job:                    r730-01_snapshots.2026-04-20_12.33.33_58
  Backup Level:           Incremental, since=2026-04-20 03:05:51
  Client:                 "r730-01-fd" 15.0.3 (25Mar25) amd64-portbld-freebsd15.0,freebsd,15.0-RELEASE
  FileSet:                "r730-01 snapshots" 2023-12-25 01:30:49
  Pool:                   "IncrFile-04" (From Job IncPool override)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "bacula-sd-04-IncrFile" (From Pool resource)
  Scheduled time:         20-Apr-2026 12:33:31
  Start time:             20-Apr-2026 12:33:36
  End time:               20-Apr-2026 12:33:47
  Elapsed time:           11 secs
  Priority:               10
  FD Files Written:       4
  SD Files Written:       4
  FD Bytes Written:       15,224 (15.22 KB)
  SD Bytes Written:       16,014 (16.01 KB)
  Rate:                   1.4 KB/s
  Software Compression:   None
  Comm Line Compression:  31.6% 1.5:1
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               no
  Volume name(s):         IncrAuto-04-14564
  Volume Session Id:      445
  Volume Session Time:    1775831242
  Last Volume Bytes:      16,899 (16.89 KB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK

20-Apr 12:33 bacula-dir JobId 391927: Begin pruning Jobs older than 3 years .
20-Apr 12:33 bacula-dir JobId 391927: No Jobs found to prune.
20-Apr 12:33 bacula-dir JobId 391927: Begin pruning Files.
20-Apr 12:33 bacula-dir JobId 391927: No Files found to prune.
20-Apr 12:33 bacula-dir JobId 391927: End auto prune.
Only four files?
Let’s see.
*list files jobid=391927
+----------------------------------------------------------------------------------------------------------+
| filename                                                                                                 |
+----------------------------------------------------------------------------------------------------------+
| /jails/test-ingress01/var/db/ingress/latest_commits/.zfs/snapshot/snapshot-for-backup/latest_commits.txt |
| /jails/dev-ingress01/var/db/ingress/latest_commits/.zfs/snapshot/snapshot-for-backup/latest_commits.txt  |
| /jails/stage-ingress01/var/db/ingress/latest_commits/.zfs/snapshot/snapshot-for-backup/latest_commits.txt |
| /jails/dvl-ingress01/var/db/ingress/latest_commits/.zfs/snapshot/snapshot-for-backup/latest_commits.txt  |
+----------------------------------------------------------------------------------------------------------+
+---------+-------------------+---------------------+------+-------+----------+----------+-----------+
| jobid   | name              | starttime           | type | level | jobfiles | jobbytes | jobstatus |
+---------+-------------------+---------------------+------+-------+----------+----------+-----------+
| 391,927 | r730-01 snapshots | 2026-04-20 12:33:36 | B    | I     | 4        | 15,224   | T         |
+---------+-------------------+---------------------+------+-------+----------+----------+-----------+
That seems right, but an incremental alone isn't quite enough proof. Let's try again with a full backup.
*run job="r730-01 snapshots" level=full
Run Backup job
JobName:  r730-01 snapshots
Level:    Full
Client:   r730-01-fd
FileSet:  r730-01 snapshots
Pool:     FullFile-04 (From Job resource)
Storage:  bacula-sd-04-FullFile (From Pool resource)
When:     2026-04-20 12:37:37
Priority: 10
OK to run? (Yes/mod/no): yes
Job queued. JobId=391928
20-Apr 12:37 bacula-dir JobId 391928: Start Backup JobId 391928, Job=r730-01_snapshots.2026-04-20_12.37.40_00
20-Apr 12:37 bacula-dir JobId 391928: Connected to Storage "bacula-sd-04-FullFile" at bacula-sd-04.int.unixathome.org:9103 with TLS
20-Apr 12:37 bacula-dir JobId 391928: There are no more Jobs associated with Volume "FullAuto-04-15227". Marking it purged.
20-Apr 12:37 bacula-dir JobId 391928: All records pruned from Volume "FullAuto-04-15227"; marking it "Purged"
20-Apr 12:37 bacula-dir JobId 391928: Recycled volume "FullAuto-04-15227"
20-Apr 12:37 bacula-dir JobId 391928: Using Device "vDrive-FullFile-0" to write.
20-Apr 12:37 bacula-dir JobId 391928: Connected to Client "r730-01-fd" at r730-01.int.unixathome.org:9102 with TLS
20-Apr 12:37 r730-01-fd JobId 391928: Connected to Storage at bacula-sd-04.int.unixathome.org:9103 with TLS
20-Apr 12:37 r730-01-fd JobId 391928: shell command: run ClientBeforeJob "/usr/local/sbin/snapshots-for-backup.sh create"
20-Apr 12:37 bacula-sd-04 JobId 391928: Recycled volume "FullAuto-04-15227" on File device "vDrive-FullFile-0" (/usr/local/bacula/volumes/FullFile), all previous data lost.
...
20-Apr 13:12 bacula-sd-04 JobId 391928: Elapsed time=00:34:51, Transfer rate=149.2 M Bytes/second
20-Apr 13:12 bacula-sd-04 JobId 391928: Sending spooled attrs to the Director. Despooling 138,242,853 bytes ...
20-Apr 13:13 bacula-dir JobId 391928: Bacula bacula-dir 15.0.3 (25Mar25):
  Build OS:               amd64-portbld-freebsd15.0 freebsd 15.0-RELEASE
  JobId:                  391928
  Job:                    r730-01_snapshots.2026-04-20_12.37.40_00
  Backup Level:           Full
  Client:                 "r730-01-fd" 15.0.3 (25Mar25) amd64-portbld-freebsd15.0,freebsd,15.0-RELEASE
  FileSet:                "r730-01 snapshots" 2023-12-25 01:30:49
  Pool:                   "FullFile-04" (From Job FullPool override)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "bacula-sd-04-FullFile" (From Pool resource)
  Scheduled time:         20-Apr-2026 12:37:37
  Start time:             20-Apr-2026 12:37:42
  End time:               20-Apr-2026 13:13:03
  Elapsed time:           35 mins 21 secs
  Priority:               10
  FD Files Written:       350,274
  SD Files Written:       350,274
  FD Bytes Written:       311,942,537,575 (311.9 GB)
  SD Bytes Written:       312,039,842,393 (312.0 GB)
  Rate:                   147073.3 KB/s
  Software Compression:   None
  Comm Line Compression:  29.3% 1.4:1
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               no
  Volume name(s):         FullAuto-04-15227|FullAuto-04-15299|FullAuto-04-15328|FullAuto-04-15336|FullAuto-04-15330|FullAuto-04-15344|FullAuto-04-15345|FullAuto-04-15354|FullAuto-04-15237|FullAuto-04-15228|FullAuto-04-15223|FullAuto-04-15355
  Volume Session Id:      446
  Volume Session Time:    1775831242
  Last Volume Bytes:      17,042,951,664 (17.04 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK

20-Apr 13:13 bacula-dir JobId 391928: Begin pruning Jobs older than 3 years .
20-Apr 13:13 bacula-dir JobId 391928: No Jobs found to prune.
20-Apr 13:13 bacula-dir JobId 391928: Begin pruning Files.
20-Apr 13:13 bacula-dir JobId 391928: No Files found to prune.
20-Apr 13:13 bacula-dir JobId 391928: End auto prune.
That seems more likely.