I have a problem with a zpool. To be clear, it isn't really a problem: I'm not aware of any I/O throttling or other symptoms. It is just something I would like to change.
[18:26 r720-02 dvl ~] % zpool status data01
  pool: data01
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0B in 00:15:40 with 0 errors on Mon Dec 8 04:05:38 2025
config:

        NAME                         STATE     READ WRITE CKSUM
        data01                       ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            gpt/S59VNS0N809087J_S00  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/S59VNJ0N631973D_S01  ONLINE       0     0     0  block size: 512B configured, 4096B native
          mirror-1                   ONLINE       0     0     0
            gpt/S5B3NDFN807383E_S02  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/S5B3NDFN807386P_S03  ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors
I will point out that this problem was not reported when the zpool was created five years ago, in Oct 2020. Perhaps this is a situation which ZFS has only recently started to report.
It was discussed on Mastodon, and the consensus was that recreating the zpool was the only option.
The plan:
- copy everything off the zpool
- destroy the zpool
- create the zpool
- copy everything onto the zpool
Sounds easy!
In this post:
- FreeBSD 14.3
One more time, with a bit of detail
Let me repeat what I just said above, but with slightly more detail.
- zfs send the data01 zpool from r720-02 (in a data center in New York) to r730-01 (in a rack, in my basement)
- repeat that send | recv, picking up any changes since that initial lengthy copy
- destroy the zpool on r720-02
- create the zpool, making sure we have solved the block size issue (see the sketch after this list)
- copy everything back, using another send | recv
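For step four, the fix for the "512B configured, 4096B native" complaint is to recreate the pool with 4096-byte sectors (ashift=12). A sketch, reusing the gpt labels from the zpool status output above; on FreeBSD, setting the vfs.zfs.min_auto_ashift sysctl to 12 achieves the same thing for pools created afterwards:

# recreate the pool, forcing 4096-byte sectors
zpool create -o ashift=12 data01 \
    mirror gpt/S59VNS0N809087J_S00 gpt/S59VNJ0N631973D_S01 \
    mirror gpt/S5B3NDFN807383E_S02 gpt/S5B3NDFN807386P_S03

The copy steps are the classic send | recv pattern, which syncoid (used below) automates. Roughly, with illustrative snapshot names:

# initial full copy, then an incremental catch-up
zfs snapshot -r data01@move1
zfs send -R data01@move1 | ssh root@10.55.0.141 zfs receive -uF data04/r720-02
zfs snapshot -r data01@move2
zfs send -R -I @move1 data01@move2 | ssh root@10.55.0.141 zfs receive data04/r720-02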
Of note:
- I will be allowing ssh as root for this purpose. This is not recommended, except in certain temporary circumstances, such as this.
- ssh will be tightly controlled, by ssh-key only
I'm not going into much detail about how I'm doing all of that.
Allowing root ssh, by key only
I'm setting the bar high here: if you don't know how to do this, you probably shouldn't be doing it.
I created an ssh key on the sending host. It has a passphrase.
I added that public key to the root account on the receiving host. I enabled root ssh on the receiving host.
All of that will be disabled and reversed after I have finished this copy/restore. This is not something you should leave enabled forever; it will come back to bite you later.
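For the record, the moving parts are small. A sketch, assuming stock OpenSSH on FreeBSD; 'prohibit-password' permits key authentication for root but refuses passwords:

# on the sending host: a dedicated key, protected by a passphrase
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519

# on the receiving host: append the public key to /root/.ssh/authorized_keys,
# set 'PermitRootLogin prohibit-password' in /etc/ssh/sshd_config, then:
service sshd reload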
Creating the destination dataset
This is where we will send data.
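Nothing fancy was needed; something like this on r730-01 (a sketch, as I didn't record the exact command; compression and recordsize are inherited from data04):

zfs create data04/r720-02

Its properties, for the record: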
root@r730-01:~ # zfs get all data04/r720-02
NAME            PROPERTY              VALUE                  SOURCE
data04/r720-02  type                  filesystem             -
data04/r720-02  creation              Thu Dec 11 18:15 2025  -
data04/r720-02  used                  205K                   -
data04/r720-02  available             17.3T                  -
data04/r720-02  referenced            205K                   -
data04/r720-02  compressratio         1.00x                  -
data04/r720-02  mounted               yes                    -
data04/r720-02  quota                 none                   default
data04/r720-02  reservation           none                   default
data04/r720-02  recordsize            128K                   inherited from data04
data04/r720-02  mountpoint            /data04/r720-02        default
data04/r720-02  sharenfs              off                    default
data04/r720-02  checksum              on                     default
data04/r720-02  compression           zstd                   inherited from data04
data04/r720-02  atime                 on                     default
data04/r720-02  devices               on                     default
data04/r720-02  exec                  on                     default
data04/r720-02  setuid                on                     default
data04/r720-02  readonly              off                    default
data04/r720-02  jailed                off                    default
data04/r720-02  snapdir               hidden                 default
data04/r720-02  aclmode               discard                default
data04/r720-02  aclinherit            restricted             default
data04/r720-02  createtxg             36211                  -
data04/r720-02  canmount              on                     default
data04/r720-02  xattr                 on                     default
data04/r720-02  copies                1                      default
data04/r720-02  version               5                      -
data04/r720-02  utf8only              off                    -
data04/r720-02  normalization         none                   -
data04/r720-02  casesensitivity       sensitive              -
data04/r720-02  vscan                 off                    default
data04/r720-02  nbmand                off                    default
data04/r720-02  sharesmb              off                    default
data04/r720-02  refquota              none                   default
data04/r720-02  refreservation        none                   default
data04/r720-02  guid                  8698413625513271311    -
data04/r720-02  primarycache          all                    default
data04/r720-02  secondarycache        all                    default
data04/r720-02  usedbysnapshots       0B                     -
data04/r720-02  usedbydataset         205K                   -
data04/r720-02  usedbychildren        0B                     -
data04/r720-02  usedbyrefreservation  0B                     -
data04/r720-02  logbias               latency                default
data04/r720-02  objsetid              3399                   -
data04/r720-02  dedup                 off                    default
data04/r720-02  mlslabel              none                   default
data04/r720-02  sync                  standard               default
data04/r720-02  dnodesize             legacy                 default
data04/r720-02  refcompressratio      1.00x                  -
data04/r720-02  written               205K                   -
data04/r720-02  logicalused           42.5K                  -
data04/r720-02  logicalreferenced     42.5K                  -
data04/r720-02  volmode               default                default
data04/r720-02  filesystem_limit      none                   default
data04/r720-02  snapshot_limit        none                   default
data04/r720-02  filesystem_count      none                   default
data04/r720-02  snapshot_count        none                   default
data04/r720-02  snapdev               hidden                 default
data04/r720-02  acltype               nfsv4                  default
data04/r720-02  context               none                   default
data04/r720-02  fscontext             none                   default
data04/r720-02  defcontext            none                   default
data04/r720-02  rootcontext           none                   default
data04/r720-02  relatime              on                     default
data04/r720-02  redundant_metadata    all                    default
data04/r720-02  overlay               on                     default
data04/r720-02  encryption            off                    default
data04/r720-02  keylocation           none                   default
data04/r720-02  keyformat             none                   default
data04/r720-02  pbkdf2iters           0                      default
data04/r720-02  special_small_blocks  0                      default
data04/r720-02  prefetch              all                    default
Starting the send
Here goes:
[root@r720-02:~] # time syncoid -r --no-sync-snap --compress=lzo --quiet data01 root@10.55.0.141:data04/r720-02
Enter passphrase for key '/root/.ssh/id_ed25519':
Two thoughts:
- syncoid is not ssh-agent aware, from what I can tell (a possible workaround is sketched below)
- I started this in a tmux session
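On the first point: since syncoid just invokes ssh, I would expect pre-loading the key into ssh-agent to avoid the prompt, because the spawned ssh processes inherit SSH_AUTH_SOCK from the environment. A sketch, untested here:

# POSIX sh syntax (use 'ssh-agent -c' under csh); prompts for the passphrase once
eval "$(ssh-agent -s)"
ssh-add /root/.ssh/id_ed25519
syncoid -r --no-sync-snap --compress=lzo --quiet data01 root@10.55.0.141:data04/r720-02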
How long?
Now I wait. How long? Let’s estimate.
[18:26 r720-02 dvl ~] % zpool list data01
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data01  1.81T   760G  1.07T        -         -    29%    40%  1.00x    ONLINE  -
So, 760G to send.
The transfer start time shows up in the ps output:
[20:04 r720-02 dvl ~] % ps auwwx | grep syncoid
root   13574  23.6  0.0  39908  27348  -  Ss   18:24  21:35.80 ssh: /tmp/syncoid-root10550141-1765477470-13566-5722 [mux] (ssh)
root   13566   0.0  0.0  32388  18332  2  I+   18:24   0:00.99 /usr/local/bin/perl /usr/local/bin/syncoid -r --no-sync-snap --compress=lzo --quiet data01 root@10.55.0.141:data04/r720-02
root   30854   0.0  0.0  14404   2992  2  I+   18:41   0:00.00 sh -c zfs send -I 'data01/jails/ingress01'@'mkjail-202105291951' 'data01/jails/ingress01'@'autosnap_2025-12-11_18:30:00_frequently' | lzop | mbuffer -q -s 128k -m 16M | ssh -S /tmp/syncoid-root10550141-1765477470-13566-5722 root@10.0.0.55 ' mbuffer -q -s 128k -m 16M | lzop -dfc | zfs receive -s -F '"'"'data04/r720-02/jails/ingress01'"'"' 2>&1'
root   30858   0.0  0.0  21476   9556  2  I+   18:41   0:00.01 ssh -S /tmp/syncoid-root10550141-1765477470-13566-5722 root@10.55.0.141 mbuffer -q -s 128k -m 16M | lzop -dfc | zfs receive -s -F 'data04/r720-02/jails/ingress01' 2>&1
dvl    97198   0.0  0.0  13836   2400  3  S+   20:05   0:00.00 grep syncoid
The ssh session started at 18:24 and it is now 20:05, so 1 hour and 41 minutes have elapsed, or 101 minutes.
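Rather than doing the clock arithmetic by hand, ps can report elapsed time directly; 13574 is the PID of the ssh mux process in the output above:

ps -o etime= -p 13574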
How much copied?
root@r730-01:~/.ssh # zfs list -r data04/r720-02
NAME                              USED  AVAIL  REFER  MOUNTPOINT
data04/r720-02                   61.7G  17.2T   239K  /data04/r720-02
data04/r720-02/freebsd_releases   469M  17.2T   468M  /data04/r720-02/freebsd_releases
data04/r720-02/jails             61.2G  17.2T   290K  /data04/r720-02/jails
data04/r720-02/jails/bw          3.18G  17.2T  1.60G  /data04/r720-02/jails/bw
data04/r720-02/jails/ingress01   58.0G  17.2T  16.9G  /data04/r720-02/jails/ingress01
root@r730-01:~/.ssh #
61.7G
Doing the math: 61.7 / 760 = 0.0811 or about 8.11%
0.0811 of X is 101 minutes.
X = 101 / 0.0811
101 / 0.0811 = 1245.38 minutes = 20 hours 45 minutes 22.8 seconds
That's how long the full copy should take. It should be done at about 15:09 tomorrow.
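If you want to check my work, here is the same arithmetic via bc(1), which truncates rather than rounds:

echo "scale=4; 61.7 / 760" | bc      # fraction copied: .0811
echo "scale=2; 101 / 0.0811" | bc    # estimated total minutes: 1245.37
echo "scale=2; 1245.37 / 60" | bc    # which is about 20.75 hours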