As mentioned in my recent posts, I’m replacing the devices in an existing zpool. At first, I was going to copy the old zpool to a new zpool. Then I decided to replace the devices instead of copying. I’m also going to look at compression. I suspect it won’t take effect, because replace is a block-by-block copy, or so I think. We’ll confirm.
Before
Here is the zpool, or really, just the top-level filesystem. Notice that compression is on, with no mention of an algorithm. I’m fairly sure that indicates the default algorithm is in use (i.e. lz4). By default, I mean the default as of the time the zpool was created (back in Feb 2023).
[9:50 pro02 dvl ~] % r730
Last login: Sat Oct 18 13:05:25 2025 from pro02.startpoint.vpn.unixathome.org
[13:50 r730-01 dvl ~] % zfs get all data02
NAME    PROPERTY              VALUE                     SOURCE
data02  type                  filesystem                -
data02  creation              Thu Feb  9 23:01 2023     -
data02  used                  723G                      -
data02  available             998G                      -
data02  referenced            96K                       -
data02  compressratio         1.59x                     -
data02  mounted               no                        -
data02  quota                 none                      default
data02  reservation           none                      default
data02  recordsize            128K                      default
data02  mountpoint            none                      local
data02  sharenfs              off                       default
data02  checksum              on                        default
data02  compression           on                        local
data02  atime                 off                       local
data02  devices               on                        default
data02  exec                  on                        default
data02  setuid                on                        default
data02  readonly              off                       default
data02  jailed                off                       default
data02  snapdir               hidden                    default
data02  aclmode               discard                   default
data02  aclinherit            restricted                default
data02  createtxg             1                         -
data02  canmount              on                        default
data02  xattr                 on                        default
data02  copies                1                         default
data02  version               5                         -
data02  utf8only              off                       -
data02  normalization         none                      -
data02  casesensitivity       sensitive                 -
data02  vscan                 off                       default
data02  nbmand                off                       default
data02  sharesmb              off                       default
data02  refquota              none                      default
data02  refreservation        none                      default
data02  guid                  11969962450962706556      -
data02  primarycache          all                       default
data02  secondarycache        all                       default
data02  usedbysnapshots       0B                        -
data02  usedbydataset         96K                       -
data02  usedbychildren        723G                      -
data02  usedbyrefreservation  0B                        -
data02  logbias               latency                   default
data02  objsetid              54                        -
data02  dedup                 off                       default
data02  mlslabel              none                      default
data02  sync                  standard                  default
data02  dnodesize             legacy                    default
data02  refcompressratio      1.00x                     -
data02  written               96K                       -
data02  logicalused           1.08T                     -
data02  logicalreferenced     42.5K                     -
data02  volmode               default                   default
data02  filesystem_limit      none                      default
data02  snapshot_limit        none                      default
data02  filesystem_count      none                      default
data02  snapshot_count        none                      default
data02  snapdev               hidden                    default
data02  acltype               nfsv4                     default
data02  context               none                      default
data02  fscontext             none                      default
data02  defcontext            none                      default
data02  rootcontext           none                      default
data02  relatime              on                        default
data02  redundant_metadata    all                       default
data02  overlay               on                        default
data02  encryption            off                       default
data02  keylocation           none                      default
data02  keyformat             none                      default
data02  pbkdf2iters           0                         default
data02  special_small_blocks  0                         default
data02  snapshots_changed     Sat Oct 18 13:04:45 2025  -
data02  prefetch              all                       default
Setting compression
After setting compression explicitly, the compression property changes from on to zstd.
[13:50 r730-01 dvl ~] % sudo zfs set compression=zstd data02
[13:50 r730-01 dvl ~] % zfs get all data02 | grep zstd
data02  compression           zstd                      local
Replace, or add
I had a thought: I could attach the two new devices and make it a four-way mirror, then manually detach the two old devices.
Sitting here, listening to Saturday morning radio at nearly 10 AM, dogs walked, fed, and snoozing on the couch, I think I’m just going to do zpool replace. It does the right thing and I don’t have to muck about later.
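For the record, the attach-then-detach route would have looked roughly like this. This is just a sketch I did not run, and the device names are placeholders:

# grow the mirror to four devices, let it resilver, then drop the old pair
zpool attach data02 <existing-device> <new-device-1>
zpool attach data02 <existing-device> <new-device-2>
# once resilvering completes:
zpool detach data02 <old-device-1>
zpool detach data02 <old-device-2>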
autoexpand
Before I start the replace, I remembered that the zpool will autoexpand if the right property is set, so let’s set it.
I can’t recall which property it is, so let’s search:
[13:50 r730-01 dvl ~] % zpool get all data02 | grep auto
data02  autoreplace    off      default
data02  autoexpand     off      default
data02  autotrim       off      default
Searching for zfs autoexpand confirms that’s the one I want:
[14:00 r730-01 dvl ~] % sudo zpool set autoexpand=on data02
[14:04 r730-01 dvl ~] % zpool get autoexpand data02
NAME    PROPERTY    VALUE   SOURCE
data02  autoexpand  on      local
I wondered: why would anyone want autoexpand=off? My guess: autoexpand was once a new feature, and it defaults to off so that existing behavior is retained. I’m happy for anyone to provide further information or history about this feature.
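For what it’s worth, even with autoexpand=off I believe you can grow the pool after the fact with zpool online -e. A sketch, not something I ran here:

# -e asks ZFS to expand the device to use all available space;
# every device in the mirror must be expanded before the pool grows
zpool online -e data02 <new-device>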
OK, let’s go.
zpool destroy
I have to free up the devices first. Yesterday, I created a new zpool using those devices. I *always* get nervous about destroying data.
[14:04 r730-01 dvl ~] % zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data01      5.81T  4.80T  1.02T        -         -     2%    82%  1.00x    ONLINE  -
data02      1.73T   723G  1.03T        -         -    47%    40%  1.00x    ONLINE  -
data03      7.25T  1.29T  5.96T        -         -    32%    17%  1.00x    ONLINE  -
data04      3.62T   588K  3.62T        -         -     0%     0%  1.00x    ONLINE  -
new-data02  3.62T   657G  2.98T        -         -     0%    17%  1.00x    ONLINE  -
zroot        424G  35.8G   388G        -         -    14%     8%  1.00x    ONLINE  -
[14:06 r730-01 dvl ~] % sudo zpool destroy new-data02
[14:07 r730-01 dvl ~] %
OK, the difficult part is done.
Partitioning the first drive
I always partition my ZFS drives. It is not necessary, but it is something I do, mainly so that if I have to replace a drive and the new drive is slightly smaller, I’m not screwed. See also my earlier post: ZFS: do not give it all your HDD
The two drives, as seen in yesterday’s post, are /dev/nda0 and /dev/nda3.
These are the drives I have. As an Amazon Associate I earn from qualifying purchases.
- The 4TB NVMe cards are Samsung 990 EVO Plus SSD 4TB, PCIe Gen 4×4 | Gen 5×2 M.2
[13:50 r730-01 dvl ~] % sudo gpart create -s gpt /dev/nda0
nda0 created
[15:21 r730-01 dvl ~] % sudo diskinfo -v /dev/nda0
/dev/nda0
        512             # sectorsize
        4000787030016   # mediasize in bytes (3.6T)
        7814037168      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        Samsung SSD 990 EVO Plus 4TB    # Disk descr.
        S7U8NJ0Y716854P # Disk ident.
        nvme0           # Attachment
        Yes             # TRIM/UNMAP support
        0               # Rotation rate in RPM

[15:22 r730-01 dvl ~] % grep nda0 /var/run/dmesg.boot
nda0 at nvme0 bus 0 scbus18 target 0 lun 1
nda0:
nda0: Serial Number S7U8NJ0Y716854P
nda0: nvme version 2.0
nda0: 3815447MB (7814037168 512 byte sectors)
[15:22 r730-01 dvl ~] % gpart show /dev/nda0
=>        40  7814037088  nda0  GPT  (3.6T)
          40  7814037088        - free -  (3.6T)

[15:26 r730-01 dvl ~] %
In the following command, the -s argument (7814036000) is the free sector count seen in the previous command (7814037088) minus the 1088 sectors I excluded as spare. That’s 1088 * 512, or 557,056 bytes. As seen below, that’s the 544K of free space at the end. This is my buffer should any future replacement 4TB card be slightly smaller.
[15:26 r730-01 dvl ~] % sudo gpart add -i 1 -t freebsd-zfs -a 4k -l Samsung_990_S7U8NJ0Y716854P -s 7814036000 /dev/nda0
nda0p1 added
[15:41 r730-01 dvl ~] % gpart show /dev/nda0
=>        40  7814037088  nda0  GPT  (3.6T)
          40  7814036000     1  freebsd-zfs  (3.6T)
  7814036040        1088        - free -  (544K)

[15:42 r730-01 dvl ~] %
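A quick sanity check of that arithmetic, using /bin/sh (not part of the original session):

sh -c 'echo $((7814037088 - 7814036000)) $((1088 * 512)) $((1088 * 512 / 1024))'
1088 557056 544

That is 1088 spare sectors, 557,056 bytes, or 544K.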
Let’s look at the label, not the partition type:
[15:46 r730-01 dvl ~] % gpart show -l /dev/nda0
=>        40  7814037088  nda0  GPT  (3.6T)
          40  7814036000     1  Samsung_990_S7U8NJ0Y716854P  (3.6T)
  7814036040        1088        - free -  (544K)
And here, let’s show the provider names, not the partition index.
[15:46 r730-01 dvl ~] % gpart show -lp /dev/nda0
=>        40  7814037088    nda0  GPT  (3.6T)
          40  7814036000  nda0p1  Samsung_990_S7U8NJ0Y716854P  (3.6T)
  7814036040        1088          - free -  (544K)

[15:47 r730-01 dvl ~] %
That label also appears over here. This is the device name I’ll use when running the zpool replace.
[15:47 r730-01 dvl ~] % ls -l /dev/gpt/Sam*
crw-r-----  1 root  operator  0x443 2025.10.18 15:41 /dev/gpt/Samsung_990_S7U8NJ0Y716854P
An idea, never to be used I’m sure: I could use that 544K slot for something. But I’m not going to. It was just an idea. Suggestions for its use are welcome.
The replace
It was at this point that I realized, after many searches of my own blog, that the command is not zfs replace.
It is zpool replace.
Of course it is. It is a pool action, not a filesystem action. That explains why it was so hard for me to find my own examples. This is not the first time I’ve made that search error.
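For reference, the general form is (simplified; see zpool-replace(8) for the full option list):

zpool replace [-f] <pool> <old-device> [<new-device>]

which, for this pool, becomes the command shown a little further down.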
This is the zpool before:
[15:49 r730-01 dvl ~] % zpool status data02
  pool: data02
 state: ONLINE
  scan: scrub repaired 0B in 00:37:38 with 0 errors on Thu Oct 16 04:11:19 2025
config:

        NAME                        STATE     READ WRITE CKSUM
        data02                      ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            gpt/BTHC536302CT400VGN  ONLINE       0     0     0
            da14p1                  ONLINE       0     0     0

errors: No known data errors
Let’s try showing the full path:
[16:02 r730-01 dvl ~] % zpool status -P data02
  pool: data02
 state: ONLINE
  scan: scrub repaired 0B in 00:37:38 with 0 errors on Thu Oct 16 04:11:19 2025
config:

        NAME                             STATE     READ WRITE CKSUM
        data02                           ONLINE       0     0     0
          mirror-0                       ONLINE       0     0     0
            /dev/gpt/BTHC536302CT400VGN  ONLINE       0     0     0
            /dev/da14p1                  ONLINE       0     0     0
OK, let’s try this command:
[16:02 r730-01 dvl ~] % sudo zpool replace data02 gpt/BTHC536302CT400VGN gpt/Samsung_990_S7U8NJ0Y716854P
[16:44 r730-01 dvl ~] % zpool status -Lp data02
  pool: data02
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Oct 18 16:43:58 2025
        52.2G / 723G scanned at 6.53G/s, 0B / 723G issued
        0B resilvered, 0.00% done, no estimated completion time
config:

        NAME                                   STATE     READ WRITE CKSUM
        data02                                 ONLINE       0     0     0
          mirror-0                             ONLINE       0     0     0
            replacing-0                        ONLINE       0     0     0
              gpt/BTHC536302CT400VGN           ONLINE       0     0     0
              gpt/Samsung_990_S7U8NJ0Y716854P  ONLINE       0     0     0
            da14p1                             ONLINE       0     0     0

errors: No known data errors
[16:44 r730-01 dvl ~] %
Now, I wait.
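If I had wanted the shell to block until the resilver finished, I believe zpool wait can do that. A sketch, not something I ran:

# returns when there is no resilver activity left on data02
zpool wait -t resilver data02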
The zpool
This is what the zpool looks like now:
[16:44 r730-01 dvl ~] % zfs list -r data02 NAME USED AVAIL REFER MOUNTPOINT data02 723G 998G 96K none data02/freshports 329G 998G 88K none data02/freshports/dev-ingress01 246G 998G 88K none data02/freshports/dev-ingress01/dvl-src 209G 998G 209G /jails/dev-ingress01/usr/home/dvl/src data02/freshports/dev-ingress01/freshports 26.6G 998G 1.75G /jails/dev-ingress01/var/db/freshports data02/freshports/dev-ingress01/freshports/cache 576K 998G 140K /jails/dev-ingress01/var/db/freshports/cache data02/freshports/dev-ingress01/freshports/cache/html 332K 998G 332K /jails/dev-ingress01/var/db/freshports/cache/html data02/freshports/dev-ingress01/freshports/cache/spooling 104K 998G 104K /jails/dev-ingress01/var/db/freshports/cache/spooling data02/freshports/dev-ingress01/freshports/message-queues 24.9G 998G 13.7M /jails/dev-ingress01/var/db/freshports/message-queues data02/freshports/dev-ingress01/freshports/message-queues/archive 24.8G 998G 11.3G /jails/dev-ingress01/var/db/freshports/message-queues/archive data02/freshports/dev-ingress01/ingress 5.66G 998G 132K /jails/dev-ingress01/var/db/ingress data02/freshports/dev-ingress01/ingress/latest_commits 592K 998G 116K /jails/dev-ingress01/var/db/ingress/latest_commits data02/freshports/dev-ingress01/ingress/message-queues 3.22M 998G 1.64M /jails/dev-ingress01/var/db/ingress/message-queues data02/freshports/dev-ingress01/ingress/repos 5.65G 998G 120K /jails/dev-ingress01/var/db/ingress/repos data02/freshports/dev-ingress01/ingress/repos/doc 661M 998G 659M /jails/dev-ingress01/var/db/ingress/repos/doc data02/freshports/dev-ingress01/ingress/repos/ports 2.24G 998G 2.24G /jails/dev-ingress01/var/db/ingress/repos/ports data02/freshports/dev-ingress01/ingress/repos/src 2.76G 998G 2.76G /jails/dev-ingress01/var/db/ingress/repos/src data02/freshports/dev-ingress01/jails 4.65G 998G 104K /jails/dev-ingress01/jails data02/freshports/dev-ingress01/jails/freshports 4.65G 998G 507M /jails/dev-ingress01/jails/freshports data02/freshports/dev-ingress01/jails/freshports/ports 4.16G 998G 4.16G /jails/dev-ingress01/jails/freshports/usr/ports data02/freshports/dev-ingress01/modules 4.98M 998G 4.98M /jails/dev-ingress01/usr/local/lib/perl5/site_perl/FreshPorts data02/freshports/dev-ingress01/scripts 4.16M 998G 3.41M /jails/dev-ingress01/usr/local/libexec/freshports data02/freshports/dev-nginx01 54.9M 998G 96K none data02/freshports/dev-nginx01/www 54.9M 998G 96K /jails/dev-nginx01/usr/local/www data02/freshports/dev-nginx01/www/freshports 51.9M 998G 51.9M /jails/dev-nginx01/usr/local/www/freshports data02/freshports/dev-nginx01/www/freshsource 2.82M 998G 2.82M /jails/dev-nginx01/usr/local/www/freshsource data02/freshports/dvl-ingress01 16.4G 998G 96K none data02/freshports/dvl-ingress01/dvl-src 84.3M 998G 84.1M /jails/dvl-ingress01/usr/home/dvl/src data02/freshports/dvl-ingress01/freshports 4.09G 998G 96K /jails/dvl-ingress01/var/db/freshports data02/freshports/dvl-ingress01/freshports/cache 632K 998G 96K /jails/dvl-ingress01/var/db/freshports/cache data02/freshports/dvl-ingress01/freshports/cache/html 424K 998G 344K /jails/dvl-ingress01/var/db/freshports/cache/html data02/freshports/dvl-ingress01/freshports/cache/spooling 112K 998G 112K /jails/dvl-ingress01/var/db/freshports/cache/spooling data02/freshports/dvl-ingress01/freshports/message-queues 4.08G 998G 10.7M /jails/dvl-ingress01/var/db/freshports/message-queues data02/freshports/dvl-ingress01/freshports/message-queues/archive 4.07G 998G 4.07G /jails/dvl-ingress01/var/db/freshports/message-queues/archive 
data02/freshports/dvl-ingress01/ingress 8.55G 998G 120K /jails/dvl-ingress01/var/db/ingress data02/freshports/dvl-ingress01/ingress/latest_commits 100K 998G 100K /jails/dvl-ingress01/var/db/ingress/latest_commits data02/freshports/dvl-ingress01/ingress/message-queues 2.19M 998G 2.19M /jails/dvl-ingress01/var/db/ingress/message-queues data02/freshports/dvl-ingress01/ingress/repos 8.55G 998G 112K /jails/dvl-ingress01/var/db/ingress/repos data02/freshports/dvl-ingress01/ingress/repos/doc 685M 998G 657M /jails/dvl-ingress01/var/db/ingress/repos/doc data02/freshports/dvl-ingress01/ingress/repos/ports 3.47G 998G 2.24G /jails/dvl-ingress01/var/db/ingress/repos/ports data02/freshports/dvl-ingress01/ingress/repos/src 4.41G 998G 2.70G /jails/dvl-ingress01/var/db/ingress/repos/src data02/freshports/dvl-ingress01/jails 3.67G 998G 104K /jails/dvl-ingress01/jails data02/freshports/dvl-ingress01/jails/freshports 3.67G 998G 506M /jails/dvl-ingress01/jails/freshports data02/freshports/dvl-ingress01/jails/freshports/ports 3.18G 998G 2.64G /jails/dvl-ingress01/jails/freshports/usr/ports data02/freshports/dvl-ingress01/modules 2.90M 998G 2.90M /jails/dvl-ingress01/usr/local/lib/perl5/site_perl/FreshPorts data02/freshports/dvl-ingress01/scripts 3.07M 998G 2.44M /jails/dvl-ingress01/usr/local/libexec/freshports data02/freshports/dvl-nginx01 22.3M 998G 96K none data02/freshports/dvl-nginx01/www 22.2M 998G 96K none data02/freshports/dvl-nginx01/www/freshports 20.3M 998G 20.3M /jails/dvl-nginx01/usr/local/www/freshports data02/freshports/dvl-nginx01/www/freshsource 1.80M 998G 1.80M /jails/dvl-nginx01/usr/local/www/freshsource data02/freshports/jailed 24.3G 998G 96K none data02/freshports/jailed/dev-ingress01 3.22G 998G 96K none data02/freshports/jailed/dev-ingress01/mkjail 3.22G 998G 1.34G /var/db/mkjail data02/freshports/jailed/dev-ingress01/mkjail/14.2-RELEASE 952M 998G 952M /var/db/mkjail/14.2-RELEASE data02/freshports/jailed/dev-ingress01/mkjail/14.3-RELEASE 963M 998G 963M /var/db/mkjail/14.3-RELEASE data02/freshports/jailed/dev-nginx01 1.27G 998G 96K none data02/freshports/jailed/dev-nginx01/cache 1.27G 998G 96K /var/db/freshports/cache data02/freshports/jailed/dev-nginx01/cache/categories 1.00M 998G 964K /var/db/freshports/cache/categories data02/freshports/jailed/dev-nginx01/cache/commits 96K 998G 96K /var/db/freshports/cache/commits data02/freshports/jailed/dev-nginx01/cache/daily 5.02M 998G 4.95M /var/db/freshports/cache/daily data02/freshports/jailed/dev-nginx01/cache/general 2.73M 998G 2.66M /var/db/freshports/cache/general data02/freshports/jailed/dev-nginx01/cache/news 220K 998G 148K /var/db/freshports/cache/news data02/freshports/jailed/dev-nginx01/cache/packages 384K 998G 312K /var/db/freshports/cache/packages data02/freshports/jailed/dev-nginx01/cache/pages 96K 998G 96K /var/db/freshports/cache/pages data02/freshports/jailed/dev-nginx01/cache/ports 1.26G 998G 1.26G /var/db/freshports/cache/ports data02/freshports/jailed/dev-nginx01/cache/spooling 208K 998G 120K /var/db/freshports/cache/spooling data02/freshports/jailed/dvl-ingress01 469M 998G 96K none data02/freshports/jailed/dvl-ingress01/distfiles 96K 998G 96K none data02/freshports/jailed/dvl-ingress01/mkjail 469M 998G 469M /var/db/mkjail data02/freshports/jailed/dvl-nginx01 1.88M 998G 96K none data02/freshports/jailed/dvl-nginx01/cache 1.69M 998G 148K /var/db/freshports/cache data02/freshports/jailed/dvl-nginx01/cache/categories 96K 998G 96K /var/db/freshports/cache/categories data02/freshports/jailed/dvl-nginx01/cache/commits 172K 998G 108K 
/var/db/freshports/cache/commits data02/freshports/jailed/dvl-nginx01/cache/daily 216K 998G 152K /var/db/freshports/cache/daily data02/freshports/jailed/dvl-nginx01/cache/general 96K 998G 96K /var/db/freshports/cache/general data02/freshports/jailed/dvl-nginx01/cache/news 212K 998G 148K /var/db/freshports/cache/news data02/freshports/jailed/dvl-nginx01/cache/packages 96K 998G 96K /var/db/freshports/cache/packages data02/freshports/jailed/dvl-nginx01/cache/pages 96K 998G 96K /var/db/freshports/cache/pages data02/freshports/jailed/dvl-nginx01/cache/ports 316K 998G 252K /var/db/freshports/cache/ports data02/freshports/jailed/dvl-nginx01/cache/spooling 184K 998G 120K /var/db/freshports/cache/spooling data02/freshports/jailed/dvl-nginx01/freshports 96K 998G 96K none data02/freshports/jailed/stage-ingress01 1.39G 998G 96K none data02/freshports/jailed/stage-ingress01/data 96K 998G 96K none data02/freshports/jailed/stage-ingress01/mkjail 1.39G 998G 469M /var/db/mkjail data02/freshports/jailed/stage-ingress01/mkjail/14.3-RELEASE 954M 998G 954M /var/db/mkjail/14.3-RELEASE data02/freshports/jailed/stage-nginx01 13.9G 998G 96K none data02/freshports/jailed/stage-nginx01/cache 13.9G 998G 288K /var/db/freshports/cache data02/freshports/jailed/stage-nginx01/cache/categories 1.60M 998G 1.53M /var/db/freshports/cache/categories data02/freshports/jailed/stage-nginx01/cache/commits 12.1G 998G 12.1G /var/db/freshports/cache/commits data02/freshports/jailed/stage-nginx01/cache/daily 34.4M 998G 34.3M /var/db/freshports/cache/daily data02/freshports/jailed/stage-nginx01/cache/general 7.11M 998G 7.04M /var/db/freshports/cache/general data02/freshports/jailed/stage-nginx01/cache/news 220K 998G 148K /var/db/freshports/cache/news data02/freshports/jailed/stage-nginx01/cache/packages 416K 998G 344K /var/db/freshports/cache/packages data02/freshports/jailed/stage-nginx01/cache/pages 96K 998G 96K /var/db/freshports/cache/pages data02/freshports/jailed/stage-nginx01/cache/ports 1.80G 998G 1.79G /var/db/freshports/cache/ports data02/freshports/jailed/stage-nginx01/cache/spooling 216K 998G 120K /var/db/freshports/cache/spooling data02/freshports/jailed/test-ingress01 3.21G 998G 96K none data02/freshports/jailed/test-ingress01/data 96K 998G 96K none data02/freshports/jailed/test-ingress01/mkjail 3.21G 998G 1.34G /var/db/mkjail data02/freshports/jailed/test-ingress01/mkjail/14.2-RELEASE 952M 998G 952M /var/db/mkjail/14.2-RELEASE data02/freshports/jailed/test-ingress01/mkjail/14.3-RELEASE 954M 998G 954M /var/db/mkjail/14.3-RELEASE data02/freshports/jailed/test-nginx01 841M 998G 96K none data02/freshports/jailed/test-nginx01/cache 841M 998G 272K /var/db/freshports/cache data02/freshports/jailed/test-nginx01/cache/categories 740K 998G 668K /var/db/freshports/cache/categories data02/freshports/jailed/test-nginx01/cache/commits 96K 998G 96K /var/db/freshports/cache/commits data02/freshports/jailed/test-nginx01/cache/daily 3.43M 998G 3.36M /var/db/freshports/cache/daily data02/freshports/jailed/test-nginx01/cache/general 2.78M 998G 2.68M /var/db/freshports/cache/general data02/freshports/jailed/test-nginx01/cache/news 220K 998G 148K /var/db/freshports/cache/news data02/freshports/jailed/test-nginx01/cache/packages 220K 998G 148K /var/db/freshports/cache/packages data02/freshports/jailed/test-nginx01/cache/pages 96K 998G 96K /var/db/freshports/cache/pages data02/freshports/jailed/test-nginx01/cache/ports 833M 998G 833M /var/db/freshports/cache/ports data02/freshports/jailed/test-nginx01/cache/spooling 216K 998G 120K 
/var/db/freshports/cache/spooling data02/freshports/stage-ingress01 19.4G 998G 96K none data02/freshports/stage-ingress01/cache 592K 998G 96K /jails/stage-ingress01/var/db/freshports/cache data02/freshports/stage-ingress01/cache/html 392K 998G 320K /jails/stage-ingress01/var/db/freshports/cache/html data02/freshports/stage-ingress01/cache/spooling 104K 998G 104K /jails/stage-ingress01/var/db/freshports/cache/spooling data02/freshports/stage-ingress01/freshports 10.7G 998G 96K none data02/freshports/stage-ingress01/freshports/archive 10.7G 998G 10.7G /jails/stage-ingress01/var/db/freshports/message-queues/archive data02/freshports/stage-ingress01/freshports/message-queues 11.7M 998G 9.57M /jails/stage-ingress01/var/db/freshports/message-queues data02/freshports/stage-ingress01/ingress 5.53G 998G 96K /jails/stage-ingress01/var/db/ingress data02/freshports/stage-ingress01/ingress/latest_commits 404K 998G 100K /jails/stage-ingress01/var/db/ingress/latest_commits data02/freshports/stage-ingress01/ingress/message-queues 2.44M 998G 1.18M /jails/stage-ingress01/var/db/ingress/message-queues data02/freshports/stage-ingress01/ingress/repos 5.52G 998G 5.51G /jails/stage-ingress01/var/db/ingress/repos data02/freshports/stage-ingress01/jails 507M 998G 104K /jails/stage-ingress01/jails data02/freshports/stage-ingress01/jails/freshports 506M 998G 506M /jails/stage-ingress01/jails/freshports data02/freshports/stage-ingress01/ports 2.61G 998G 2.61G /jails/stage-ingress01/jails/freshports/usr/ports data02/freshports/test-ingress01 23.8G 998G 96K none data02/freshports/test-ingress01/freshports 12.5G 998G 1.75G /jails/test-ingress01/var/db/freshports data02/freshports/test-ingress01/freshports/cache 520K 998G 96K /jails/test-ingress01/var/db/freshports/cache data02/freshports/test-ingress01/freshports/cache/html 320K 998G 320K /jails/test-ingress01/var/db/freshports/cache/html data02/freshports/test-ingress01/freshports/cache/spooling 104K 998G 104K /jails/test-ingress01/var/db/freshports/cache/spooling data02/freshports/test-ingress01/freshports/message-queues 10.8G 998G 10.4M /jails/test-ingress01/var/db/freshports/message-queues data02/freshports/test-ingress01/freshports/message-queues/archive 10.8G 998G 10.8G /jails/test-ingress01/var/db/freshports/message-queues/archive data02/freshports/test-ingress01/ingress 8.25G 998G 128K /jails/test-ingress01/var/db/ingress data02/freshports/test-ingress01/ingress/latest_commits 344K 998G 100K /jails/test-ingress01/var/db/ingress/latest_commits data02/freshports/test-ingress01/ingress/message-queues 2.76M 998G 1.19M /jails/test-ingress01/var/db/ingress/message-queues data02/freshports/test-ingress01/ingress/repos 8.24G 998G 5.43G /jails/test-ingress01/var/db/ingress/repos data02/freshports/test-ingress01/jails 3.07G 998G 96K /jails/test-ingress01/jails data02/freshports/test-ingress01/jails/freshports 3.07G 998G 507M /jails/test-ingress01/jails/freshports data02/freshports/test-ingress01/jails/freshports/ports 2.58G 998G 2.58G /jails/test-ingress01/jails/freshports/usr/ports data02/jails 307G 998G 9.54G /jails data02/jails/bacula 16.9G 998G 16.3G /jails/bacula data02/jails/bacula-sd-02 3.44G 998G 2.85G /jails/bacula-sd-02 data02/jails/bacula-sd-03 4.85G 998G 4.26G /jails/bacula-sd-03 data02/jails/besser 5.27G 998G 4.19G /jails/besser data02/jails/certs 3.02G 998G 2.42G /jails/certs data02/jails/certs-rsync 2.99G 998G 2.42G /jails/certs-rsync data02/jails/cliff2 3.05G 998G 2.47G /jails/cliff2 data02/jails/dev-ingress01 4.61G 998G 3.62G /jails/dev-ingress01 
data02/jails/dev-nginx01 4.06G 998G 3.43G /jails/dev-nginx01 data02/jails/dev-pgeu 7.62G 998G 7.62G /jails/dev-pgeu data02/jails/dns-hidden-master 3.15G 998G 2.56G /jails/dns-hidden-master data02/jails/dns1 10.9G 998G 4.95G /jails/dns1 data02/jails/dvl-ingress01 8.13G 998G 5.34G /jails/dvl-ingress01 data02/jails/dvl-nginx01 1.75G 998G 1.16G /jails/dvl-nginx01 data02/jails/git 4.95G 998G 4.35G /jails/git data02/jails/jail_within_jail 1.28G 998G 752M /jails/jail_within_jail data02/jails/mqtt01 3.60G 998G 3.00G /jails/mqtt01 data02/jails/mydev 23.5G 998G 22.0G /jails/mydev data02/jails/mysql01 21.6G 998G 11.2G /jails/mysql01 data02/jails/nsnotify 3.28G 998G 2.61G /jails/nsnotify data02/jails/pg01 44.3G 998G 13.3G /jails/pg01 data02/jails/pg02 11.0G 998G 10.4G /jails/pg02 data02/jails/pg03 12.5G 998G 10.4G /jails/pg03 data02/jails/pkg01 17.0G 998G 13.7G /jails/pkg01 data02/jails/samdrucker 4.87G 998G 4.28G /jails/samdrucker data02/jails/serpico 3.10G 998G 2.49G /jails/serpico data02/jails/stage-ingress01 5.34G 998G 2.95G /jails/stage-ingress01 data02/jails/stage-nginx01 1.93G 998G 1.32G /jails/stage-nginx01 data02/jails/svn 10.5G 998G 9.77G /jails/svn data02/jails/talos 2.97G 998G 2.40G /jails/talos data02/jails/test-ingress01 2.31G 998G 1.37G /jails/test-ingress01 data02/jails/test-nginx01 1.89G 998G 1.24G /jails/test-nginx01 data02/jails/unifi01 29.5G 998G 12.1G /jails/unifi01 data02/jails/webserver 12.3G 998G 11.4G /jails/webserver data02/mkjail 96K 998G 96K /mkjail data02/vm 85.5G 998G 7.24G /usr/local/vm data02/vm/freebsd-test 743M 998G 112K /usr/local/vm/freebsd-test data02/vm/freebsd-test/disk0 743M 998G 743M - data02/vm/hass 73.8G 998G 15.1G /usr/local/vm/hass data02/vm/home-assistant 351M 998G 351M /usr/local/vm/home-assistant data02/vm/myguest 3.38G 998G 3.38G /usr/local/vm/myguest
One quick peek
30 minutes to go:
[16:44 r730-01 dvl ~] % zpool status -Lp data02
  pool: data02
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Oct 18 16:43:58 2025
        723G / 723G scanned, 5.25G / 723G issued at 413M/s
        5.36G resilvered, 0.73% done, 00:29:38 to go
config:

        NAME                                   STATE     READ WRITE CKSUM
        data02                                 ONLINE       0     0     0
          mirror-0                             ONLINE       0     0     0
            replacing-0                        ONLINE       0     0     0
              gpt/BTHC536302CT400VGN           ONLINE       0     0     0
              gpt/Samsung_990_S7U8NJ0Y716854P  ONLINE       0     0     0  (resilvering)
            da14p1                             ONLINE       0     0     0

errors: No known data errors
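While waiting, zpool iostat with an interval is a handy way to watch the resilver work through the devices. A sketch, not from the original session:

# per-vdev I/O statistics for data02, refreshed every 5 seconds, until interrupted
zpool iostat -v data02 5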
30 minutes later
30 minutes later, while I’m watching S3E3 of The Diplomat, we’re done:
[16:58 r730-01 dvl ~] % zpool status -Lp data02
  pool: data02
 state: ONLINE
  scan: resilvered 746G in 00:32:13 with 0 errors on Sat Oct 18 17:16:11 2025
config:

        NAME                                 STATE     READ WRITE CKSUM
        data02                               ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            gpt/Samsung_990_S7U8NJ0Y716854P  ONLINE       0     0     0
            da14p1                           ONLINE       0     0     0
The next replace
Next, I’m replacing da14:
[17:18 r730-01 dvl ~] % grep da14 /var/run/dmesg.boot
da14 at mrsas0 bus 1 scbus1 target 14 lun 0
da14: Fixed Direct Access SPC-4 SCSI device
da14: Serial Number S2NKNCAH501524K
da14: 150.000MB/s transfers
da14: 1831420MB (3750748848 512 byte sectors)
da14: quirks=0x8<4K>
I’m replacing it with the other NVMe drive, /dev/nda3, which, like the first one, needs some gpart work.
[17:32 r730-01 dvl ~] % grep nda3 /var/run/dmesg.boot
nda3 at nvme3 bus 0 scbus21 target 0 lun 1
nda3:
nda3: Serial Number S7U8NJ0Y716801F
nda3: nvme version 2.0
nda3: 3815447MB (7814037168 512 byte sectors)
This is the awkward part. I know the device name of the first NVMe drive added above, but how would I get that from just the zpool status output?
The only way I see is gpart show -l | less, then searching for the label shown above:
=>        40  7814037088  nda0  GPT  (3.6T)
          40  7814036000     1  Samsung_990_S7U8NJ0Y716854P  (3.6T)
  7814036040        1088        - free -  (544K)
There, it’s nda0.
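In hindsight, I think glabel status would get there more directly, since it maps each label back to its provider. A sketch, with the output approximated from memory rather than captured:

% glabel status | grep Samsung_990_S7U8NJ0Y716854P
gpt/Samsung_990_S7U8NJ0Y716854P     N/A  nda0p1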
Let’s look at the partition table of the first drive:
[17:30 r730-01 dvl ~] % gpart backup /dev/nda0
GPT 128
1   freebsd-zfs          40  7814036000  Samsung_990_S7U8NJ0Y716854P
Next, let’s restore that.
[17:32 r730-01 dvl ~] % gpart backup /dev/nda0 | sudo gpart restore /dev/nda3
[17:32 r730-01 dvl ~] % gpart show -l nda3
=>        34  7814037101  nda3  GPT  (3.6T)
          34           6        - free -  (3.0K)
          40  7814036000     1  (null)  (3.6T)
  7814036040        1095        - free -  (548K)
And set the label:
[17:35 r730-01 dvl ~] % sudo gpart modify -i 1 -l Samsung_990_S7U8NJ0Y716801F nda3
nda3p1 modified
[17:35 r730-01 dvl ~] % gpart show -l nda3
=>        34  7814037101  nda3  GPT  (3.6T)
          34           6        - free -  (3.0K)
          40  7814036000     1  Samsung_990_S7U8NJ0Y716801F  (3.6T)
  7814036040        1095        - free -  (548K)
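In hindsight, I could have skipped the backup/restore/modify dance and simply repeated the commands used on the first drive, with the new label. A sketch of that alternative, not what I ran:

# one-step alternative: create the scheme and the labelled partition directly
gpart create -s gpt /dev/nda3
gpart add -i 1 -t freebsd-zfs -a 4k -l Samsung_990_S7U8NJ0Y716801F -s 7814036000 /dev/nda3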
Next, another replace.
Another replace
Let’s do the next replace. At the end of this, data02 should be a 4TB (more or less) pool. Remember, we set autoexpand on.
[17:35 r730-01 dvl ~] % zpool list data02
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data02  1.73T   723G  1.03T        -         -    47%    40%  1.00x    ONLINE  -
[17:37 r730-01 dvl ~] % zpool status data02
  pool: data02
 state: ONLINE
  scan: resilvered 746G in 00:32:13 with 0 errors on Sat Oct 18 17:16:11 2025
config:

        NAME                                 STATE     READ WRITE CKSUM
        data02                               ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            gpt/Samsung_990_S7U8NJ0Y716854P  ONLINE       0     0     0
            da14p1                           ONLINE       0     0     0

errors: No known data errors
Here we go!
[17:37 r730-01 dvl ~] % sudo zpool replace data02 da14p1 gpt/Samsung_990_S7U8NJ0Y716801F
[17:38 r730-01 dvl ~] % zpool status data02
  pool: data02
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Oct 18 17:38:27 2025
        46.9G / 723G scanned at 6.70G/s, 0B / 723G issued
        0B resilvered, 0.00% done, no estimated completion time
config:

        NAME                                   STATE     READ WRITE CKSUM
        data02                                 ONLINE       0     0     0
          mirror-0                             ONLINE       0     0     0
            gpt/Samsung_990_S7U8NJ0Y716854P    ONLINE       0     0     0
            replacing-1                        ONLINE       0     0     0
              da14p1                           ONLINE       0     0     0
              gpt/Samsung_990_S7U8NJ0Y716801F  ONLINE       0     0     0

errors: No known data errors
[17:38 r730-01 dvl ~] %
A few minutes later, we had:
... 61.0G resilvered, 8.01% done, 00:12:03 to go ...
I don’t know why the second replace would be so much faster than the first replace.
Could it be ARC?
And we’re done
Done.
[17:50 r730-01 dvl ~] % zpool status data02
  pool: data02
 state: ONLINE
  scan: resilvered 747G in 00:13:39 with 0 errors on Sat Oct 18 17:52:06 2025
config:

        NAME                                 STATE     READ WRITE CKSUM
        data02                               ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            gpt/Samsung_990_S7U8NJ0Y716854P  ONLINE       0     0     0
            gpt/Samsung_990_S7U8NJ0Y716801F  ONLINE       0     0     0

errors: No known data errors
Woo hoo! That was much easier and far less work than copying the data over.
After:
[17:52 r730-01 dvl ~] % zpool list data02
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data02  3.62T   722G  2.92T        -         -    22%    19%  1.00x    ONLINE  -
It’s gone from 1.73T to 3.62T. Mission accomplished.
Also notice that ALLOC is essentially unchanged (723G before, 722G after). That confirms the block-by-block expectation from above: the existing data was not recompressed during the replace.
Yes, compression is now zstd everywhere. It won’t have much effect until data is rewritten.
[18:00 r730-01 dvl ~] % zfs get -t filesystem -r compression data02 | grep -v zstd
NAME  PROPERTY  VALUE  SOURCE
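If I ever wanted the existing data recompressed with zstd, the usual approach is to rewrite it, for example by replicating a dataset and swapping it in. A rough sketch only; somedataset is a placeholder, and I am not doing this here:

zfs snapshot data02/somedataset@recompress
zfs send data02/somedataset@recompress | zfs receive data02/somedataset-zstd
# verify the copy, then rename/swap the datasets and destroy the original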
Thank you for coming to my TED talk.