This post is part of a series of storage-related posts. The goal: move the zroot of r730-01 to smaller drives.
NOTE: This approach failed. Investigations continue; however, this is not something you should try.
Earlier today, as the first step in my proof-of-concept, I moved the zroot of r730-04 from 2x SATADOM devices (128G) to 2x SSD devices (1TB). I could have just installed FreeBSD on the 1TB drives, but that wouldn’t be as fun.
In this post:
- FreeBSD 14.3
zpool replace cannot accept smaller devices
In the previous post, I used zpool replace to migrate the zpool from one set of devices to another.
The main point, from man zpool-replace:
The size of new-device must be greater than or equal to the minimum size of all the devices in a mirror or raidz configuration.
My first idea made it sound like I needed to do this:
- run gpart on the new drives
- boot live thumbdrive
- zfs send | zfs receive
- stamp bootcode
While researching this, I found these pages:
- Replace Mirrored ZFS Boot Pool (rpool) with Smaller Devices
- Migrate Proxmox VE to smaller root disks
- Migrating a ZFS Pool to smaller disks via https://bsd.network/web/@hbauer@social.caserio.de/115529430244554090
In my head, this was going to be way more complex than zpool replace (a rough sketch follows this list). For example:
- The two zpools need different names.
- The old zpool needs to be exported before reboot.
- The new zpool needs to be renamed before reboot.
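A minimal sketch of that route, done from a live environment (the pool name newroot and the snapshot name migrate are placeholders; I never ran any of this):

zpool import -f -N -R /mnt zroot                      # the existing pool, datasets left unmounted
zfs snapshot -r zroot@migrate
zpool create -R /mnt/new newroot mirror da2p3 da3p3   # the new, smaller pool
zfs send -R zroot@migrate | zfs receive -Fdu newroot
zpool export zroot
zpool export newroot
zpool import -N newroot zroot                         # the rename happens on import
zpool export zroot
# then stamp bootcode on the new drives and reboot without the old ones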
Then I read this post. It was a reply to my post on Mastodon.
It was there I learned I can use zpool add instead of zpool replace, then use zpool remove to take out the larger devices.
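In other words, the plan becomes something like this (mirror-0 is the vdev name zpool status reports; the remove step is untested here, because I never got that far):

zpool add zroot mirror da2p3 da3p3   # add a second, smaller mirror vdev
zpool remove zroot mirror-0          # evacuate and remove the original mirror vdev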
The zpool needs the device_removal feature flag enabled. To confirm whether your zpool has it, try this:
dvl@r730-04:~ $ zpool get feature@device_removal zroot
NAME   PROPERTY                VALUE    SOURCE
zroot  feature@device_removal  enabled  local
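Had that come back disabled, I believe it can be enabled with the following (untested here, since mine was already enabled):

zpool set feature@device_removal=enabled zroot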
Good. Let’s go.
The destination drives
I was originally going to migrate to these drives.
Now I’m going to use these drives instead:

Here they are being inserted into the server:
Nov 20 14:13:40 r730-04 kernel: mrsas0: System PD created target ID: 0x6
Nov 20 14:13:40 r730-04 kernel: mrsas0: System PD created target ID: 0x7
Nov 20 14:13:40 r730-04 kernel: da2 at mrsas0 bus 1 scbus1 target 7 lun 0
Nov 20 14:13:40 r730-04 kernel: da2: Fixed Direct Access SPC-4 SCSI device
Nov 20 14:13:40 r730-04 kernel: da2: Serial Number 134106402147
Nov 20 14:13:40 r730-04 kernel: da2: 150.000MB/s transfers
Nov 20 14:13:40 r730-04 kernel: da2: 30533MB (62533296 512 byte sectors)
Nov 20 14:13:40 r730-04 kernel: da3 at mrsas0 bus 1 scbus1 target 6 lun 0
Nov 20 14:13:40 r730-04 kernel: da3: Fixed Direct Access SPC-4 SCSI device
Nov 20 14:13:40 r730-04 kernel: da3: Serial Number 134106400102
Nov 20 14:13:40 r730-04 kernel: da3: 150.000MB/s transfers
Nov 20 14:13:40 r730-04 kernel: da3: 30533MB (62533296 512 byte sectors)
Nov 20 14:13:40 r730-04 kernel: GEOM_MIRROR: Device swap already configured.
Nov 20 14:13:40 r730-04 syslogd: last message repeated 1 times
Those swap-related messages are probably due to the drives already being configured:
root@r730-04:~ # gpart show da2 da3
=> 40 62533216 da2 GPT (30G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 4194304 2 freebsd-swap (2.0G)
4196352 58335232 3 freebsd-zfs (28G)
62531584 1672 - free - (836K)
=> 34 62533229 da3 GPT (30G)
34 6 - free - (3.0K)
40 16777216 1 freebsd-zfs (8.0G)
16777256 2008 - free - (1.0M)
16779264 20971520 2 freebsd-zfs (10G)
37750784 20971520 3 freebsd-zfs (10G)
58722304 3810959 - free - (1.8G)
root@r730-04:~ #
That theory is supported by this output:
root@r730-04:~ # zpool import
pool: UNUSED2
id: 9422690791977013756
state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices. The
fault tolerance of the pool may be compromised if imported.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:
        UNUSED2                         DEGRADED
          mirror-0                      DEGRADED
            da3p2                       ONLINE
            diskid/DISK-133887400683p2  UNAVAIL  cannot open
pool: UNUSED1
id: 16584619994779156808
state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices. The
fault tolerance of the pool may be compromised if imported.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:
        UNUSED1                         DEGRADED
          mirror-0                      DEGRADED
            da3p3                       ONLINE
            diskid/DISK-133887400683p3  UNAVAIL  cannot open
pool: zroot
id: 9216699319623116443
state: ONLINE
status: Some supported features are not enabled on the pool.
(Note that they may be intentionally disabled if the
'compatibility' property is set.)
action: The pool can be imported using its name or numeric identifier, though
some features will not be available without an explicit 'zpool upgrade'.
config:
        zroot       ONLINE
          da2p3     ONLINE
Yes, these drives were in use when removed from the previous host.
Repartitioning
I used Duplicating partitions to a larger drive #ZFS #FreeBSD as a basis for partitioning these drives to match the layout of the existing, larger drives.
The following demonstrates how gpart wants to save your data, yet I stumble forward, ignoring the advice.
root@r730-04:~ # gpart destroy da2
gpart: Device busy
root@r730-04:~ # gpart destroy -F da2
da2 destroyed
root@r730-04:~ # gpart destroy da3
gpart: Device busy
root@r730-04:~ # gpart destroy -F da3
da3 destroyed
root@r730-04:~ #
I partitioned those two drives; see gpart backup | gpart restore does not always give you exact replication for how I did that.
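For reference, the layout shown later under destination can be produced with something along these lines; this is my reconstruction, not a transcript of the exact commands I ran:

gpart create -s gpt da2
gpart add -t efi -s 260m da2
gpart add -t freebsd-swap -s 8g da2
gpart add -t freebsd-zfs -a 4k da2   # the rest of the drive, 4k-aligned
# and the same again for da3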
Next, I configured the bootcode on da2 and da3 using the method shown in Moving a zpool to larger drives #ZFS #FreeBSD.
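That post has the details; from memory, getting the EFI loader onto the first partition goes roughly like this (the mount point and newfs_msdos parameters are my recollection, not a transcript):

newfs_msdos -F 32 -c 1 /dev/da2p1
mount -t msdosfs /dev/da2p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.efi
umount /mnt
# and again for da3p1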
source
This is the existing set of drives this host is booting from.
root@r730-04:~ # zpool status zroot
pool: zroot
state: ONLINE
scan: resilvered 3.74M in 00:00:01 with 0 errors on Thu Nov 20 14:13:12 2025
config:
        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            da0p3   ONLINE       0     0     0
            da1p3   ONLINE       0     0     0
errors: No known data errors
root@r730-04:~ # gpart show da0 da1
=> 34 1953525101 da0 GPT (932G)
34 6 - free - (3.0K)
40 532480 1 efi (260M)
532520 2008 - free - (1.0M)
534528 16777216 2 freebsd-swap (8.0G)
17311744 1936213384 3 freebsd-zfs (923G)
1953525128 7 - free - (3.5K)
=> 34 1953525101 da1 GPT (932G)
34 6 - free - (3.0K)
40 532480 1 efi (260M)
532520 2008 - free - (1.0M)
534528 16777216 2 freebsd-swap (8.0G)
17311744 1936213384 3 freebsd-zfs (923G)
1953525128 7 - free - (3.5K)
root@r730-04:~ #
destination
These are the drives I want to move zroot onto:
root@r730-04:~ # gpart show da2 da3
=> 34 62533229 da2 GPT (30G)
34 6 - free - (3.0K)
40 532480 1 efi (260M)
532520 16777216 2 freebsd-swap (8.0G)
17309736 45223520 3 freebsd-zfs (22G)
62533256 7 - free - (3.5K)
=> 34 62533229 da3 GPT (30G)
34 6 - free - (3.0K)
40 532480 1 efi (260M)
532520 16777216 2 freebsd-swap (8.0G)
17309736 45223520 3 freebsd-zfs (22G)
62533256 7 - free - (3.5K)
The backup
At this point, I made a backup of my boot drives, and blogged about it.
I did that for two reasons:
- So I know I can still boot this host after I muck about with the zroot zpool
- It’s something I’ve long wanted to confirm can be done.
The change
The first step: add in the new mirror:
dvl@r730-04:~ $ zpool add zroot mirror /dev/da2p3 /dev/da3p3
cannot use '/dev/da2p3': must be a block device or regular file
And that gets shut down quickly.
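When the investigation resumes, I expect to start with the basics and confirm the partitions really are there as the kernel sees them:

ls -l /dev/da2p3 /dev/da3p3
gpart show da2 da3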