Yesterday I managed to move a zroot zpool from larger drives to smaller drives using snapshot, send, recv.
Today, I’m doing the same thing. So what’s different this time?
- using serial numbers in some partition labels
- moving to SATADOM
This really is a dry-run for when I do this on my production host, r730-01 – apart from the whole moving-to-smaller-drives part.
host ssh keys
NOTE: not shown in this post are messages like this:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:LWF/RpMvu/76V0sc1YlaXG8acZtsX57bF8s7estyjio.
Please contact your system administrator.
Add correct host key in /Users/dvl/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/dvl/.ssh/known_hosts:770
Host key for 10.55.0.215 has changed and you have requested strict checking.
Host key verification failed.
When swapping between my own zroots and mfsbsd, I kept deleting entries from my ~/.ssh/known_hosts.
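Rather than editing that file by hand each time, ssh-keygen can drop the stale entry for me. A minimal sketch, assuming the host is still at 10.55.0.215:

# remove any stale host keys for this address from ~/.ssh/known_hosts
ssh-keygen -R 10.55.0.215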
The existing zpool
You will recognize this zpool name from yesterday’s work.
dvl@r730-04:~ $ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot_old 220G 927M 219G - - 0% 0% 1.00x ONLINE -
dvl@r730-04:~ $ zpool status
pool: zroot_old
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
zroot_old ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
da0p3 ONLINE 0 0 0
da1p3 ONLINE 0 0 0
errors: No known data errors
dvl@r730-04:~ $ gpart show da0 da1
=> 40 468862048 da0 GPT (224G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 4194304 2 freebsd-swap (2.0G)
4196352 464664576 3 freebsd-zfs (222G)
468860928 1160 - free - (580K)
=> 40 468862048 da1 GPT (224G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 4194304 2 freebsd-swap (2.0G)
4196352 464664576 3 freebsd-zfs (222G)
468860928 1160 - free - (580K)
The destination drives
These are the SATADOM drives I’m aiming for:
ada0 at ahcich10 bus 0 scbus13 target 0 lun 0
ada0: ACS-2 ATA SATA 3.x device
ada0: Serial Number 20170718AA0000185556
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 1024bytes)
ada0: Command Queueing enabled
ada0: 118288MB (242255664 512 byte sectors)
ada1 at ahcich11 bus 0 scbus14 target 0 lun 0
ada1: ACS-2 ATA SATA 3.x device
ada1: Serial Number 20170719AA1178164201
ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 1024bytes)
ada1: Command Queueing enabled
ada1: 118288MB (242255664 512 byte sectors)
They are sold as 128G devices. Photos are in r730-01: storage plan and smartctl here.
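If you want to grab those serial numbers from the command line instead of scrolling through dmesg, diskinfo can do it. A sketch (not part of the original session); -s prints the disk ident, which is usually the serial number:

# print the disk ident (usually the serial number) for each SATADOM
diskinfo -s /dev/ada0
diskinfo -s /dev/ada1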
Preparing the destination drives
I’m following yesterday’s work:
root@r730-04:~ # gpart create -s gpt /dev/ada0
ada0 created
root@r730-04:~ # gpart create -s gpt /dev/ada1
ada1 created
root@r730-04:~ # gpart add -a 1M -s 200M -t efi -l efi0_20170718AA0000185556 /dev/ada0
ada0p1 added
root@r730-04:~ # gpart add -a 1M -s 200M -t efi -l efi1_20170719AA1178164201 /dev/ada1
ada1p1 added
root@r730-04:~ # newfs_msdos /dev/gpt/efi0_20170718AA0000185556
/dev/gpt/efi0_20170718AA0000185556: 409360 sectors in 25585 FAT16 clusters (8192 bytes/cluster)
BytesPerSec=512 SecPerClust=16 ResSectors=1 FATs=2 RootDirEnts=512 Media=0xf0 FATsecs=100 SecPerTrack=63 Heads=16 HiddenSecs=0 HugeSectors=409600
root@r730-04:~ # newfs_msdos /dev/gpt/efi1_20170719AA1178164201
/dev/gpt/efi1_20170719AA1178164201: 409360 sectors in 25585 FAT16 clusters (8192 bytes/cluster)
BytesPerSec=512 SecPerClust=16 ResSectors=1 FATs=2 RootDirEnts=512 Media=0xf0 FATsecs=100 SecPerTrack=63 Heads=16 HiddenSecs=0 HugeSectors=409600
root@r730-04:~ #
root@r730-04:~ # gpart add -a 1m -s 8G -t freebsd-swap -l swap0 /dev/ada0
ada0p2 added
root@r730-04:~ # gpart add -a 1m -s 8G -t freebsd-swap -l swap1 /dev/ada1
ada1p2 added
root@r730-04:~ # gpart add -t freebsd-zfs -l zfs0_20170718AA0000185556 /dev/ada0
ada0p3 added
root@r730-04:~ # gpart add -t freebsd-zfs -l zfs1_20170719AA1178164201 /dev/ada1
ada1p3 added
This is what the destination drives look like now:
root@r730-04:~ # gpart show ada0 ada1
=> 40 242255584 ada0 GPT (116G)
40 2008 - free - (1.0M)
2048 409600 1 efi (200M)
411648 16777216 2 freebsd-swap (8.0G)
17188864 225066760 3 freebsd-zfs (107G)
=> 40 242255584 ada1 GPT (116G)
40 2008 - free - (1.0M)
2048 409600 1 efi (200M)
411648 16777216 2 freebsd-swap (8.0G)
17188864 225066760 3 freebsd-zfs (107G)
This is what they look like with labels showing:
root@r730-04:~ # gpart show -l ada0 ada1
=> 40 242255584 ada0 GPT (116G)
40 2008 - free - (1.0M)
2048 409600 1 efi0_20170718AA0000185556 (200M)
411648 16777216 2 swap0 (8.0G)
17188864 225066760 3 zfs0_20170718AA0000185556 (107G)
=> 40 242255584 ada1 GPT (116G)
40 2008 - free - (1.0M)
2048 409600 1 efi1_20170719AA1178164201 (200M)
411648 16777216 2 swap1 (8.0G)
17188864 225066760 3 zfs1_20170719AA1178164201 (107G)
New zpool creation
Here is the command, for easy copy/paste. A side benefit of using labels: the command is device-name independent.
zpool create -f \
  -o altroot=/altroot \
  -o cachefile=/tmp/zpool.cache \
  -O mountpoint=none \
  -O atime=off \
  -O compression=zstd \
  zroot_n mirror /dev/gpt/zfs0_20170718AA0000185556 /dev/gpt/zfs1_20170719AA1178164201
This is what I ran:
root@r730-04:~ # zpool create -f \
> -o altroot=/altroot \
> -o cachefile=/tmp/zpool.cache \
> -O mountpoint=none \
> -O atime=off \
> -O compression=zstd \
> zroot_n mirror /dev/gpt/zfs0_20170718AA0000185556 /dev/gpt/zfs1_20170719AA1178164201
root@r730-04:~ # zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot_n 107G 600K 107G - - 0% 0% 1.00x ONLINE /altroot
zroot_old 220G 928M 219G - - 0% 0% 1.00x ONLINE -
root@r730-04:~ # zpool status zroot_n
pool: zroot_n
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
zroot_n ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gpt/zfs0_20170718AA0000185556 ONLINE 0 0 0
gpt/zfs1_20170719AA1178164201 ONLINE 0 0 0
errors: No known data errors
root@r730-04:~ #
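Before sending anything, it can't hurt to confirm the new pool picked up a sensible sector alignment. On recent OpenZFS, ashift is exposed as a pool property, so a quick check looks something like this (a sketch, not from the session above):

# confirm the vdev allocation shift; 12 means 4 KiB alignment
zpool get ashift zroot_n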
snapshot and replication
This is where the destination drives receive the data from the currently-in-use drives:
root@r730-04:~ # zfs snapshot -r zroot_old@2delete-later-full2
cannot create snapshot 'zroot_old@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/usr@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/usr/ports@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/usr/src@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/home@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/home/dvl@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/tmp@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/ROOT@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/ROOT/default@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/ROOT/clean@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/var@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/var/log@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/var/audit@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/var/tmp@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/var/mail@2delete-later-full2': dataset already exists
cannot create snapshot 'zroot_old/var/crash@2delete-later-full2': dataset already exists
no snapshots were created
root@r730-04:~ #
Oh yes, these drives were used yesterday. Let’s delete those old snapshots so you get the full picture. First, a dry run:
root@r730-04:~ # zfs destroy -nrv zroot_old@2delete-later-full2
would destroy zroot_old@2delete-later-full2
would destroy zroot_old/usr@2delete-later-full2
would destroy zroot_old/usr/ports@2delete-later-full2
would destroy zroot_old/usr/src@2delete-later-full2
would destroy zroot_old/home@2delete-later-full2
would destroy zroot_old/home/dvl@2delete-later-full2
would destroy zroot_old/tmp@2delete-later-full2
would destroy zroot_old/ROOT@2delete-later-full2
would destroy zroot_old/ROOT/default@2delete-later-full2
would destroy zroot_old/ROOT/clean@2delete-later-full2
would destroy zroot_old/var@2delete-later-full2
would destroy zroot_old/var/log@2delete-later-full2
would destroy zroot_old/var/audit@2delete-later-full2
would destroy zroot_old/var/tmp@2delete-later-full2
would destroy zroot_old/var/mail@2delete-later-full2
would destroy zroot_old/var/crash@2delete-later-full2
would reclaim 316K
That looks OK. The next command omits the -n parameter.
root@r730-04:~ # zfs destroy -rv zroot_old@2delete-later-full2
will destroy zroot_old@2delete-later-full2
will destroy zroot_old/usr@2delete-later-full2
will destroy zroot_old/usr/ports@2delete-later-full2
will destroy zroot_old/usr/src@2delete-later-full2
will destroy zroot_old/home@2delete-later-full2
will destroy zroot_old/home/dvl@2delete-later-full2
will destroy zroot_old/tmp@2delete-later-full2
will destroy zroot_old/ROOT@2delete-later-full2
will destroy zroot_old/ROOT/default@2delete-later-full2
will destroy zroot_old/ROOT/clean@2delete-later-full2
will destroy zroot_old/var@2delete-later-full2
will destroy zroot_old/var/log@2delete-later-full2
will destroy zroot_old/var/audit@2delete-later-full2
will destroy zroot_old/var/tmp@2delete-later-full2
will destroy zroot_old/var/mail@2delete-later-full2
will destroy zroot_old/var/crash@2delete-later-full2
will reclaim 316K
And now for the regularly scheduled command:
root@r730-04:~ # zfs snapshot -r zroot_old@2delete-later-full2
root@r730-04:~ #
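If you want to be sure the recursive snapshot landed on every dataset before sending, a quick listing does it (not something I ran here):

# list the snapshots that will feed the send
zfs list -t snapshot -r zroot_old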
The send | receive:
root@r730-04:~ # zfs send -Rv zroot_old@2delete-later-full2 | zfs receive -uFv zroot_n
full send of zroot_old@clean estimated size is 43.1K
send from @clean to zroot_old@2delete-later-full2 estimated size is 624B
full send of zroot_old/usr@clean estimated size is 43.1K
send from @clean to zroot_old/usr@2delete-later-full2 estimated size is 624B
full send of zroot_old/usr/ports@clean estimated size is 43.1K
send from @clean to zroot_old/usr/ports@2delete-later-full2 estimated size is 624B
full send of zroot_old/usr/src@clean estimated size is 43.1K
send from @clean to zroot_old/usr/src@2delete-later-full2 estimated size is 624B
full send of zroot_old/home@clean estimated size is 43.1K
send from @clean to zroot_old/home@2delete-later-full2 estimated size is 624B
full send of zroot_old/home/dvl@clean estimated size is 59.1K
send from @clean to zroot_old/home/dvl@2delete-later-full2 estimated size is 34.1K
full send of zroot_old/tmp@clean estimated size is 56.1K
send from @clean to zroot_old/tmp@2delete-later-full2 estimated size is 48.1K
full send of zroot_old/ROOT@clean estimated size is 43.1K
send from @clean to zroot_old/ROOT@2delete-later-full2 estimated size is 624B
full send of zroot_old/ROOT/default@clean estimated size is 1.74G
send from @clean to zroot_old/ROOT/default@2025-11-22-21:22:10-0 estimated size is 624B
send from @2025-11-22-21:22:10-0 to zroot_old/ROOT/default@2delete-later-full2 estimated size is 242K
full send of zroot_old/ROOT/clean@2delete-later-full2 estimated size is 1.74G
full send of zroot_old/var@clean estimated size is 43.1K
send from @clean to zroot_old/var@2delete-later-full2 estimated size is 624B
full send of zroot_old/var/log@clean estimated size is 278K
send from @clean to zroot_old/var/log@2delete-later-full2 estimated size is 431K
full send of zroot_old/var/audit@clean estimated size is 43.1K
send from @clean to zroot_old/var/audit@2delete-later-full2 estimated size is 624B
full send of zroot_old/var/tmp@clean estimated size is 43.1K
send from @clean to zroot_old/var/tmp@2delete-later-full2 estimated size is 4.61K
full send of zroot_old/var/mail@clean estimated size is 43.1K
send from @clean to zroot_old/var/mail@2delete-later-full2 estimated size is 624B
full send of zroot_old/var/crash@clean estimated size is 43.6K
send from @clean to zroot_old/var/crash@2delete-later-full2 estimated size is 624B
total estimated size is 3.47G
TIME SENT SNAPSHOT zroot_old@clean
TIME SENT SNAPSHOT zroot_old@2delete-later-full2
receiving full stream of zroot_old@clean into zroot_n@clean
TIME SENT SNAPSHOT zroot_old/usr@clean
received 46.0K stream in 0.03 seconds (1.77M/sec)
receiving incremental stream of zroot_old@2delete-later-full2 into zroot_n@2delete-later-full2
received 312B stream in 0.01 seconds (30.0K/sec)
receiving full stream of zroot_old/usr@clean into zroot_n/usr@clean
TIME SENT SNAPSHOT zroot_old/usr@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/usr/ports@clean
TIME SENT SNAPSHOT zroot_old/usr/ports@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/usr/src@clean
received 46.0K stream in 0.01 seconds (3.13M/sec)
receiving incremental stream of zroot_old/usr@2delete-later-full2 into zroot_n/usr@2delete-later-full2
received 312B stream in 0.01 seconds (27.4K/sec)
receiving full stream of zroot_old/usr/ports@clean into zroot_n/usr/ports@clean
received 46.0K stream in 0.01 seconds (3.00M/sec)
receiving incremental stream of zroot_old/usr/ports@2delete-later-full2 into zroot_n/usr/ports@2delete-later-full2
received 312B stream in 0.01 seconds (22.7K/sec)
receiving full stream of zroot_old/usr/src@clean into zroot_n/usr/src@clean
TIME SENT SNAPSHOT zroot_old/usr/src@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/home@clean
TIME SENT SNAPSHOT zroot_old/home@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/home/dvl@clean
received 46.0K stream in 0.01 seconds (3.82M/sec)
receiving incremental stream of zroot_old/usr/src@2delete-later-full2 into zroot_n/usr/src@2delete-later-full2
received 312B stream in 0.01 seconds (23.1K/sec)
receiving full stream of zroot_old/home@clean into zroot_n/home@clean
received 47.5K stream in 0.01 seconds (3.24M/sec)
receiving incremental stream of zroot_old/home@2delete-later-full2 into zroot_n/home@2delete-later-full2
received 312B stream in 0.01 seconds (26.7K/sec)
receiving full stream of zroot_old/home/dvl@clean into zroot_n/home/dvl@clean
TIME SENT SNAPSHOT zroot_old/home/dvl@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/tmp@clean
received 65.4K stream in 0.01 seconds (5.13M/sec)
receiving incremental stream of zroot_old/home/dvl@2delete-later-full2 into zroot_n/home/dvl@2delete-later-full2
received 16.4K stream in 0.01 seconds (1.09M/sec)
receiving full stream of zroot_old/tmp@clean into zroot_n/tmp@clean
TIME SENT SNAPSHOT zroot_old/tmp@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/ROOT@clean
received 62.0K stream in 0.02 seconds (3.86M/sec)
receiving incremental stream of zroot_old/tmp@2delete-later-full2 into zroot_n/tmp@2delete-later-full2
received 39.4K stream in 0.01 seconds (2.96M/sec)
receiving full stream of zroot_old/ROOT@clean into zroot_n/ROOT@clean
TIME SENT SNAPSHOT zroot_old/ROOT@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/ROOT/default@clean
received 46.0K stream in 0.02 seconds (2.97M/sec)
receiving incremental stream of zroot_old/ROOT@2delete-later-full2 into zroot_n/ROOT@2delete-later-full2
received 312B stream in 0.01 seconds (26.4K/sec)
receiving full stream of zroot_old/ROOT/default@clean into zroot_n/ROOT/default@clean
15:02:58 1.05G zroot_old/ROOT/default@clean
TIME SENT SNAPSHOT zroot_old/ROOT/default@2025-11-22-21:22:10-0
TIME SENT SNAPSHOT zroot_old/ROOT/default@2delete-later-full2
15:02:59 1.75G zroot_old/ROOT/default@2delete-later-full2
15:03:00 1.75G zroot_old/ROOT/default@2delete-later-full2
received 1.75G stream in 3.82 seconds (469M/sec)
receiving incremental stream of zroot_old/ROOT/default@2025-11-22-21:22:10-0 into zroot_n/ROOT/default@2025-11-22-21:22:10-0
received 312B stream in 0.01 seconds (25.3K/sec)
receiving incremental stream of zroot_old/ROOT/default@2delete-later-full2 into zroot_n/ROOT/default@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/ROOT/clean@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/var@clean
TIME SENT SNAPSHOT zroot_old/var@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/var/log@clean
received 314K stream in 0.02 seconds (17.0M/sec)
found clone origin zroot_n/ROOT/default@2025-11-22-21:22:10-0
receiving incremental stream of zroot_old/ROOT/clean@2delete-later-full2 into zroot_n/ROOT/clean@2delete-later-full2
received 312B stream in 0.03 seconds (10.0K/sec)
receiving full stream of zroot_old/var@clean into zroot_n/var@clean
received 46.0K stream in 0.01 seconds (3.05M/sec)
receiving incremental stream of zroot_old/var@2delete-later-full2 into zroot_n/var@2delete-later-full2
received 312B stream in 0.01 seconds (27.0K/sec)
receiving full stream of zroot_old/var/log@clean into zroot_n/var/log@clean
TIME SENT SNAPSHOT zroot_old/var/log@2delete-later-full2
received 292K stream in 0.02 seconds (17.0M/sec)
receiving incremental stream of zroot_old/var/log@2delete-later-full2 into zroot_n/var/log@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/var/audit@clean
TIME SENT SNAPSHOT zroot_old/var/audit@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/var/tmp@clean
received 409K stream in 0.03 seconds (14.0M/sec)
receiving full stream of zroot_old/var/audit@clean into zroot_n/var/audit@clean
received 49.1K stream in 0.02 seconds (3.15M/sec)
receiving incremental stream of zroot_old/var/audit@2delete-later-full2 into zroot_n/var/audit@2delete-later-full2
received 312B stream in 0.01 seconds (22.6K/sec)
receiving full stream of zroot_old/var/tmp@clean into zroot_n/var/tmp@clean
TIME SENT SNAPSHOT zroot_old/var/tmp@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/var/mail@clean
TIME SENT SNAPSHOT zroot_old/var/mail@2delete-later-full2
TIME SENT SNAPSHOT zroot_old/var/crash@clean
received 47.5K stream in 0.02 seconds (2.97M/sec)
receiving incremental stream of zroot_old/var/tmp@2delete-later-full2 into zroot_n/var/tmp@2delete-later-full2
received 312B stream in 0.01 seconds (22.5K/sec)
receiving full stream of zroot_old/var/mail@clean into zroot_n/var/mail@clean
received 47.0K stream in 0.02 seconds (3.02M/sec)
receiving incremental stream of zroot_old/var/mail@2delete-later-full2 into zroot_n/var/mail@2delete-later-full2
received 312B stream in 0.01 seconds (22.9K/sec)
receiving full stream of zroot_old/var/crash@clean into zroot_n/var/crash@clean
TIME SENT SNAPSHOT zroot_old/var/crash@2delete-later-full2
received 47.5K stream in 0.02 seconds (3.04M/sec)
receiving incremental stream of zroot_old/var/crash@2delete-later-full2 into zroot_n/var/crash@2delete-later-full2
received 312B stream in 0.01 seconds (22.7K/sec)
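For the production run on r730-01, where the stream will be far larger than 3.47G, a resumable receive may be worth considering. A sketch under that assumption, using the same snapshot name:

# -s saves receive state so an interrupted stream can be resumed later
zfs send -Rv zroot_old@2delete-later-full2 | zfs receive -s -uFv zroot_n
# after an interruption, the token lives on the partially received dataset:
#   zfs get -H -o value receive_resume_token <dataset>
#   zfs send -t <token> | zfs receive -s -uFv zroot_n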
See how very similar the allocated amounts are:
root@r730-04:~ # zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot_n 107G 926M 106G - - 0% 0% 1.00x ONLINE /altroot
zroot_old 220G 927M 219G - - 0% 0% 1.00x ONLINE -
Notice how the compression property was also replicated, despite zstd being specified at zpool creation:
root@r730-04:~ # zfs get compression zroot_n
NAME PROPERTY VALUE SOURCE
zroot_n compression lz4 received
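That is zfs send -R carrying the dataset properties along with the data. If I want the new pool on zstd anyway, I can either exclude the property at receive time or set it afterwards; only newly written blocks pick up the new algorithm. A sketch, not what I ran:

# option 1: ignore the sender's compression property during the receive
#   zfs send -Rv zroot_old@2delete-later-full2 | zfs receive -uFv -x compression zroot_n
# option 2: override it after the fact
zfs set compression=zstd zroot_n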
Mountpoint and bootfs
root@r730-04:~ # zfs set mountpoint=/ zroot_n/ROOT/default
root@r730-04:~ # zpool set bootfs=zroot_n/ROOT/default zroot_n
root@r730-04:~ #
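A quick double-check of both settings before rebooting (a sketch, not from the original session):

# verify the boot dataset and its mountpoint
zpool get bootfs zroot_n
zfs get mountpoint zroot_n/ROOT/default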
Bootloader for UEFI
I don’t know why the instructions had this mount command, but I’m blindly following along (bad advice).
root@r730-04:~ # zfs mount -a
root@r730-04:~ # mkdir -p /mnt/boot/efi
root@r730-04:~ # mount -t msdosfs /dev/gpt/efi0_20170718AA0000185556 /mnt/boot/efi
root@r730-04:~ # mkdir -p /mnt/boot/efi/EFI/BOOT
root@r730-04:~ # cp /boot/loader.efi /mnt/boot/efi/EFI/BOOT/BOOTX64.EFI
root@r730-04:~ # umount /mnt/boot/efi
root@r730-04:~ # mount -t msdosfs /dev/gpt/efi1_20170719AA1178164201 /mnt/boot/efi
root@r730-04:~ # mkdir -p /mnt/boot/efi/EFI/BOOT
root@r730-04:~ # cp /boot/loader.efi /mnt/boot/efi/EFI/BOOT/BOOTX64.EFI
root@r730-04:~ # umount /mnt/boot/efi
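Before rebooting, it is cheap to re-mount one of the ESPs and confirm the loader landed on the firmware's fallback path. A sketch, reusing the same mount point:

# confirm the fallback loader exists on the first ESP
mount -t msdosfs /dev/gpt/efi0_20170718AA0000185556 /mnt/boot/efi
ls -l /mnt/boot/efi/EFI/BOOT/BOOTX64.EFI
umount /mnt/boot/efi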
Rename the pool
It was here that I booted into mfsBSD. I deviated slightly from the instructions by adding -f to the first import.
root@mfsbsd:~ # zpool import -f zroot_n zroot
root@mfsbsd:~ # zpool export zroot
root@mfsbsd:~ # shutdown -r now
Getting rid of the other zroot
I couldn’t get the system to boot from the SATADOMs. It was always booting into the 30GB drives I set up during Moving a zpool to smaller drives with UEFI #ZFS #FreeBSD.
I booted into mfsBSD (on the thumbdrive) and renamed that zpool, like this:
Here are the available zpools. Notice how the drives are showing up as mfisyspd devices. I should have dropped into the loader prompt and issued set hw.mfi.mrsas_enable="YES" (see the sketch after the listing below).
root@mfsbsd:~ # zpool import
pool: zroot
id: 244204476102814311
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
zroot ONLINE
mirror-0 ONLINE
mfisyspd2p3 ONLINE
mfisyspd3p3 ONLINE
pool: zroot_old
id: 16510343806874404582
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
zroot_old ONLINE
mirror-0 ONLINE
gptid/2743db32-c7e7-11f0-9fcc-ecf4bbe70138 ONLINE
mfisyspd1p3 ONLINE
pool: zroot
id: 9688445990579055578
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
zroot ONLINE
mirror-0 ONLINE
gpt/zfs0_20170718AA0000185556 ONLINE
gpt/zfs1_20170719AA1178164201 ONLINE
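For reference, this is roughly what that tunable looks like; set it once at the loader prompt, or persist it in /boot/loader.conf so the controller attaches via mrsas(4) and the disks show up as da devices. A sketch, not something I did during this session:

# one-off, at the mfsBSD loader prompt:
#   set hw.mfi.mrsas_enable="YES"
#   boot
# persistent, from a booted system:
echo 'hw.mfi.mrsas_enable="YES"' >> /boot/loader.conf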
I can see two pools named zroot. I know the second of them is the one I want to boot from, because it sits on the gpt/zfs* partitions configured above.
I import the first one via ID, renaming it in the process, adjust the bootfs, and export it.
root@mfsbsd:~ # zpool import 244204476102814311 zroot_30G_drives
cannot import 'zroot': pool was previously in use from another system.
Last accessed by r730-04.int.unixathome.org (hostid=0) at Sun Nov 23 16:09:34 2025
The pool can be imported, use 'zpool import -f' to import the pool.
root@mfsbsd:~ # zpool import -f 244204476102814311 zroot_30G_drives
root@mfsbsd:~ # zpool set bootfs=zroot_30G_drives/ROOT/default zroot_30G_drives
you have mail
root@mfsbsd:~ # zpool export zroot_30G_drives
root@mfsbsd:~ # shutdown -r now
Shutdown NOW!
shutdown: [pid 3777]
root@mfsbsd:~ #
*** FINAL System shutdown message from root@mfsbsd ***
System going down IMMEDIATELY
System shutdown time has arrived
Connection to 10.55.0.215 closed by remote host.
Connection to 10.55.0.215 closed.
Getting back in
That worked; it booted from the SATADOMs:
[11:34 pro05 dvl ~] % ssh dvl@10.55.0.215
The authenticity of host '10.55.0.215 (10.55.0.215)' can't be established.
ED25519 key fingerprint is SHA256:RlezjzwPeCsDv7CFMfLjo6ie/urY91caSXIxqDmJHcg.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.55.0.215' (ED25519) to the list of known hosts.
Last login: Sun Nov 23 14:37:59 2025 from pro07.startpoint.vpn.unixathome.org
FreeBSD 14.3-RELEASE (GENERIC) releng/14.3-n271432-8c9ce319fef7

Welcome to FreeBSD!

Release Notes, Errata: https://www.FreeBSD.org/releases/
Security Advisories:   https://www.FreeBSD.org/security/
FreeBSD Handbook:      https://www.FreeBSD.org/handbook/
FreeBSD FAQ:           https://www.FreeBSD.org/faq/
Questions List:        https://www.FreeBSD.org/lists/questions/
FreeBSD Forums:        https://forums.FreeBSD.org/

Documents installed with the system are in the /usr/local/share/doc/freebsd/
directory, or can be installed later with: pkg install en-freebsd-doc
For other languages, replace "en" with a language code like de or fr.

Show the version of FreeBSD installed: freebsd-version ; uname -a
Please include that output and any error messages when posting questions.
Introduction to manual pages: man man
FreeBSD directory layout: man hier

To change this login announcement, see motd(5).
To erase a line you've written at the command prompt, use "Ctrl-U".
		-- Dru
dvl@r730-04:~ $ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 107G 927M 106G - - 0% 0% 1.00x ONLINE -
dvl@r730-04:~ $ zpool status
pool: zroot
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gpt/zfs0_20170718AA0000185556 ONLINE 0 0 0
gpt/zfs1_20170719AA1178164201 ONLINE 0 0 0
errors: No known data errors
dvl@r730-04:~ $
I declare success and this process is ready for production.
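One bit of housekeeping remains: the 2delete-later-full2 snapshots are still on the new pool (now named zroot). When they are no longer needed, cleanup would look something like this, dry run first as before:

# dry run, then the real thing
zfs destroy -nrv zroot@2delete-later-full2
zfs destroy -rv zroot@2delete-later-full2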