Creating a backup boot drive from a zroot mirror

Today, using FreeBSD 14.3, I’m going to test booting from a drive which is a mirror of the boot drives.

This is what I’m booting from:

root@r730-04:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   923G  1.72G   921G        -         -     0%     0%  1.00x    ONLINE  -
root@r730-04:~ # zpool status zroot
  pool: zroot
 state: ONLINE
  scan: resilvered 3.74M in 00:00:01 with 0 errors on Thu Nov 20 14:13:12 2025
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da0p3   ONLINE       0     0     0
	    da1p3   ONLINE       0     0     0

errors: No known data errors
root@r730-04:~ # 

I’m going to add this drive into the zpool:

Nov 20 15:25:29 r730-04 kernel: ada0 at ahcich1 bus 0 scbus3 target 0 lun 0
Nov 20 15:25:29 r730-04 kernel: ada0:  ACS-4 ATA SATA 3.x device
Nov 20 15:25:29 r730-04 kernel: ada0: Serial Number S757NS0Y700758M
Nov 20 15:25:29 r730-04 kernel: ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 512bytes)
Nov 20 15:25:29 r730-04 kernel: ada0: Command Queueing enabled
Nov 20 15:25:29 r730-04 kernel: ada0: 3815447MB (7814037168 512 byte sectors)
Nov 20 15:25:29 r730-04 kernel: ada0: quirks=0x3<4K,NCQ_TRIM_BROKEN>
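
As an aside: if the device name the new drive received isn't obvious from the kernel messages, camcontrol(8) will list every attached disk along with its device node:

camcontrol devlist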

Preparing the drive

I first copy the partitioning from one of the boot drives:

root@r730-04:~ # gpart show ada0
gpart: No such geom: ada0.
root@r730-04:~ # gpart backup da0 | gpart restore ada0
root@r730-04:~ # gpart show ada0
=>        34  7814037101  ada0  GPT  (3.6T)
          34           6        - free -  (3.0K)
          40      532480     1  efi  (260M)
      532520        2008        - free -  (1.0M)
      534528    16777216     2  freebsd-swap  (8.0G)
    17311744  1936213384     3  freebsd-zfs  (923G)
  1953525128  5860512007        - free -  (2.7T)

Then, I populated the efi partition with bootcode, as shown in Moving a zpool to larger drives #ZFS #FreeBSD.
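
I won't repeat the details here; the linked post has the exact steps I followed. A minimal sketch of one way to do it, assuming ada0p1 is the efi partition created above:

# create a FAT32 filesystem on the efi partition
newfs_msdos -F 32 -c 1 /dev/ada0p1
# mount it and copy the FreeBSD loader to the default removable-media path
mount -t msdosfs /dev/ada0p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.efi
umount /mnt

Since gpart restore created an efi partition of the same size as the one on da0, copying the existing ESP with dd if=/dev/da0p1 of=/dev/ada0p1 should also work.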

Adding the new drive into the mirror

With this step, I am adding a third disk to the mirror. The p3 partition will be identical on all drives. Yes, some of you might object to that statement; for what we're doing here, the drives are identical.
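
For reference, the general form of the command used next is shown below. Note that zpool attach (not zpool add, which would create a new top-level vdev) is what extends an existing mirror:

# new-device becomes a mirror of the vdev that already contains existing-device
zpool attach <pool> <existing-device> <new-device>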

root@r730-04:~ # zpool attach zroot da1p3 ada0p3
root@r730-04:~ # zpool status zroot
  pool: zroot
 state: ONLINE
  scan: resilvered 1.74G in 00:00:05 with 0 errors on Thu Nov 20 16:05:29 2025
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da0p3   ONLINE       0     0     0
	    da1p3   ONLINE       0     0     0
	    ada0p3  ONLINE       0     0     0

errors: No known data errors
root@r730-04:~ # 

Because there is so little data, it took very little time for the new drive to become fully resilvered.
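
With more data on the pool, the resilver would take correspondingly longer. Rather than polling zpool status, one way to wait for it is zpool wait, which blocks until the named activity finishes:

# block until any in-progress resilver on zroot completes
zpool wait -t resilver zroot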

Removing the extra drive from the mirror

I found zpool-detach(8), which mentioned zpool-offline(8), so I tried offlining the drive first, just to see how that works.
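
For reference, these are the commands in play; zpool offline prints nothing on success, which is why only its effect shows up in the status output below:

# take the device offline (reversible; the pool goes DEGRADED)
zpool offline zroot ada0p3
# bring it back online, if you change your mind
zpool online zroot ada0p3
# permanently remove it from the mirror
zpool detach zroot ada0p3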

root@r730-04:~ # zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Online the device using 'zpool online' or replace the device with
	'zpool replace'.
  scan: resilvered 1.74G in 00:00:05 with 0 errors on Thu Nov 20 16:05:29 2025
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       DEGRADED     0     0     0
	  mirror-0  DEGRADED     0     0     0
	    da0p3   ONLINE       0     0     0
	    da1p3   ONLINE       0     0     0
	    ada0p3  OFFLINE      0     0     0

errors: No known data errors
root@r730-04:~ # zpool detach zroot ada0p3
root@r730-04:~ # zpool status zroot
  pool: zroot
 state: ONLINE
  scan: resilvered 1.74G in 00:00:05 with 0 errors on Thu Nov 20 16:05:29 2025
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da0p3   ONLINE       0     0     0
	    da1p3   ONLINE       0     0     0

errors: No known data errors
root@r730-04:~ # 

Testing the backup drive

I will refer to this extra drive as the backup drive.

Next, I powered down the system, removed the two boot drives, and placed the backup drive into a front drive cage.

NOTE: I don’t have enough drive cages for this Dell R730. In the previous steps, the backup drive was attached to a hot-swap drive cage in a PCIe slot; that is why it had the device name ada0. I could not get the drive to boot from that location. After giving up on that, I used a drive cage which had another drive in it, one that was not involved in this test.

With the backup drive in place, I booted. I found this:

dvl@r730-04:~ $ zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
	the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-2Q
  scan: resilvered 1.74G in 00:00:05 with 0 errors on Thu Nov 20 16:05:29 2025
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       DEGRADED     0     0     0
	  mirror-0  DEGRADED     0     0     0
	    da0p3   FAULTED      0     0     0  corrupted data
	    da1p3   UNAVAIL      0     0     0  cannot open
	    da0p3   ONLINE       0     0     0

errors: No known data errors
dvl@r730-04:~ $ uptimr
-sh: uptimr: not found
dvl@r730-04:~ $ uptime
 5:27PM  up 6 mins, 1 user, load averages: 0.00, 0.00, 0.00
dvl@r730-04:~ $ 

Success. This is a valid method for creating a backup of your boot drives.

Remember to test them.
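
To recap, with the device names used in this post, the whole procedure boils down to something like this:

# copy the partitioning from an existing boot drive to the backup drive
gpart backup da0 | gpart restore ada0
# ...populate the efi partition with bootcode, as described above...
# add the backup drive to the boot mirror and let it resilver
zpool attach zroot da1p3 ada0p3
# once the resilver completes, split it back off
zpool detach zroot ada0p3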
