Test run – moving to a smaller zpool (zroot) using zfs snapshot and send | recv

Today, I’m going to try a test run of moving a zroot into a smaller zpool. Over the past few posts, I’ve tested moving to smaller zpools using zpool remove. That’s not what I want to do with this zpool, mostly because it does not leave you with a functioning original zpool.

I’m following https://www.hagen-bauer.de/2025/08/zfs-migration-to-smaller-disk.html and I particularly like their use of labels on each partition.

In this post:

  • FreeBSD 14.3
  • mfsbsd-14.2-RELEASE-amd64

The existing zpool

I’m starting with this test zroot zpool:

root@r730-04:/home/dvl # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   220G   925M   219G        -         -     0%     0%  1.00x    ONLINE  -
root@r730-04:/home/dvl # zpool status 
  pool: zroot
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da0p3   ONLINE       0     0     0
	    da1p3   ONLINE       0     0     0

errors: No known data errors
root@r730-04:/home/dvl # gpart show da0 da1
=>       40  468862048  da0  GPT  (224G)
         40       1024    1  freebsd-boot  (512K)
       1064        984       - free -  (492K)
       2048    4194304    2  freebsd-swap  (2.0G)
    4196352  464664576    3  freebsd-zfs  (222G)
  468860928       1160       - free -  (580K)

=>       40  468862048  da1  GPT  (224G)
         40       1024    1  freebsd-boot  (512K)
       1064        984       - free -  (492K)
       2048    4194304    2  freebsd-swap  (2.0G)
    4196352  464664576    3  freebsd-zfs  (222G)
  468860928       1160       - free -  (580K)

root@r730-04:/home/dvl # bectl list
BE      Active Mountpoint Space Created
clean   -      -          8K    2025-11-22 21:22
default NR     /          922M  2025-11-22 21:06

root@r730-04:/home/dvl # zfs list -t snapshot zroot
NAME          USED  AVAIL  REFER  MOUNTPOINT
zroot@clean     0B      -    96K  -
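
One caveat about that listing: `zfs list -t snapshot zroot` shows snapshots of the top-level dataset only. Adding -r walks every child dataset, which is what the recursive send later in this post will actually replicate. A minimal sketch, printed as a dry run rather than executed:

```shell
#!/bin/sh
# The -r flag makes the listing recursive, covering every child dataset's
# snapshots, not just the pool's top-level dataset.
# Dry run: the command is printed here, not executed.
cmd="zfs list -r -t snapshot zroot"
echo "$cmd"
```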

The destination drives

These are the drives I’m aiming for:

root@r730-04:/home/dvl # diskinfo -v da2
da2
	512         	# sectorsize
	32017047552 	# mediasize in bytes (30G)
	62533296    	# mediasize in sectors
	0           	# stripesize
	0           	# stripeoffset
	3892        	# Cylinders according to firmware.
	255         	# Heads according to firmware.
	63          	# Sectors according to firmware.
	ATA SanDisk SDSSDRC0	# Disk descr.
	134106400102	# Disk ident.
	mrsas0      	# Attachment
	Yes         	# TRIM/UNMAP support
	0           	# Rotation rate in RPM
	Not_Zoned   	# Zone Mode

root@r730-04:/home/dvl # diskinfo -v da3
da3
	512         	# sectorsize
	32017047552 	# mediasize in bytes (30G)
	62533296    	# mediasize in sectors
	0           	# stripesize
	0           	# stripeoffset
	3892        	# Cylinders according to firmware.
	255         	# Heads according to firmware.
	63          	# Sectors according to firmware.
	ATA SanDisk SDSSDRC0	# Disk descr.
	134106402147	# Disk ident.
	mrsas0      	# Attachment
	Yes         	# TRIM/UNMAP support
	0           	# Rotation rate in RPM
	Not_Zoned   	# Zone Mode

Preparing the destination drives

As I said, I like the use of labels, although for the next run I think I’ll also include the drive serial numbers in the labels.

root@r730-04:/home/dvl # gpart create -s gpt /dev/da2
da2 created
root@r730-04:/home/dvl # gpart create -s gpt /dev/da3
da3 created

root@r730-04:/home/dvl # gpart add -a 1M -s 200M -t efi -l efi0 /dev/da2
da2p1 added
root@r730-04:/home/dvl # gpart add -a 1M -s 200M -t efi -l efi1 /dev/da3
da3p1 added

root@r730-04:/home/dvl # newfs_msdos /dev/gpt/efi0
/dev/gpt/efi0: 409360 sectors in 25585 FAT16 clusters (8192 bytes/cluster)
BytesPerSec=512 SecPerClust=16 ResSectors=1 FATs=2 RootDirEnts=512 Media=0xf0 FATsecs=100 SecPerTrack=63 Heads=255 HiddenSecs=0 HugeSectors=409600
root@r730-04:/home/dvl # newfs_msdos /dev/gpt/efi1
/dev/gpt/efi1: 409360 sectors in 25585 FAT16 clusters (8192 bytes/cluster)
BytesPerSec=512 SecPerClust=16 ResSectors=1 FATs=2 RootDirEnts=512 Media=0xf0 FATsecs=100 SecPerTrack=63 Heads=255 HiddenSecs=0 HugeSectors=409600


root@r730-04:/home/dvl # gpart add -a 1m -s 8G -t freebsd-swap -l swap0 /dev/da2
da2p2 added
root@r730-04:/home/dvl # gpart add -a 1m -s 8G -t freebsd-swap -l swap1 /dev/da3
da3p2 added


root@r730-04:/home/dvl # gpart add -t freebsd-zfs -l zfs0 /dev/da2
da2p3 added
root@r730-04:/home/dvl # gpart add -t freebsd-zfs -l zfs1 /dev/da3
da3p3 added
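
For that next run, the serial-number idea could look something like this: a loop that embeds each drive’s serial in its GPT labels, so a dead mirror member can be matched to physical hardware at a glance. On the live system the serial would come from `diskinfo -s`; SERIAL0 and SERIAL1 below are placeholders, and the commands are printed as a dry run rather than executed. A sketch, not a tested procedure:

```shell
#!/bin/sh
# Sketch: build the gpart commands for each destination disk, embedding a
# serial number in every label. The serials here are placeholders; on the
# live system you would use serial=$(diskinfo -s "$disk") instead.
# Dry run: commands are printed, not executed.
cmds=""
i=0
for disk in da2 da3; do
    serial="SERIAL$i"     # placeholder; live system: $(diskinfo -s "$disk")
    cmds="$cmds
gpart create -s gpt /dev/$disk
gpart add -a 1m -s 200M -t efi -l efi$i-$serial /dev/$disk
gpart add -a 1m -s 8G -t freebsd-swap -l swap$i-$serial /dev/$disk
gpart add -a 1m -t freebsd-zfs -l zfs$i-$serial /dev/$disk"
    i=$((i + 1))
done
printf '%s\n' "$cmds"
```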

This is what the destination drives look like now:

root@r730-04:/home/dvl # gpart show da2 da3
=>      40  62533216  da2  GPT  (30G)
        40      2008       - free -  (1.0M)
      2048    409600    1  efi  (200M)
    411648  16777216    2  freebsd-swap  (8.0G)
  17188864  45344392    3  freebsd-zfs  (22G)

=>      40  62533216  da3  GPT  (30G)
        40      2008       - free -  (1.0M)
      2048    409600    1  efi  (200M)
    411648  16777216    2  freebsd-swap  (8.0G)
  17188864  45344392    3  freebsd-zfs  (22G)

root@r730-04:/home/dvl # gpart show -l da2 da3
=>      40  62533216  da2  GPT  (30G)
        40      2008       - free -  (1.0M)
      2048    409600    1  efi0  (200M)
    411648  16777216    2  swap0  (8.0G)
  17188864  45344392    3  zfs0  (22G)

=>      40  62533216  da3  GPT  (30G)
        40      2008       - free -  (1.0M)
      2048    409600    1  efi1  (200M)
    411648  16777216    2  swap1  (8.0G)
  17188864  45344392    3  zfs1  (22G)

root@r730-04:/home/dvl # 

New zpool creation

Here is the command, ready for copy/paste. A side benefit of using labels: the command is independent of device names.

zpool create -f \
  -o altroot=/altroot \
  -o cachefile=/tmp/zpool.cache \
  -O mountpoint=none \
  -O atime=off \
  -O compression=zstd \
  zroot_n mirror /dev/gpt/zfs0 /dev/gpt/zfs1

This is what I ran:

root@r730-04:/home/dvl # mkdir /altroot
root@r730-04:/home/dvl # zpool create -f \
>   -o altroot=/altroot \
>   -o cachefile=/tmp/zpool.cache \
>   -O mountpoint=none \
>   -O atime=off \
>   -O compression=zstd \
>   zroot_n mirror /dev/gpt/zfs0 /dev/gpt/zfs1
root@r730-04:/home/dvl # zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot     220G   926M   219G        -         -     0%     0%  1.00x    ONLINE  -
zroot_n  21.5G   432K  21.5G        -         -     0%     0%  1.00x    ONLINE  /altroot
root@r730-04:/home/dvl # 

snapshot and replication

root@r730-04:/home/dvl # zfs snapshot -r zroot@2delete-later-full2
root@r730-04:/home/dvl # zfs send -Rv zroot@2delete-later-full2 | zfs receive -uFv zroot_n
full send of zroot@clean estimated size is 43.1K
send from @clean to zroot@2delete-later-full2 estimated size is 624B
full send of zroot/usr@clean estimated size is 43.1K
send from @clean to zroot/usr@2delete-later-full2 estimated size is 624B
full send of zroot/usr/ports@clean estimated size is 43.1K
send from @clean to zroot/usr/ports@2delete-later-full2 estimated size is 624B
full send of zroot/usr/src@clean estimated size is 43.1K
send from @clean to zroot/usr/src@2delete-later-full2 estimated size is 624B
full send of zroot/home@clean estimated size is 43.1K
send from @clean to zroot/home@2delete-later-full2 estimated size is 624B
full send of zroot/home/dvl@clean estimated size is 59.1K
send from @clean to zroot/home/dvl@2delete-later-full2 estimated size is 624B
full send of zroot/tmp@clean estimated size is 56.1K
send from @clean to zroot/tmp@2delete-later-full2 estimated size is 35.6K
full send of zroot/ROOT@clean estimated size is 43.1K
send from @clean to zroot/ROOT@2delete-later-full2 estimated size is 624B
full send of zroot/ROOT/default@clean estimated size is 1.74G
send from @clean to zroot/ROOT/default@2025-11-22-21:22:10-0 estimated size is 624B
send from @2025-11-22-21:22:10-0 to zroot/ROOT/default@2delete-later-full2 estimated size is 83.1K
full send of zroot/ROOT/clean@2delete-later-full2 estimated size is 1.74G
full send of zroot/var@clean estimated size is 43.1K
send from @clean to zroot/var@2delete-later-full2 estimated size is 624B
full send of zroot/var/log@clean estimated size is 278K
send from @clean to zroot/var/log@2delete-later-full2 estimated size is 29.6K
full send of zroot/var/audit@clean estimated size is 43.1K
send from @clean to zroot/var/audit@2delete-later-full2 estimated size is 624B
full send of zroot/var/tmp@clean estimated size is 43.1K
send from @clean to zroot/var/tmp@2delete-later-full2 estimated size is 624B
full send of zroot/var/mail@clean estimated size is 43.1K
send from @clean to zroot/var/mail@2delete-later-full2 estimated size is 624B
full send of zroot/var/crash@clean estimated size is 43.6K
send from @clean to zroot/var/crash@2delete-later-full2 estimated size is 624B
total estimated size is 3.47G
TIME        SENT   SNAPSHOT zroot@clean
TIME        SENT   SNAPSHOT zroot@2delete-later-full2
receiving full stream of zroot@clean into zroot_n@clean
TIME        SENT   SNAPSHOT zroot/usr@clean
received 46.0K stream in 0.22 seconds (209K/sec)
receiving incremental stream of zroot@2delete-later-full2 into zroot_n@2delete-later-full2
received 312B stream in 0.07 seconds (4.50K/sec)
receiving full stream of zroot/usr@clean into zroot_n/usr@clean
TIME        SENT   SNAPSHOT zroot/usr@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/usr/ports@clean
TIME        SENT   SNAPSHOT zroot/usr/ports@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/usr/src@clean
received 46.0K stream in 0.17 seconds (275K/sec)
receiving incremental stream of zroot/usr@2delete-later-full2 into zroot_n/usr@2delete-later-full2
received 312B stream in 0.15 seconds (1.97K/sec)
receiving full stream of zroot/usr/ports@clean into zroot_n/usr/ports@clean
received 46.0K stream in 0.12 seconds (380K/sec)
receiving incremental stream of zroot/usr/ports@2delete-later-full2 into zroot_n/usr/ports@2delete-later-full2
received 312B stream in 0.10 seconds (2.99K/sec)
receiving full stream of zroot/usr/src@clean into zroot_n/usr/src@clean
TIME        SENT   SNAPSHOT zroot/usr/src@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/home@clean
TIME        SENT   SNAPSHOT zroot/home@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/home/dvl@clean
received 46.0K stream in 0.09 seconds (532K/sec)
receiving incremental stream of zroot/usr/src@2delete-later-full2 into zroot_n/usr/src@2delete-later-full2
received 312B stream in 0.09 seconds (3.31K/sec)
receiving full stream of zroot/home@clean into zroot_n/home@clean
received 47.5K stream in 0.14 seconds (345K/sec)
receiving incremental stream of zroot/home@2delete-later-full2 into zroot_n/home@2delete-later-full2
received 312B stream in 0.09 seconds (3.47K/sec)
receiving full stream of zroot/home/dvl@clean into zroot_n/home/dvl@clean
TIME        SENT   SNAPSHOT zroot/home/dvl@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/tmp@clean
TIME        SENT   SNAPSHOT zroot/tmp@2delete-later-full2
received 65.4K stream in 0.10 seconds (674K/sec)
receiving incremental stream of zroot/home/dvl@2delete-later-full2 into zroot_n/home/dvl@2delete-later-full2
received 312B stream in 0.08 seconds (3.75K/sec)
receiving full stream of zroot/tmp@clean into zroot_n/tmp@clean
TIME        SENT   SNAPSHOT zroot/ROOT@clean
TIME        SENT   SNAPSHOT zroot/ROOT@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/ROOT/default@clean
received 62.0K stream in 0.27 seconds (232K/sec)
receiving incremental stream of zroot/tmp@2delete-later-full2 into zroot_n/tmp@2delete-later-full2
received 13.6K stream in 0.11 seconds (119K/sec)
receiving full stream of zroot/ROOT@clean into zroot_n/ROOT@clean
received 46.0K stream in 0.18 seconds (258K/sec)
receiving incremental stream of zroot/ROOT@2delete-later-full2 into zroot_n/ROOT@2delete-later-full2
received 312B stream in 0.10 seconds (2.96K/sec)
receiving full stream of zroot/ROOT/default@clean into zroot_n/ROOT/default@clean
21:38:42    495M   zroot/ROOT/default@clean
21:38:43    894M   zroot/ROOT/default@clean
21:38:44    894M   zroot/ROOT/default@clean
21:38:45    894M   zroot/ROOT/default@clean
21:38:46    894M   zroot/ROOT/default@clean
21:38:47    894M   zroot/ROOT/default@clean
21:38:48    894M   zroot/ROOT/default@clean
21:38:49    894M   zroot/ROOT/default@clean
21:38:50    894M   zroot/ROOT/default@clean
21:38:51    894M   zroot/ROOT/default@clean
21:38:52    894M   zroot/ROOT/default@clean
21:38:53    894M   zroot/ROOT/default@clean
21:38:54    894M   zroot/ROOT/default@clean
21:38:55    894M   zroot/ROOT/default@clean
21:38:56    894M   zroot/ROOT/default@clean
21:38:57    894M   zroot/ROOT/default@clean
21:38:58    894M   zroot/ROOT/default@clean
21:38:59    894M   zroot/ROOT/default@clean
21:39:00    894M   zroot/ROOT/default@clean
21:39:01    894M   zroot/ROOT/default@clean
21:39:02    894M   zroot/ROOT/default@clean
21:39:03    894M   zroot/ROOT/default@clean
21:39:04    894M   zroot/ROOT/default@clean
21:39:05    894M   zroot/ROOT/default@clean
21:39:06    894M   zroot/ROOT/default@clean
21:39:07    921M   zroot/ROOT/default@clean
21:39:08   1.60G   zroot/ROOT/default@clean
21:39:09   1.60G   zroot/ROOT/default@clean
21:39:10   1.67G   zroot/ROOT/default@clean
21:39:11   1.67G   zroot/ROOT/default@clean
21:39:12   1.67G   zroot/ROOT/default@clean
21:39:13   1.67G   zroot/ROOT/default@clean
21:39:14   1.67G   zroot/ROOT/default@clean
21:39:15   1.67G   zroot/ROOT/default@clean
21:39:16   1.67G   zroot/ROOT/default@clean
21:39:17   1.67G   zroot/ROOT/default@clean
21:39:18   1.67G   zroot/ROOT/default@clean
21:39:19   1.67G   zroot/ROOT/default@clean
21:39:20   1.67G   zroot/ROOT/default@clean
21:39:21   1.67G   zroot/ROOT/default@clean
21:39:22   1.67G   zroot/ROOT/default@clean
21:39:23   1.67G   zroot/ROOT/default@clean
21:39:24   1.67G   zroot/ROOT/default@clean
21:39:25   1.67G   zroot/ROOT/default@clean
21:39:26   1.67G   zroot/ROOT/default@clean
21:39:27   1.67G   zroot/ROOT/default@clean
21:39:28   1.67G   zroot/ROOT/default@clean
21:39:29   1.67G   zroot/ROOT/default@clean
21:39:30   1.67G   zroot/ROOT/default@clean
TIME        SENT   SNAPSHOT zroot/ROOT/default@2025-11-22-21:22:10-0
TIME        SENT   SNAPSHOT zroot/ROOT/default@2delete-later-full2
21:39:31   1.75G   zroot/ROOT/default@2delete-later-full2
21:39:32   1.75G   zroot/ROOT/default@2delete-later-full2
21:39:33   1.75G   zroot/ROOT/default@2delete-later-full2
21:39:34   1.75G   zroot/ROOT/default@2delete-later-full2
21:39:35   1.75G   zroot/ROOT/default@2delete-later-full2
21:39:36   1.75G   zroot/ROOT/default@2delete-later-full2
received 1.75G stream in 55.82 seconds (32.1M/sec)
receiving incremental stream of zroot/ROOT/default@2025-11-22-21:22:10-0 into zroot_n/ROOT/default@2025-11-22-21:22:10-0
received 312B stream in 0.07 seconds (4.16K/sec)
receiving incremental stream of zroot/ROOT/default@2delete-later-full2 into zroot_n/ROOT/default@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/ROOT/clean@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/var@clean
TIME        SENT   SNAPSHOT zroot/var@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/var/log@clean
received 102K stream in 0.24 seconds (428K/sec)
found clone origin zroot_n/ROOT/default@2025-11-22-21:22:10-0
receiving incremental stream of zroot/ROOT/clean@2delete-later-full2 into zroot_n/ROOT/clean@2delete-later-full2
received 312B stream in 0.22 seconds (1.40K/sec)
receiving full stream of zroot/var@clean into zroot_n/var@clean
received 46.0K stream in 0.15 seconds (300K/sec)
receiving incremental stream of zroot/var@2delete-later-full2 into zroot_n/var@2delete-later-full2
received 312B stream in 0.08 seconds (3.72K/sec)
receiving full stream of zroot/var/log@clean into zroot_n/var/log@clean
TIME        SENT   SNAPSHOT zroot/var/log@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/var/audit@clean
TIME        SENT   SNAPSHOT zroot/var/audit@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/var/tmp@clean
received 292K stream in 0.16 seconds (1.84M/sec)
receiving incremental stream of zroot/var/log@2delete-later-full2 into zroot_n/var/log@2delete-later-full2
received 11.5K stream in 0.14 seconds (81.5K/sec)
receiving full stream of zroot/var/audit@clean into zroot_n/var/audit@clean
TIME        SENT   SNAPSHOT zroot/var/tmp@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/var/mail@clean
received 49.1K stream in 0.17 seconds (297K/sec)
receiving incremental stream of zroot/var/audit@2delete-later-full2 into zroot_n/var/audit@2delete-later-full2
received 312B stream in 0.09 seconds (3.48K/sec)
receiving full stream of zroot/var/tmp@clean into zroot_n/var/tmp@clean
received 47.5K stream in 0.22 seconds (219K/sec)
receiving incremental stream of zroot/var/tmp@2delete-later-full2 into zroot_n/var/tmp@2delete-later-full2
received 312B stream in 0.10 seconds (3.20K/sec)
receiving full stream of zroot/var/mail@clean into zroot_n/var/mail@clean
TIME        SENT   SNAPSHOT zroot/var/mail@2delete-later-full2
TIME        SENT   SNAPSHOT zroot/var/crash@clean
TIME        SENT   SNAPSHOT zroot/var/crash@2delete-later-full2
received 47.0K stream in 0.20 seconds (241K/sec)
receiving incremental stream of zroot/var/mail@2delete-later-full2 into zroot_n/var/mail@2delete-later-full2
received 312B stream in 0.13 seconds (2.42K/sec)
receiving full stream of zroot/var/crash@clean into zroot_n/var/crash@clean
received 47.5K stream in 0.14 seconds (341K/sec)
receiving incremental stream of zroot/var/crash@2delete-later-full2 into zroot_n/var/crash@2delete-later-full2
received 312B stream in 0.08 seconds (4.04K/sec)
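
If the source system keeps running while a long initial copy is in flight, a follow-up incremental send can carry over anything written after the snapshot was taken. A minimal sketch of that idea, using a hypothetical follow-up snapshot name; the pipeline is printed here as a dry run, not executed:

```shell
#!/bin/sh
# Hypothetical follow-up: snapshot again, then send only the delta between
# the snapshot already replicated and the new one (-I includes any
# intermediate snapshots; -R keeps the send recursive).
# Dry run: commands are printed, not executed.
prev="2delete-later-full2"     # snapshot replicated above
next="2delete-later-full3"     # hypothetical follow-up snapshot name
snap_cmd="zfs snapshot -r zroot@$next"
send_cmd="zfs send -RvI zroot@$prev zroot@$next | zfs receive -uFv zroot_n"
printf '%s\n%s\n' "$snap_cmd" "$send_cmd"
```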

Note how similar the allocated amounts are:

root@r730-04:/home/dvl # zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot     220G   926M   219G        -         -     0%     0%  1.00x    ONLINE  -
zroot_n  21.5G   925M  20.6G        -         -     0%     4%  1.00x    ONLINE  /altroot

Notice that the compression property was also replicated, overriding the zstd I specified at zpool creation:

root@r730-04:/home/dvl # zfs get compression zroot_n
NAME     PROPERTY     VALUE           SOURCE
zroot_n  compression  lz4             received
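
If zstd is still wanted on the new pool, my understanding is that a locally set property takes precedence over a received one, so something like the following should restore it; the SOURCE column would then read local instead of received. Printed here as a dry run, not executed:

```shell
#!/bin/sh
# Sketch: `zfs set` creates a local property value that overrides the
# received one; the follow-up `zfs get` would confirm the new SOURCE.
# Dry run: commands are printed, not executed.
for cmd in \
    "zfs set compression=zstd zroot_n" \
    "zfs get -o name,value,source compression zroot_n"
do
    echo "$cmd"
done
```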

Mountpoint and bootfs

root@r730-04:/home/dvl # zfs set mountpoint=/ zroot_n/ROOT/default
root@r730-04:/home/dvl # zpool set bootfs=zroot_n/ROOT/default zroot_n
root@r730-04:/home/dvl # 

Bootloader for UEFI

root@r730-04:/home/dvl # zfs mount -a
root@r730-04:/home/dvl # mkdir -p /mnt/boot/efi

root@r730-04:/home/dvl # mount -t msdosfs /dev/gpt/efi0 /mnt/boot/efi
root@r730-04:/home/dvl # mkdir -p /mnt/boot/efi/EFI/BOOT
root@r730-04:/home/dvl # cp /boot/loader.efi /mnt/boot/efi/EFI/BOOT/BOOTX64.EFI
root@r730-04:/home/dvl # umount /mnt/boot/efi

root@r730-04:/home/dvl # mount -t msdosfs /dev/gpt/efi1 /mnt/boot/efi
root@r730-04:/home/dvl # mkdir -p /mnt/boot/efi/EFI/BOOT
root@r730-04:/home/dvl # cp /boot/loader.efi /mnt/boot/efi/EFI/BOOT/BOOTX64.EFI
root@r730-04:/home/dvl # umount /mnt/boot/efi
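
The two identical mount/copy/umount sequences above collapse naturally into a loop over the labelled ESPs. A sketch, printed as a dry run rather than executed:

```shell
#!/bin/sh
# Sketch: populate both EFI system partitions in one loop, keyed on the
# GPT labels rather than device names.
# Dry run: commands are printed, not executed.
cmds=""
for esp in efi0 efi1; do
    cmds="$cmds
mount -t msdosfs /dev/gpt/$esp /mnt/boot/efi
mkdir -p /mnt/boot/efi/EFI/BOOT
cp /boot/loader.efi /mnt/boot/efi/EFI/BOOT/BOOTX64.EFI
umount /mnt/boot/efi"
done
printf '%s\n' "$cmds"
```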

Rename the pool

It was here that I booted into mfsbsd. I deviated slightly from the instructions by adding -f to the first import.

[18:12 pro04 dvl ~] % ssh root@10.55.0.215
(root@10.55.0.215) Password for root@mfsbsd:
root@mfsbsd:~ # zpool import zroot zroot_old
cannot import 'zroot': pool was previously in use from another system.
Last accessed by r730-04.int.unixathome.org (hostid=0) at Sat Nov 22 22:25:53 2025
The pool can be imported, use 'zpool import -f' to import the pool.

root@mfsbsd:~ # zpool list
no pools available
root@mfsbsd:~ # zpool import -f zroot zroot_old
root@mfsbsd:~ # zpool export zroot_old
root@mfsbsd:~ # 
root@mfsbsd:~ # zpool import -f zroot_n zroot
root@mfsbsd:~ # zpool export zroot

root@mfsbsd:~ # shutdown -r now
Shutdown NOW!
shutdown: [pid 3793]
root@mfsbsd:~ #                                                                                
*** FINAL System shutdown message from root@mfsbsd ***                       

System going down IMMEDIATELY                                                  

                                                                               

System shutdown time has arrived
Connection to 10.55.0.215 closed by remote host.
Connection to 10.55.0.215 closed.
[18:18 pro04 dvl ~] % 

I ssh'd in:

dvl@r730-04:~ $ zpool status
  pool: zroot_old
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	zroot_old   ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    da0p3   ONLINE       0     0     0
	    da1p3   ONLINE       0     0     0

errors: No known data errors
dvl@r730-04:~ $ 

Oh, it’s booting off the old drives. Let’s disable that boot and try again. At least that proves the process is not destructive.

That took me a few tries… but we have success:

dvl@r730-04:~ $ zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  21.5G   927M  20.6G        -         -     0%     4%  1.00x    ONLINE  -
dvl@r730-04:~ $ 

Woot!

What’s next?

I’ll run this test again, including serial numbers in the labels, and I hope to document the procedure for switching the R730 between BIOS and UEFI booting, and vice-versa.
