I have 4 new storage devices from which to create 2 new zpools, each a two-device mirror. Let’s go.
In this post:
- FreeBSD 14.3
These are the drives I have. As an Amazon Associate I earn from qualifying purchases.
- The 4TB NVMe cards are Samsung 990 EVO Plus SSD 4TB, PCIe Gen 4×4 | Gen 5×2 M.2
- The 4TB SSDs are SAMSUNG 870 EVO 4TB 2.5 Inch SATA III Internal SSD (MZ-77E4T0B/AM)
The new data02
The new zpool will be replacing this one:
```
[2:19 r730-01 dvl ~] % zpool list data02
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data02  1.73T   722G  1.03T        -         -    47%    40%  1.00x    ONLINE  -
[2:08 r730-01 dvl ~] % zpool status data02
  pool: data02
 state: ONLINE
  scan: scrub repaired 0B in 00:37:38 with 0 errors on Thu Oct 16 04:11:19 2025
config:

        NAME                        STATE     READ WRITE CKSUM
        data02                      ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            gpt/BTHC536302CT400VGN  ONLINE       0     0     0
            da14p1                  ONLINE       0     0     0

errors: No known data errors
[2:19 r730-01 dvl ~] %
```
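Before touching anything, it’s worth double-checking which device nodes are the new drives. A sketch of how I’d verify that on FreeBSD; the device names below are the ones that show up later in this post:

```shell
# NVMe devices (the 990 EVO Plus cards) are listed by nvmecontrol
nvmecontrol devlist

# SATA devices (the 870 EVO SSDs) are listed by camcontrol
camcontrol devlist

# Cross-check capacity and serial numbers before creating anything
geom disk list nda0 nda3 da12 da13
```

Matching the reported sizes and serial numbers against the physical drives is cheap insurance against building a mirror on the wrong disk.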
Based on the test zpool Samsung SSD 990 EVO Plus – 4TB NVMe mirror, I’m going to do this:
```
[2:19 r730-01 dvl ~] % sudo zpool create new-data02 mirror /dev/nda0 /dev/nda3
[2:23 r730-01 dvl ~] % sudo zfs set compression=zstd atime=off new-data02
[2:24 r730-01 dvl ~] % zpool get all new-data02
NAME        PROPERTY                       VALUE                  SOURCE
new-data02  size                           3.62T                  -
new-data02  capacity                       0%                     -
new-data02  altroot                        -                      default
new-data02  health                         ONLINE                 -
new-data02  guid                           15893773323119196952   -
new-data02  version                        -                      default
new-data02  bootfs                         -                      default
new-data02  delegation                     on                     default
new-data02  autoreplace                    off                    default
new-data02  cachefile                      -                      default
new-data02  failmode                       wait                   default
new-data02  listsnapshots                  off                    default
new-data02  autoexpand                     off                    default
new-data02  dedupratio                     1.00x                  -
new-data02  free                           3.62T                  -
new-data02  allocated                      468K                   -
new-data02  readonly                       off                    -
new-data02  ashift                         0                      default
new-data02  comment                        -                      default
new-data02  expandsize                     -                      -
new-data02  freeing                        0                      -
new-data02  fragmentation                  0%                     -
new-data02  leaked                         0                      -
new-data02  multihost                      off                    default
new-data02  checkpoint                     -                      -
new-data02  load_guid                      12666352933249787186   -
new-data02  autotrim                       off                    default
new-data02  compatibility                  off                    default
new-data02  bcloneused                     0                      -
new-data02  bclonesaved                    0                      -
new-data02  bcloneratio                    1.00x                  -
new-data02  feature@async_destroy          enabled                local
new-data02  feature@empty_bpobj            enabled                local
new-data02  feature@lz4_compress           active                 local
new-data02  feature@multi_vdev_crash_dump  enabled                local
new-data02  feature@spacemap_histogram     active                 local
new-data02  feature@enabled_txg            active                 local
new-data02  feature@hole_birth             active                 local
new-data02  feature@extensible_dataset     active                 local
new-data02  feature@embedded_data          active                 local
new-data02  feature@bookmarks              enabled                local
new-data02  feature@filesystem_limits      enabled                local
new-data02  feature@large_blocks           enabled                local
new-data02  feature@large_dnode            enabled                local
new-data02  feature@sha512                 enabled                local
new-data02  feature@skein                  enabled                local
new-data02  feature@edonr                  enabled                local
new-data02  feature@userobj_accounting     active                 local
new-data02  feature@encryption             enabled                local
new-data02  feature@project_quota          active                 local
new-data02  feature@device_removal         enabled                local
new-data02  feature@obsolete_counts        enabled                local
new-data02  feature@zpool_checkpoint       enabled                local
new-data02  feature@spacemap_v2            active                 local
new-data02  feature@allocation_classes    enabled                 local
new-data02  feature@resilver_defer         enabled                local
new-data02  feature@bookmark_v2            enabled                local
new-data02  feature@redaction_bookmarks    enabled                local
new-data02  feature@redacted_datasets      enabled                local
new-data02  feature@bookmark_written       enabled                local
new-data02  feature@log_spacemap           active                 local
new-data02  feature@livelist               enabled                local
new-data02  feature@device_rebuild         enabled                local
new-data02  feature@zstd_compress          active                 local
new-data02  feature@draid                  enabled                local
new-data02  feature@zilsaxattr             enabled                local
new-data02  feature@head_errlog            active                 local
new-data02  feature@blake3                 enabled                local
new-data02  feature@block_cloning          enabled                local
new-data02  feature@vdev_zaps_v2           active                 local
[2:24 r730-01 dvl ~] % zfs get all new-data02
NAME        PROPERTY              VALUE                  SOURCE
new-data02  type                  filesystem             -
new-data02  creation              Sat Oct 18  2:23 2025  -
new-data02  used                  468K                   -
new-data02  available             3.51T                  -
new-data02  referenced            96K                    -
new-data02  compressratio         1.00x                  -
new-data02  mounted               yes                    -
new-data02  quota                 none                   default
new-data02  reservation           none                   default
new-data02  recordsize            128K                   default
new-data02  mountpoint            /new-data02            default
new-data02  sharenfs              off                    default
new-data02  checksum              on                     default
new-data02  compression           zstd                   local
new-data02  atime                 off                    local
new-data02  devices               on                     default
new-data02  exec                  on                     default
new-data02  setuid                on                     default
new-data02  readonly              off                    default
new-data02  jailed                off                    default
new-data02  snapdir               hidden                 default
new-data02  aclmode               discard                default
new-data02  aclinherit            restricted             default
new-data02  createtxg             1                      -
new-data02  canmount              on                     default
new-data02  xattr                 on                     default
new-data02  copies                1                      default
new-data02  version               5                      -
new-data02  utf8only              off                    -
new-data02  normalization         none                   -
new-data02  casesensitivity       sensitive              -
new-data02  vscan                 off                    default
new-data02  nbmand                off                    default
new-data02  sharesmb              off                    default
new-data02  refquota              none                   default
new-data02  refreservation        none                   default
new-data02  guid                  11648436011971568052   -
new-data02  primarycache          all                    default
new-data02  secondarycache        all                    default
new-data02  usedbysnapshots       0B                     -
new-data02  usedbydataset         96K                    -
new-data02  usedbychildren        372K                   -
new-data02  usedbyrefreservation  0B                     -
new-data02  logbias               latency                default
new-data02  objsetid              54                     -
new-data02  dedup                 off                    default
new-data02  mlslabel              none                   default
new-data02  sync                  standard               default
new-data02  dnodesize             legacy                 default
new-data02  refcompressratio      1.00x                  -
new-data02  written               96K                    -
new-data02  logicalused           156K                   -
new-data02  logicalreferenced     42.5K                  -
new-data02  volmode               default                default
new-data02  filesystem_limit      none                   default
new-data02  snapshot_limit        none                   default
new-data02  filesystem_count      none                   default
new-data02  snapshot_count        none                   default
new-data02  snapdev               hidden                 default
new-data02  acltype               nfsv4                  default
new-data02  context               none                   default
new-data02  fscontext             none                   default
new-data02  defcontext            none                   default
new-data02  rootcontext           none                   default
new-data02  relatime              on                     default
new-data02  redundant_metadata    all                    default
new-data02  overlay               on                     default
new-data02  encryption            off                    default
new-data02  keylocation           none                   default
new-data02  keyformat             none                   default
new-data02  pbkdf2iters           0                      default
new-data02  special_small_blocks  0                      default
new-data02  prefetch              all                    default
```
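One thing worth noting in that output: ashift is 0, which means ZFS auto-detects the sector size from the device rather than having it pinned. If you would rather force 4 KB alignment explicitly (a common choice for modern SSDs), it has to be done at pool creation time; a sketch, not what I ran:

```shell
# ashift=12 means 2^12 = 4096-byte sectors; it cannot be changed
# after the vdev is created
zpool create -o ashift=12 new-data02 mirror /dev/nda0 /dev/nda3

# Confirm what the pool actually got
zpool get ashift new-data02
```

With auto-detection, ZFS picks ashift from what the drive reports, so pinning it only matters if the drive under-reports its physical sector size.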
An entirely new zpool, with a fantastically original name: data04
Based on that same article as before, I’m going to do this with the 4TB SSDs:
```
[2:27 r730-01 dvl ~] % sudo zpool create data04 mirror /dev/da12 /dev/da13
[2:28 r730-01 dvl ~] % sudo zfs set compression=zstd atime=off data04
[2:28 r730-01 dvl ~] %
```
I’m not running zpool get all and zfs get all again because I assume the output will be the same as for new-data02.
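One difference from the old data02: that pool referenced a disk by GPT label (gpt/BTHC536302CT400VGN), which survives device renumbering across reboots, while these new pools use raw device nodes. If I wanted labels here too, it would look something like this; the label names are invented for the example:

```shell
# Put a GPT partition table on each disk and label the ZFS partition
gpart create -s gpt da12
gpart add -t freebsd-zfs -l data04-disk0 da12
gpart create -s gpt da13
gpart add -t freebsd-zfs -l data04-disk1 da13

# Build the mirror on the labels instead of the device nodes
zpool create data04 mirror gpt/data04-disk0 gpt/data04-disk1
```

The labels then show up as /dev/gpt/data04-disk0 and so on, and zpool status reports those names no matter how the da numbering shuffles.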
Let’s start a send-recv
Let’s start a copy using syncoid:
```
[2:28 r730-01 dvl ~] % sudo syncoid -r --compress=zstd --quiet data02 new-data02
WARNING: Unrecognised compression value zstd, defaulting to lzo
CRITICAL ERROR: Target new-data02 exists but has no snapshots matching with data02!
                Replication to target would require destroying existing target.
                Cowardly refusing to destroy your existing target.
```
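Two things went sideways there: this syncoid doesn’t recognize zstd as a --compress value (its recognized spellings include zstd-fast and zstd-slow, and for a local pool-to-pool copy transport compression buys nothing anyway), and it refuses to replicate onto an existing target that shares no snapshots with the source. Since new-data02 was freshly created and empty, the classic plain zfs send/receive migration is one way through; a sketch with a snapshot name of my own invention, not necessarily what I actually ran:

```shell
# Take a recursive snapshot of everything in the source pool.
# The snapshot name "migrate" is made up for this example.
zfs snapshot -r data02@migrate

# -R sends the whole dataset tree with its snapshots and properties;
# -F lets the stream roll back and overwrite the empty target root.
zfs send -R data02@migrate | zfs receive -F new-data02
```

Either that, or re-run syncoid after clearing out whatever it had already created on the target.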
It’s definitely copying:
```
[2:34 r730-01 dvl ~] % zfs list -r new-data02
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
new-data02                                   51.0G  3.46T    96K  /new-data02
new-data02/freshports                        51.0G  3.46T    88K  /new-data02/freshports
new-data02/freshports/dev-ingress01          51.0G  3.46T    88K  /new-data02/freshports/dev-ingress01
new-data02/freshports/dev-ingress01/dvl-src  51.0G  3.46T  51.0G  /new-data02/freshports/dev-ingress01/dvl-src
[2:35 r730-01 dvl ~] %
```
Tomorrow I want to check whether the snapshots also copied over.
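That check should be quick; something along these lines (the head is just to keep the output manageable):

```shell
# Snapshots received by the replication, if any, will be listed here
zfs list -t snapshot -r new-data02 | head -20
```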