It’s time.
Time to create the 8x 4T raidz2 zpool.
These are the devices: disks="nda0 nda1 nda2 nda3 nda5 nda8 da12 da13"
In this post:
- FreeBSD 14.3
Check for existing partitions
Using the list above:
[18:15 r730-01 dvl ~] % gpart show | grep -E "nda0|nda1|nda2|nda3|nda5|nda8|da12|da13"
=>        40  7814037088  nda5  GPT  (3.6T)
=>        34  7814037101  nda8  GPT  (3.6T)
Note that I could not simply list all the devices on one command line, because gpart show stops at the first error:
[18:05 r730-01 dvl ~] % gpart show nda0 nda1 nda2 nda3 nda5 nda8 da12 da13
gpart: No such geom: nda0.
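A hypothetical workaround (my sketch, not from the original session) is to loop over the disks so one missing partition table does not stop the rest:

```shell
# Run gpart show once per device; report disks with no partition table
# instead of aborting at the first error.
for d in nda0 nda1 nda2 nda3 nda5 nda8 da12 da13; do
    gpart show "$d" 2>/dev/null || echo "$d: no partition table"
done
```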
Looking through the output of gpart show -l, I find:
=> 40 7814037088 nda5 GPT (3.6T)
40 7814036000 1 Samsung_990_S7U8NJ0Y716854P (3.6T)
7814036040 1088 - free - (544K)
..
=> 34 7814037101 nda8 GPT (3.6T)
34 6 - free - (3.0K)
40 7814036000 1 Samsung_990_S7U8NJ0Y716801F (3.6T)
7814036040 1095 - free - (548K)
Remember that partition size (7814036000) as it will be referenced later.
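As an aside, here is a quick sanity check of that number (my arithmetic; the 4 KiB-alignment framing is my assumption, not stated in the original):

```shell
usable=7814037088   # sectors in the GPT usable region, per gpart show
size=7814036000     # the partition size reused later in this post
echo "4K aligned: $(( size % 8 == 0 ))"                  # 1 = multiple of eight 512-byte sectors
echo "free at end: $(( (usable - size) * 512 / 1024 ))K" # matches gpart's 544K of trailing free space
# 4K aligned: 1
# free at end: 544K
```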
Looking at the output of zpool import, I can confirm the above two devices came from the zpool formerly known as data02:
[18:18 r730-01 dvl ~] % sudo zpool import
pool: data02_old
id: 14532602998618854058
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
data02_old ONLINE
mirror-0 ONLINE
gpt/Samsung_990_S7U8NJ0Y716854P ONLINE
gpt/Samsung_990_S7U8NJ0Y716801F ONLINE
pool: data04_old
id: 14613959245391720618
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
data04_old ONLINE
mirror-0 ONLINE
da12 ONLINE
da13 ONLINE
You should note that data04_old uses whole disks, no partitions. That’s not my usual approach, as I will soon demonstrate.
Clearing old partitions
This is me, clearing the partitions:
[18:56 r730-01 dvl ~] % sudo gpart destroy -F nda5
nda5 destroyed
[18:57 r730-01 dvl ~] % sudo gpart destroy -F nda8
nda8 destroyed
That takes care of data02_old.
You will notice the timestamp in the command prompt is not in sequence. I found I had to redo some of this work.
Clearing old labels
I use these commands to clear out the existing labels. This is a precaution, just because I can.
[18:26 r730-01 dvl ~] % sudo zpool labelclear da12
use '-f' to override the following error:
/dev/da12 is a member of exported pool "data04_old"
[18:26 r730-01 dvl ~] % sudo zpool labelclear -f da12
[18:26 r730-01 dvl ~] % sudo zpool labelclear -f da13
That cleared out data04_old – it’s gone.
Now there’s nothing to import:
[18:29 r730-01 dvl ~] % sudo zpool import
no pools available to import
All the diskinfo
Scripting is good, it saves us time:
[18:32 r730-01 dvl ~] % echo "nda0 nda1 nda2 nda3 nda5 nda8 da12 da13" | xargs -n 1 -I % sudo diskinfo -v /dev/%
/dev/nda0
	512             	# sectorsize
	4000787030016   	# mediasize in bytes (3.6T)
	7814037168      	# mediasize in sectors
	0               	# stripesize
	0               	# stripeoffset
	Samsung SSD 990 PRO 4TB	# Disk descr.
	S7KGNU0Y722875X 	# Disk ident.
	nvme0           	# Attachment
	Yes             	# TRIM/UNMAP support
	0               	# Rotation rate in RPM
/dev/nda1
	512             	# sectorsize
	4000787030016   	# mediasize in bytes (3.6T)
	7814037168      	# mediasize in sectors
	0               	# stripesize
	0               	# stripeoffset
	Samsung SSD 990 PRO 4TB	# Disk descr.
	S7KGNU0Y915666E 	# Disk ident.
	nvme1           	# Attachment
	Yes             	# TRIM/UNMAP support
	0               	# Rotation rate in RPM
/dev/nda2
	512             	# sectorsize
	4000787030016   	# mediasize in bytes (3.6T)
	7814037168      	# mediasize in sectors
	0               	# stripesize
	0               	# stripeoffset
	Samsung SSD 990 PRO 4TB	# Disk descr.
	S7KGNU0Y912937J 	# Disk ident.
	nvme2           	# Attachment
	Yes             	# TRIM/UNMAP support
	0               	# Rotation rate in RPM
/dev/nda3
	512             	# sectorsize
	4000787030016   	# mediasize in bytes (3.6T)
	7814037168      	# mediasize in sectors
	0               	# stripesize
	0               	# stripeoffset
	Samsung SSD 990 PRO 4TB	# Disk descr.
	S7KGNU0Y912955D 	# Disk ident.
	nvme3           	# Attachment
	Yes             	# TRIM/UNMAP support
	0               	# Rotation rate in RPM
/dev/nda5
	512             	# sectorsize
	4000787030016   	# mediasize in bytes (3.6T)
	7814037168      	# mediasize in sectors
	0               	# stripesize
	0               	# stripeoffset
	Samsung SSD 990 EVO Plus 4TB	# Disk descr.
	S7U8NJ0Y716854P 	# Disk ident.
	nvme5           	# Attachment
	Yes             	# TRIM/UNMAP support
	0               	# Rotation rate in RPM
/dev/nda8
	512             	# sectorsize
	4000787030016   	# mediasize in bytes (3.6T)
	7814037168      	# mediasize in sectors
	0               	# stripesize
	0               	# stripeoffset
	Samsung SSD 990 EVO Plus 4TB	# Disk descr.
	S7U8NJ0Y716801F 	# Disk ident.
	nvme8           	# Attachment
	Yes             	# TRIM/UNMAP support
	0               	# Rotation rate in RPM
/dev/da12
	512             	# sectorsize
	4000787030016   	# mediasize in bytes (3.6T)
	7814037168      	# mediasize in sectors
	4096            	# stripesize
	0               	# stripeoffset
	486401          	# Cylinders according to firmware.
	255             	# Heads according to firmware.
	63              	# Sectors according to firmware.
	ATA Samsung SSD 870	# Disk descr.
	S757NS0Y700758M 	# Disk ident.
	mrsas0          	# Attachment
	Yes             	# TRIM/UNMAP support
	0               	# Rotation rate in RPM
	Not_Zoned       	# Zone Mode
/dev/da13
	512             	# sectorsize
	4000787030016   	# mediasize in bytes (3.6T)
	7814037168      	# mediasize in sectors
	4096            	# stripesize
	0               	# stripeoffset
	486401          	# Cylinders according to firmware.
	255             	# Heads according to firmware.
	63              	# Sectors according to firmware.
	ATA Samsung SSD 870	# Disk descr.
	S757NS0Y700760R 	# Disk ident.
	mrsas0          	# Attachment
	Yes             	# TRIM/UNMAP support
	0               	# Rotation rate in RPM
	Not_Zoned       	# Zone Mode
[18:32 r730-01 dvl ~] %
Creating new partitioning schemes
This creates the partitioning schemes, into which we will then add the partitions:
[18:40 r730-01 dvl ~] % echo "nda0 nda1 nda2 nda3 nda5 nda8 da12 da13" | xargs -n 1 -I % sudo gpart create -s gpt %
nda0 created
nda1 created
nda2 created
nda3 created
nda5 created
nda8 created
da12 created
da13 created
Creating the freebsd-zfs partitions
With the diskinfo output (shown above) in one terminal, it was easy to compose these commands, using each Disk ident. as the partition label.
[18:56 r730-01 dvl ~] % sudo gpart add -a 4K -i 1 -t freebsd-zfs -s 7814036000 -l S7KGNU0Y722875X nda0
nda0p1 added
[19:01 r730-01 dvl ~] % sudo gpart add -a 4K -i 1 -t freebsd-zfs -s 7814036000 -l S7KGNU0Y915666E nda1
nda1p1 added
[19:02 r730-01 dvl ~] % sudo gpart add -a 4K -i 1 -t freebsd-zfs -s 7814036000 -l S7KGNU0Y912937J nda2
nda2p1 added
[19:02 r730-01 dvl ~] % sudo gpart add -a 4K -i 1 -t freebsd-zfs -s 7814036000 -l S7KGNU0Y912955D nda3
nda3p1 added
[19:02 r730-01 dvl ~] % sudo gpart add -a 4K -i 1 -t freebsd-zfs -s 7814036000 -l S7U8NJ0Y716854P nda5
nda5p1 added
[19:02 r730-01 dvl ~] % sudo gpart add -a 4K -i 1 -t freebsd-zfs -s 7814036000 -l S7U8NJ0Y716801F nda8
nda8p1 added
[19:03 r730-01 dvl ~] % sudo gpart add -a 4K -i 1 -t freebsd-zfs -s 7814036000 -l S757NS0Y700758M da12
da12p1 added
[19:03 r730-01 dvl ~] % sudo gpart add -a 4K -i 1 -t freebsd-zfs -s 7814036000 -l S757NS0Y700760R da13
da13p1 added
[19:03 r730-01 dvl ~] %
Checking the partition sizes
Let’s check the partitions using the same command which failed earlier:
[19:01 r730-01 dvl ~] % gpart show nda0 nda1 nda2 nda3 nda5 nda8 da12 da13
=> 40 7814037088 nda0 GPT (3.6T)
40 7814036000 1 freebsd-zfs (3.6T)
7814036040 1088 - free - (544K)
=> 40 7814037088 nda1 GPT (3.6T)
40 7814036000 1 freebsd-zfs (3.6T)
7814036040 1088 - free - (544K)
=> 40 7814037088 nda2 GPT (3.6T)
40 7814036000 1 freebsd-zfs (3.6T)
7814036040 1088 - free - (544K)
=> 40 7814037088 nda3 GPT (3.6T)
40 7814036000 1 freebsd-zfs (3.6T)
7814036040 1088 - free - (544K)
=> 40 7814037088 nda5 GPT (3.6T)
40 7814036000 1 freebsd-zfs (3.6T)
7814036040 1088 - free - (544K)
=> 40 7814037088 nda8 GPT (3.6T)
40 7814036000 1 freebsd-zfs (3.6T)
7814036040 1088 - free - (544K)
=> 40 7814037088 da12 GPT (3.6T)
40 7814036000 1 freebsd-zfs (3.6T)
7814036040 1088 - free - (544K)
=> 40 7814037088 da13 GPT (3.6T)
40 7814036000 1 freebsd-zfs (3.6T)
7814036040 1088 - free - (544K)
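One more check worth running (my sketch, not from the original session): confirm every disk received an identical-size freebsd-zfs partition. A single line of output means all eight agree:

```shell
# Print the distinct sizes (field 2 of gpart show) of all freebsd-zfs partitions.
for d in nda0 nda1 nda2 nda3 nda5 nda8 da12 da13; do
    gpart show "$d"
done | awk '$4 == "freebsd-zfs" { print $2 }' | sort -u
```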
Creating the zpool
Here we go, creating the big zpool. I'm basing this on the one done in prod, when moving to a SATADOM-based zpool (zroot) using zfs snapshot and send | recv (more or less).
zpool create \
  -O mountpoint=none \
  -O atime=off \
  -O compression=zstd \
  data04 raidz2 \
  /dev/gpt/S7KGNU0Y722875X /dev/gpt/S7KGNU0Y915666E /dev/gpt/S7KGNU0Y912937J /dev/gpt/S7KGNU0Y912955D \
  /dev/gpt/S7U8NJ0Y716854P /dev/gpt/S7U8NJ0Y716801F /dev/gpt/S757NS0Y700758M /dev/gpt/S757NS0Y700760R
To avoid copy/paste, I ran this command to get the list of devices:
[19:09 r730-01 dvl ~] % gpart show -l nda0 nda1 nda2 nda3 nda5 nda8 da12 da13 | grep 7814036000 | cut -f 5 -w | xargs -n 1 -I % echo /dev/gpt/% | xargs
/dev/gpt/S7KGNU0Y722875X /dev/gpt/S7KGNU0Y915666E /dev/gpt/S7KGNU0Y912937J /dev/gpt/S7KGNU0Y912955D /dev/gpt/S7U8NJ0Y716854P /dev/gpt/S7U8NJ0Y716801F /dev/gpt/S757NS0Y700758M /dev/gpt/S757NS0Y700760R
It exploits the fact that each partition has size 7814036000.
Creation:
[19:09 r730-01 dvl ~] % sudo zpool create \
  -O mountpoint=none \
  -O atime=off \
  -O compression=zstd \
  data04 raidz2 \
  /dev/gpt/S7KGNU0Y722875X /dev/gpt/S7KGNU0Y915666E /dev/gpt/S7KGNU0Y912937J /dev/gpt/S7KGNU0Y912955D \
  /dev/gpt/S7U8NJ0Y716854P /dev/gpt/S7U8NJ0Y716801F /dev/gpt/S757NS0Y700758M /dev/gpt/S757NS0Y700760R

[19:12 r730-01 dvl ~] % zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data01  5.81T  5.10T   731G        -         -     3%    87%  1.00x    ONLINE  -
data02   928G   648G   280G        -         -     0%    69%  1.00x    ONLINE  -
data03  7.25T  1.29T  5.96T        -         -    33%    17%  1.00x    ONLINE  -
data04  29.1T  1.34M  29.1T        -         -     0%     0%  1.00x    ONLINE  -
zroot    107G  30.2G  76.8G        -         -    15%    28%  1.00x    ONLINE  -

[19:12 r730-01 dvl ~] % zpool status data04
  pool: data04
 state: ONLINE
config:

	NAME                     STATE     READ WRITE CKSUM
	data04                   ONLINE       0     0     0
	  raidz2-0               ONLINE       0     0     0
	    gpt/S7KGNU0Y722875X  ONLINE       0     0     0
	    gpt/S7KGNU0Y915666E  ONLINE       0     0     0
	    gpt/S7KGNU0Y912937J  ONLINE       0     0     0
	    gpt/S7KGNU0Y912955D  ONLINE       0     0     0
	    gpt/S7U8NJ0Y716854P  ONLINE       0     0     0
	    gpt/S7U8NJ0Y716801F  ONLINE       0     0     0
	    gpt/S757NS0Y700758M  ONLINE       0     0     0
	    gpt/S757NS0Y700760R  ONLINE       0     0     0

errors: No known data errors
I figured 8 x 4T in raidz2 would give me a 24T pool, yet zpool list reports 29.1T. That is not an error: for raidz vdevs, zpool list shows raw capacity, parity included (8 x 3.64 TiB is about 29.1 TiB). Usable space, which zfs list reports, is roughly 6/8 of that.