In recent posts:
- Doing a bit of stress work on a new HDD
- x8dtu: adding in the smaller drive
- Swapping zpools – moving from using main_tank to using data
If you have a look over those posts, you’ll see why you never want to downsize a zpool.
In this post:
- FreeBSD 14.2
- removing the ZFS labels from a drive that was part of a zpool which is no longer in use
- destroying the partitions
- creating a new partition
- adding that drive to a recently created single-drive zpool to create a mirror
Identify the drive
The drive I want is ada2, as seen in zpool status output and log entries from the posts mentioned above.
[18:50 x8dtu dvl ~] % gpart show ada2
=>          6  1220942635  ada2  GPT  (4.5T)
            6  1220280320     1  freebsd-zfs  (4.5T)
   1220280326      662315        - free -  (2.5G)

[18:50 x8dtu dvl ~] % 
[19:03 x8dtu dvl ~] % sudo diskinfo -v /dev/ada2
/dev/ada2
        4096                 # sectorsize
        5000981078016        # mediasize in bytes (4.5T)
        1220942646           # mediasize in sectors
        0                    # stripesize
        0                    # stripeoffset
        1211252              # Cylinders according to firmware.
        16                   # Heads according to firmware.
        63                   # Sectors according to firmware.
        TOSHIBA MG04ACA500A  # Disk descr.
        44E1K00HFK7A         # Disk ident.
        ahcich2              # Attachment
        id1,enc@n3061686369656d30/type@0/slot@3/elmdesc@Slot_02  # Physical path
        No                   # TRIM/UNMAP support
        7200                 # Rotation rate in RPM
        Not_Zoned            # Zone Mode

[19:03 x8dtu dvl ~] %
Yes, that’s the one, the 5TB drive.
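If you're ever unsure which adaN is which physical drive, a quick cross-check (a sketch, not from the original session) is to compare the model and serial number the controller reports:

% camcontrol devlist
% geom disk list ada2 | grep -E 'descr|ident'

The descr and ident lines should match the Disk descr. and Disk ident. values in the diskinfo output above.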
zpool labelclear
Why am I running labelclear? To remove any evidence that this drive was part of a zpool. Sometimes things get messy when old labels are still around.
[18:57 x8dtu dvl ~] % sudo zpool labelclear ada2p1
use '-f' to override the following error:
/dev/ada2p1 is a member of exported pool "main_tank"
[18:57 x8dtu dvl ~] % sudo zpool labelclear -f ada2p1
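If you want to see what labelclear is about to wipe, or confirm afterwards that nothing remains, zdb can dump the ZFS labels on a partition. A sketch, not part of the original session:

% sudo zdb -l /dev/ada2p1

Before the labelclear, that shows the old main_tank configuration; afterwards, it should report that it cannot unpack any of the four labels.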
Destroy the old partitions
Here we go:
[18:57 x8dtu dvl ~] % sudo gpart destroy ada2
gpart: Device busy
[18:59 x8dtu dvl ~] % sudo gpart destroy -F ada2
ada2 destroyed
I’m sure I could have just deleted the existing partition and created my new one in its place. I prefer to do it this way.
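For completeness, the delete-just-the-partition route would have looked something like this (an untested sketch; the destroy above is what I actually ran):

% sudo gpart delete -i 1 ada2

That removes partition 1 but keeps the existing GPT table, so the gpart add in the next section would work without recreating the partition scheme.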
Creating a new partition
First, I had to do the math; I’ll show that below.
[19:05 x8dtu dvl ~] % sudo gpart add -i 1 -t freebsd-zfs -a 4k -l SLOT_2_TO_44E1K00HFK7A -s 976754636 ada2
ada2p1 added
[19:10 x8dtu dvl ~] % gpart show ada2 ada3
=>          6  1220942635  ada2  GPT  (4.5T)
            6   976754636     1  freebsd-zfs  (3.6T)
    976754642   244187999        - free -  (932G)

=>         40  7814037088  ada3  GPT  (3.6T)
           40  7814037088     1  freebsd-zfs  (3.6T)

[19:10 x8dtu dvl ~] % gpart show -l ada2 ada3
=>          6  1220942635  ada2  GPT  (4.5T)
            6   976754636     1  SLOT_2_TO_44E1K00HFK7A  (3.6T)
    976754642   244187999        - free -  (932G)

=>         40  7814037088  ada3  GPT  (3.6T)
           40  7814037088     1  SLOT_3_TO_382AK6KIFJKA  (3.6T)

[19:10 x8dtu dvl ~] %
I know it’s slot 2, from using sesutil in a previous post, and because the diskinfo output says so (elmdesc@Slot_02).
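If you want to double-check the slot yourself, sesutil can map enclosure slots to device names and blink a slot LED. A sketch, assuming an SES-capable enclosure like this one:

% sudo sesutil map
% sudo sesutil locate ada2 on

The map output lists each enclosure element along with its device name; locate blinks the slot LED (turn it off again with 'off').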
To know the size to specify:
ada3 has 7814037088 sectors of 512 bytes = 4000786989056 bytes. ada2 uses 4096-byte sectors, so to give ada2 a partition of the same size we need 4000786989056 / 4096 = 976754636 sectors: the magic number used when creating the partition.
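Or, letting the shell do the arithmetic (a trivial sketch using bc(1)):

% echo "7814037088 * 512 / 4096" | bc
976754636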
Adding the drive to the zpool
This part always gives me concern. I fear creating a stripe instead of a mirror.
[19:20 x8dtu dvl ~] % sudo zpool attach data gpt/SLOT_3_TO_382AK6KIFJKA gpt/SLOT_2_TO_44E1K00HFK7A
[19:21 x8dtu dvl ~] %
gpt/SLOT_3_TO_382AK6KIFJKA is what you see in the existing zpool status output:
[19:18 x8dtu dvl ~] % zpool status data
  pool: data
 state: ONLINE
  scan: scrub in progress since Sun Jul 13 17:06:50 2025
        1.22T / 1.22T scanned, 1.07T / 1.22T issued at 146M/s
        0B repaired, 87.76% done, 00:17:55 to go
config:

        NAME                          STATE     READ WRITE CKSUM
        data                          ONLINE       0     0     0
          gpt/SLOT_3_TO_382AK6KIFJKA  ONLINE       0     0     0

errors: No known data errors
gpt/SLOT_2_TO_44E1K00HFK7A is the label I created in the previous section.
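For anyone who shares that fear, this is the general shape of the two commands (a sketch of the syntax, not taken from this session): zpool attach requires naming a device already in the pool and always creates or extends a mirror, while zpool add creates a new top-level vdev, i.e. a stripe.

# what I want: mirror an existing device
zpool attach <pool> <existing-device> <new-device>

# what I fear: a new top-level vdev (a stripe)
zpool add <pool> <new-device>

zpool add also accepts -n to print the resulting layout without changing the pool, which is a handy safety net.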
And here we have a resilver:
[19:21 x8dtu dvl ~] % zpool status data
  pool: data
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Jul 13 19:21:05 2025
        1.22T / 1.22T scanned, 13.2G / 1.22T issued at 138M/s
        13.3G resilvered, 1.05% done, 02:33:26 to go
config:

        NAME                            STATE     READ WRITE CKSUM
        data                            ONLINE       0     0     0
          mirror-0                      ONLINE       0     0     0
            gpt/SLOT_3_TO_382AK6KIFJKA  ONLINE       0     0     0
            gpt/SLOT_2_TO_44E1K00HFK7A  ONLINE       0     0 1.36K  (resilvering)

errors: No known data errors
[19:24 x8dtu dvl ~] %
These are the log messages it generated:
Jul 13 19:21:00 x8dtu kernel: vdev_geom_open_by_path:799[1]: Found provider by name /dev/gpt/SLOT_2_TO_44E1K00HFK7A.
Jul 13 19:21:00 x8dtu kernel: vdev_geom_attach:219[1]: Attaching to gpt/SLOT_2_TO_44E1K00HFK7A.
Jul 13 19:21:00 x8dtu kernel: vdev_geom_attach:288[1]: Created consumer for gpt/SLOT_2_TO_44E1K00HFK7A.
I’ll add a concluding section later.
zpool resilver done
Well, this isn’t ideal.
[0:11 x8dtu dvl ~] % zpool status data
  pool: data
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 1.23T in 03:08:32 with 0 errors on Sun Jul 13 22:29:37 2025
config:

        NAME                            STATE     READ WRITE CKSUM
        data                            ONLINE       0     0     0
          mirror-0                      ONLINE       0     0     0
            gpt/SLOT_3_TO_382AK6KIFJKA  ONLINE       0     0     0
            gpt/SLOT_2_TO_44E1K00HFK7A  ONLINE       0     0 1.36K

errors: No known data errors
The errors are on the drive I just added (slot 2). Let’s try a scrub:
[0:11 x8dtu dvl ~] % sudo zpool scrub data
[0:14 x8dtu dvl ~] % 
[0:14 x8dtu dvl ~] % zpool status data
  pool: data
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub in progress since Mon Jul 14 00:14:12 2025
        1.23T / 1.23T scanned, 447M / 1.23T issued at 224M/s
        0B repaired, 0.03% done, 01:35:43 to go
config:

        NAME                            STATE     READ WRITE CKSUM
        data                            ONLINE       0     0     0
          mirror-0                      ONLINE       0     0     0
            gpt/SLOT_3_TO_382AK6KIFJKA  ONLINE       0     0     0
            gpt/SLOT_2_TO_44E1K00HFK7A  ONLINE       0     0 1.36K

errors: No known data errors
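While the scrub runs, it doesn't hurt to look at the new drive's SMART data, since checksum errors during a resilver can also point at cabling or controller trouble. A sketch, assuming sysutils/smartmontools is installed:

% sudo smartctl -a /dev/ada2

and checking attributes such as Reallocated_Sector_Ct, Current_Pending_Sector, and UDMA_CRC_Error_Count.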
Monday morning: 6:58 AM
Current status:
[10:57 x8dtu dvl ~] % zpool status data
  pool: data
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 02:38:15 with 0 errors on Mon Jul 14 02:52:27 2025
config:

        NAME                            STATE     READ WRITE CKSUM
        data                            ONLINE       0     0     0
          mirror-0                      ONLINE       0     0     0
            gpt/SLOT_3_TO_382AK6KIFJKA  ONLINE       0     0     0
            gpt/SLOT_2_TO_44E1K00HFK7A  ONLINE       0     0 1.36K

errors: No known data errors
All OK on the scrub. Let’s clear, and for fun and games, scrub again.
[10:57 x8dtu dvl ~] % sudo zpool scrub data
[10:57 x8dtu dvl ~] % zpool status data
  pool: data
 state: ONLINE
  scan: scrub in progress since Mon Jul 14 10:57:49 2025
        1.23T / 1.23T scanned, 4.28G / 1.23T issued at 168M/s
        0B repaired, 0.34% done, 02:06:38 to go
config:

        NAME                            STATE     READ WRITE CKSUM
        data                            ONLINE       0     0     0
          mirror-0                      ONLINE       0     0     0
            gpt/SLOT_3_TO_382AK6KIFJKA  ONLINE       0     0     0
            gpt/SLOT_2_TO_44E1K00HFK7A  ONLINE       0     0     0

errors: No known data errors
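The clear step itself isn't captured in the transcript above; it would have been along the lines of (a sketch):

% sudo zpool clear data

which resets the error counters, and is why the CKSUM column for the slot 2 drive is back to 0 in the output above.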
I’ll report back later.