During zfs: setting compression and adding new vdevs, I replaced 2x 1TB drives with 2x 4TB drives.
This afternoon as I was taking laundry down to the basement, I realized: I have two unused drives in the host. Either I remove them or I repurpose them.
I know I have 4x 1TB drives (two SSDs and two NVMe). I could make a 2TB zpool of mirrors out of them.
Where can I use them?
r730-03
r730-03 has this:
[19:56 r730-03 dvl ~] % zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data01  32.7T  23.4T  9.34T        -         -    26%    71%  1.00x    ONLINE  -
zroot    412G  26.6G   385G        -         -    27%     6%  1.00x    ONLINE  -

[19:52 r730-03 dvl ~] % zfs list -r data01/bacula-volumes
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
data01/bacula-volumes                     16.6T  9.21T    96K  /jails/bacula-sd-04/usr/local/bacula/volumes
data01/bacula-volumes/DiffFile             446G   578G   390G  /jails/bacula-sd-04/usr/local/bacula/volumes/DiffFile
data01/bacula-volumes/FullFile            6.78T  1.72T  6.28T  /jails/bacula-sd-04/usr/local/bacula/volumes/FullFile
data01/bacula-volumes/FullFileNoNextPool  8.98T  9.21T  5.68T  /jails/bacula-sd-04/usr/local/bacula/volumes/FullFileNoNextPool
data01/bacula-volumes/IncrFile             369G  2.14T   185G  /jails/bacula-sd-04/usr/local/bacula/volumes/IncrFile
I could move the DiffFile and IncrFile file pools over to that new zpool. That would take roughly 800GB (446G + 369G) off data01.
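That move would be a zfs send/receive job. A rough sketch, assuming the new pool is called data02 (a placeholder name) and that bacula-sd-04 is idle for the final cutover:

# data02 is a placeholder name for the new pool; dataset names are from the listing above
zfs create data02/bacula-volumes
zfs snapshot data01/bacula-volumes/DiffFile@move
# copy the dataset, with its properties, to the new pool (left unmounted for now)
zfs send -p data01/bacula-volumes/DiffFile@move | zfs receive -u data02/bacula-volumes/DiffFile
# once bacula-sd-04 is idle and a final incremental send has run, swap the mountpoints
zfs set mountpoint=none data01/bacula-volumes/DiffFile
zfs set mountpoint=/jails/bacula-sd-04/usr/local/bacula/volumes/DiffFile data02/bacula-volumes/DiffFile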
Do I have room to add in new drives?
Existing drives:
[19:52 r730-03 dvl ~] % sysctl -n kern.disks | xargs -n 1 | sort | xargs
ada0 ada1 cd0 da0 da1 da2 da3 da4 da5 da6 da7
Ignore cd0
ada0 and ada1 are part of zroot.
[19:55 r730-03 dvl ~] % zpool status zroot
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:01:51 with 0 errors on Thu Oct 16 04:13:57 2025
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0

errors: No known data errors
That leaves eight drives (da0..da7). But two of them must be unused, because there are only six drives in this pool:
[19:55 r730-03 dvl ~] % zpool status -P data01
  pool: data01
 state: ONLINE
  scan: scrub repaired 0B in 1 days 01:05:40 with 0 errors on Fri Oct 17 05:17:20 2025
config:

        NAME                        STATE     READ WRITE CKSUM
        data01                      ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            /dev/gpt/SEAG_ZJV4HFPE  ONLINE       0     0     0
            /dev/gpt/SEAG_ZHZ16KEX  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            /dev/gpt/SG_ZHZ03BAT    ONLINE       0     0     0
            /dev/gpt/HGST_8CJW1G4E  ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            /dev/gpt/SG_ZL2NJBT2    ONLINE       0     0     0
            /dev/gpt/HGST_5PGGTH3D  ONLINE       0     0     0
Let’s look at this and find the suspects:
[20:23 r730-03 dvl ~] % gpart show -l
=> 40 937703008 ada0 GPT (447G)
40 1024 1 gptboot1 (512K)
1064 984 - free - (492K)
2048 67108864 2 swap1 (32G)
67110912 870590464 3 zfs1 (415G)
937701376 1672 - free - (836K)
=> 40 937703008 ada1 GPT (447G)
40 1024 1 gptboot0 (512K)
1064 984 - free - (492K)
2048 67108864 2 swap0 (32G)
67110912 870590464 3 zfs0 (415G)
937701376 1672 - free - (836K)
=> 40 23437770672 da4 GPT (11T)
40 23437770600 1 HGST_8CJW1G4E (11T)
23437770640 72 - free - (36K)
=> 40 23437770672 da1 GPT (11T)
40 23437770600 1 HGST_5PGGTH3D (11T)
23437770640 72 - free - (36K)
=> 40 23437770672 da0 GPT (11T)
40 23437770600 1 SEAG_ZJV4HFPE (11T)
23437770640 72 - free - (36K)
=> 34 23437770685 da5 GPT (11T)
34 6 - free - (3.0K)
40 23437770600 1 SG_ZL2NJBT2 (11T)
23437770640 79 - free - (40K)
=> 40 23437770672 da6 GPT (11T)
40 23437770600 1 SLOT_7_ST_ZL2G3LB9 (11T)
23437770640 72 - free - (36K)
=> 40 23437770672 da3 GPT (11T)
40 23437770600 1 SG_ZHZ03BAT (11T)
23437770640 72 - free - (36K)
=> 40 23437770672 da2 GPT (11T)
40 23437770600 1 SEAG_ZHZ16KEX (11T)
23437770640 72 - free - (36K)
[20:23 r730-03 dvl ~] %
Comparing those GPT labels with the ones in the zpool status output, the only partitioned drive not in a pool is da6 (label SLOT_7_ST_ZL2G3LB9). This also tells me da7 (not seen in this output) must not have a gpart partitioning scheme, or it would have been present in that output.
What is da7?
After looking at this, I’m sure da7 is the built-in drive at the top left of the chassis as you look at it.
[20:23 r730-03 dvl ~] % grep da7 /var/run/dmesg.boot
da7: Fixed Direct Access SCSI device
da7: Serial Number 012345678901
da7: 40.000MB/s transfers
da7: 7600MB (15564800 512 byte sectors)
da7: quirks=0x2
I think it’s what Dell calls an “Internal Dual SD Module”. From this Dell link I found:
The Internal Dual SD module (IDSDM) provides you with a redundant SD card solution. You can configure the IDSDM for storage or as the OS boot partition.
So that’s redundant 7600MB (7.6GB) storage. I’m sure I don’t want to use that.
This tells me there are two 3.5″ slots free; this host has 8x 3.5″ drive bays. That will do for the 2x 1TB SSDs. What about the 2x NVMe?
I have spare PCIe adaptors for NVMe cards. I don’t know if I have two free slots. Two of the slots are used by the zroot drives.
This is a case where insufficient notes mean I have to break open the chassis and see what I have. I could buy another one of the two-card NVMe adaptors I’m using in r730-01.
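Before I open the case, sysutils/dmidecode might answer the PCIe slot question from the command line; the SMBIOS slot table usually says which slots are occupied. A sketch (run as root; the output format varies by machine):

# type 9 is the SMBIOS "System Slots" table; needs sysutils/dmidecode installed
dmidecode -t slot | grep -E 'Designation|Current Usage'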
If you want the same hardware, you can use these links. As an Amazon Associate I earn from qualifying purchases. This is a link to Amazon: 10Gtek Dual M.2 NVMe SSD Adapter Card – PCIe 3.0 x8 Slot (M-Key)
I’d prefer not to have both halves of a mirror on the same adaptor.
I could replace the zroot drives: they are 2.5″ SSDs. Buy two of those adaptor cards, put a 1TB NVMe in each, and something smaller for the zroot. I may have older, smaller NVMe drives sitting on the bench.
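Moving zroot onto NVMe would be the usual partition-and-replace dance, one mirror member at a time. A sketch that copies the existing zroot layout, with nda0 as a hypothetical new device; whether this chassis will boot from NVMe on a PCIe adaptor is a question I would have to answer first:

# nda0 is hypothetical; partition sizes mirror the existing zroot layout
gpart create -s gpt nda0
gpart add -t freebsd-boot -s 512k -l gptboot2 nda0
gpart add -t freebsd-swap -s 32g -a 1m -l swap2 nda0
gpart add -t freebsd-zfs -a 1m -l zfs2 nda0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 nda0
# replace one side of the mirror, wait for the resilver, then do the other
zpool replace zroot ada0p3 nda0p3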
The current idea
I think I need to relieve space pressure on data01 on r730-01, perhaps by moving one or both of DiffFile-03 and IncrFile-03 out of that zpool.
[0:14 r730-01 dvl ~] % zfs list -r data01/bacula/volumes
NAME                                USED  AVAIL  REFER  MOUNTPOINT
data01/bacula/volumes              3.45T   280G   205K  /jails/bacula-sd-03/usr/local/bacula/volumes
data01/bacula/volumes/DiffFile-03   549G   280G   549G  /jails/bacula-sd-03/usr/local/bacula/volumes/DiffFile-03
data01/bacula/volumes/FullFile-03  2.43T   280G  2.14T  /jails/bacula-sd-03/usr/local/bacula/volumes/FullFile-03
data01/bacula/volumes/IncrFile-03   492G   280G   492G  /jails/bacula-sd-03/usr/local/bacula/volumes/IncrFile-03

[0:14 r730-01 dvl ~] % zpool list data01
NAME     SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
data01  5.81T  4.85T  981G        -         -    3%  83%  1.00x  ONLINE  -

[0:15 r730-01 dvl ~] % zpool status data01
  pool: data01
 state: ONLINE
  scan: scrub repaired 0B in 00:19:57 with 0 errors on Thu Oct 30 04:48:16 2025
config:

        NAME                  STATE     READ WRITE CKSUM
        data01                ONLINE       0     0     0
          raidz2-0            ONLINE       0     0     0
            gpt/Y7P0A022TEVE  ONLINE       0     0     0
            gpt/Y7P0A02ATEVE  ONLINE       0     0     0
            gpt/Y7P0A02DTEVE  ONLINE       0     0     0
            gpt/Y7P0A02GTEVE  ONLINE       0     0     0
            gpt/Y7P0A02LTEVE  ONLINE       0     0     0
            gpt/Y7P0A02MTEVE  ONLINE       0     0     0
            gpt/Y7P0A02QTEVE  ONLINE       0     0     0
            gpt/Y7P0A033TEVE  ONLINE       0     0     0

errors: No known data errors
There isn’t much else in there:
[0:25 r730-01 dvl ~] % zfs list -r data01
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
data01                                     3.73T   280G   205K  none
data01/bacula                              3.45T   280G   188K  /jails/bacula-sd-03/usr/local/bacula
data01/bacula/volumes                      3.45T   280G   205K  /jails/bacula-sd-03/usr/local/bacula/volumes
data01/bacula/volumes/DiffFile-03           549G   280G   549G  /jails/bacula-sd-03/usr/local/bacula/volumes/DiffFile-03
data01/bacula/volumes/FullFile-03          2.43T   280G  2.14T  /jails/bacula-sd-03/usr/local/bacula/volumes/FullFile-03
data01/bacula/volumes/IncrFile-03           492G   280G   492G  /jails/bacula-sd-03/usr/local/bacula/volumes/IncrFile-03
data01/bacula/working                       264K   280G   264K  /jails/bacula-sd-03/usr/local/bacula/working
data01/jail_within_jail                     956M   280G   205K  none
data01/jail_within_jail/jails               617M   280G   205K  none
data01/jail_within_jail/jails/freshports    617M   280G   617M  none
data01/jail_within_jail/mkjail              339M   280G   339M  none
data01/mkjail                              4.10G   280G  1.36G  /var/db/mkjail
data01/mkjail/14.2-RELEASE                 1.36G   280G  1.36G  /var/db/mkjail/14.2-RELEASE
data01/mkjail/14.3-RELEASE                 1.37G   280G  1.37G  /var/db/mkjail/14.3-RELEASE
data01/reserved                             290G   570G   205K  none
What’s next?
Investigate that host and see how many empty PCIe slots it has and how many drive bays are free.
I issued these commands:
$ sysctl kern.disks
$ zpool status
$ gpart show -l
I copied the output of the first command into a document.
I used the second command to remove drives from that document as I scrolled through. If a drive could not be matched to a zpool status entry directly, I searched for its GPT label (from the gpart show -l output) to map the label back to a device.
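For next time, that comparison could be scripted. A rough sketch that flags any disk whose device name or GPT labels appear nowhere in zpool status:

#!/bin/sh
# rough sketch: report disks that appear in kern.disks but nowhere in any zpool,
# either by device name or by one of their GPT labels
pools=$(zpool status -P)
for disk in $(sysctl -n kern.disks); do
    # collect this disk's GPT labels, if it is partitioned at all
    labels=$(gpart show -l "$disk" 2>/dev/null | awk '$1 != "=>" && $3 ~ /^[0-9]+$/ {print $4}')
    used=no
    for name in "/dev/${disk}p" $labels; do
        echo "$pools" | grep -q "$name" && { used=yes; break; }
    done
    [ "$used" = no ] && echo "$disk appears to be unused"
done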
Based on that manual comparison, these drives are installed and unused:
da15 da14 nda2 nda1
These are those drive details, as found in /var/run/dmesg.boot:
da15 at mrsas0 bus 1 scbus1 target 15 lun 0
da15: <ATA SAMSUNG MZ7KM960 GB32> Fixed Direct Access SPC-4 SCSI device
da15: Serial Number S2NKNCAH501536P
da15: 150.000MB/s transfers
da15: 1831420MB (3750748848 512 byte sectors)
da15: quirks=0x8<4K>
da14: <ATA SAMSUNG MZ7KM960 GB32> Fixed Direct Access SPC-4 SCSI device
da14: Serial Number S2NKNCAH501524K
da14: 150.000MB/s transfers
da14: 1831420MB (3750748848 512 byte sectors)
da14: quirks=0x8<4K>
That’s a pair of 2TB drives. Searches show those drives were acquired around December 2024.
nda1 at nvme1 bus 0 scbus19 target 0 lun 1
nda1: <Samsung SSD 980 PRO with Heatsink 1TB 4B2QGXA7 S6WSNJ0T208743F>
nda1: Serial Number S6WSNJ0T208743F
nda1: nvme version 1.3
nda1: 953869MB (1953525168 512 byte sectors)
nda2 at nvme2 bus 0 scbus20 target 0 lun 1
nda2: <Samsung SSD 980 PRO with Heatsink 1TB 4B2QGXA7 S6WSNJ0T207774T>
nda2: Serial Number S6WSNJ0T207774T
nda2: nvme version 1.3
nda2: 953869MB (1953525168 512 byte sectors)
That’s a pair of 1TB devices. They were acquired in December 2022.
I could create a zpool of mirrored devices (a 2TB mirror plus a 1TB mirror, 3TB in all), right there.
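That pool would be a one-liner. A sketch only, with data02 as a placeholder name; in practice I would partition and label the devices by serial number first, as I have done for the other pools:

# data02 is a placeholder pool name; raw devices used here for brevity
zpool create data02 mirror da14 da15 mirror nda1 nda2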
But I’m not going to
I’ve been given a hardware fund, so I’m going to buy four new devices and dispose of the above four devices.
That would build me an 8TB zpool (4x 4TB, say, as two mirrors). That could completely replace the existing data01 with the added bonus of freeing up 8 drive cages. BONUS.
Or… I could buy 8x 2TB drives and update data01 to a 12TB zpool (raidz2 over 8x 2TB yields about 12TB usable), giving me an additional 6TB on that host. And leaving me with 4 unused devices in the host.
I think that’s the better solution.
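Growing data01 in place would be the standard replace-and-resilver loop. A sketch, using one existing label from the pool above and a hypothetical label for a new drive:

# let the pool grow once the last member has been swapped
zpool set autoexpand=on data01
# repeat for each of the eight drives, letting each resilver finish first
# (gpt/NEW_2TB_1 is a hypothetical label for a freshly partitioned new drive)
zpool replace data01 gpt/Y7P0A022TEVE gpt/NEW_2TB_1
zpool status data01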