Creating a new zpool for r730-01

Finally, it’s time to put together a new zpool for r730-01 (the storage information linked there is already out of date; once this new zpool is created, another post will replace it).

In this post:

  • FreeBSD 14.3

I just realized that my previous post on the zpools for this host revealed there is already a zpool with 2x 4TB NVMe devices: data02:

[20:17 r730-01 dvl ~] % zpool list data02
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data02  3.62T   703G  2.94T        -         -    32%    18%  1.00x    ONLINE  -

[20:19 r730-01 dvl ~] % zpool status data02
  pool: data02
 state: ONLINE
  scan: scrub repaired 0B in 00:12:09 with 0 errors on Thu Nov 20 04:22:47 2025
config:

	NAME                                 STATE     READ WRITE CKSUM
	data02                               ONLINE       0     0     0
	  mirror-0                           ONLINE       0     0     0
	    gpt/Samsung_990_S7U8NJ0Y716854P  ONLINE       0     0     0
	    gpt/Samsung_990_S7U8NJ0Y716801F  ONLINE       0     0     0

errors: No known data errors
[20:19 r730-01 dvl ~] % 

I know I have these NVMe devices:

[20:10 r730-01 dvl ~] % sudo nvmecontrol devlist
 nvme0: Samsung SSD 990 PRO 4TB
    nvme0ns1 (3815447MB)
 nvme1: Samsung SSD 990 PRO 4TB
    nvme1ns1 (3815447MB)
 nvme2: Samsung SSD 990 PRO 4TB
    nvme2ns1 (3815447MB)
 nvme3: Samsung SSD 990 PRO 4TB
    nvme3ns1 (3815447MB)
 nvme4: WDC WDS250G2B0C-00PXH0
    nvme4ns1 (238475MB)
 nvme5: Samsung SSD 990 EVO Plus 4TB
    nvme5ns1 (3815447MB)
 nvme6: Samsung SSD 980 PRO with Heatsink 1TB
    nvme6ns1 (953869MB)
 nvme7: Samsung SSD 980 PRO with Heatsink 1TB
    nvme7ns1 (953869MB)
 nvme8: Samsung SSD 990 EVO Plus 4TB
    nvme8ns1 (3815447MB)

Looking at gpart show -l (below) and comparing against the zpool status data02 output, I see nda5 and nda8 are part of that zpool. They correspond to nvme5 and nvme8 listed above.
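On the live system, glabel status | grep Samsung_990 would confirm that mapping directly. The same filter can also be run against the captured gpart show -l output, as a quick sanity check (the excerpt below is just the two relevant entries from that listing):

```shell
# Filter the captured `gpart show -l` excerpt for the data02 labels.
# On the live box, the equivalent is: glabel status | grep Samsung_990
gpart_excerpt='
=>        40  7814037088  nda5  GPT  (3.6T)
          40  7814036000     1  Samsung_990_S7U8NJ0Y716854P  (3.6T)

=>        34  7814037101  nda8  GPT  (3.6T)
          40  7814036000     1  Samsung_990_S7U8NJ0Y716801F  (3.6T)
'
printf '%s\n' "$gpart_excerpt" | grep Samsung_990
```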

I am sure the 1TB devices (nvme6 & nvme7) are unused.

That tells me I have these NVMe devices ready to use:

nvme0 nvme1 nvme2 nvme3 – that quantity matches up with recent purchases. I’m sure these four are on the new NVMe board I bought.

nvme4 is 250G – I can remove that – it’s not much use here.

I also have 2x 4TB SSDs: da12 and da13.

Here’s that gpart information I mentioned above. In the next section, I’ll assemble the list of free drives and think about what to do next.

[20:44 r730-01 dvl ~] % gpart show -l
=>       40  488397088  nda4  GPT  (233G)
         40       1024     1  gptboot0  (512K)
       1064        984        - free -  (492K)
       2048    4194304     2  swap0  (2.0G)
    4196352  484200448     3  zfs0  (231G)
  488396800        328        - free -  (164K)

=>        40  7814037088  nda5  GPT  (3.6T)
          40  7814036000     1  Samsung_990_S7U8NJ0Y716854P  (3.6T)
  7814036040        1088        - free -  (544K)

=>        34  7814037101  nda8  GPT  (3.6T)
          34           6        - free -  (3.0K)
          40  7814036000     1  Samsung_990_S7U8NJ0Y716801F  (3.6T)
  7814036040        1095        - free -  (548K)

=>       40  242255584  ada0  GPT  (116G)
         40       2008        - free -  (1.0M)
       2048     409600     1  efi0_20170718AA0000185556  (200M)
     411648   16777216     2  swap0  (8.0G)
   17188864  225066760     3  zfs0_20170718AA0000185556  (107G)

=>       40  242255584  ada1  GPT  (116G)
         40       2008        - free -  (1.0M)
       2048     409600     1  efi1_20170719AA1178164201  (200M)
     411648   16777216     2  swap1  (8.0G)
   17188864  225066760     3  zfs1_20170719AA1178164201  (107G)

=>        40  1562824288  da2  GPT  (745G)
          40  1562824288    1  Y7P0A033TEVE  (745G)

=>        40  1562824288  da1  GPT  (745G)
          40  1562824288    1  Y7P0A02MTEVE  (745G)

=>        40  1562824288  da3  GPT  (745G)
          40  1562824288    1  Y7P0A022TEVE  (745G)

=>        40  1562824288  da4  GPT  (745G)
          40  1562824288    1  Y7P0A02QTEVE  (745G)

=>        40  1562824288  da5  GPT  (745G)
          40  1562824288    1  Y7P0A02ATEVE  (745G)

=>        40  1562824288  da6  GPT  (745G)
          40  1562824288    1  Y7P0A02GTEVE  (745G)

=>        40  1562824288  da7  GPT  (745G)
          40  1562824288    1  Y7P0A02DTEVE  (745G)

=>        40  7814037088  da11  GPT  (3.6T)
          40  7814037088     1  WD_230151800473  (3.6T)

=>        40  7814037088  da8  GPT  (3.6T)
          40  7814037088    1  WD_230151801478  (3.6T)

=>        40  7814037088  da9  GPT  (3.6T)
          40  7814037088    1  WD_230151801284  (3.6T)

=>        40  7814037088  da10  GPT  (3.6T)
          40  7814037088     1  WD_22492H800867  (3.6T)

=>        40  1562824288  da0  GPT  (745G)
          40  1562824288    1  Y7P0A02LTEVE  (745G)

=>        34  7814037101  da12  GPT  (3.6T)
          34           6        - free -  (3.0K)
          40      532480     1  (null)  (260M)
      532520        2008        - free -  (1.0M)
      534528    16777216     2  (null)  (8.0G)
    17311744  1936213384     3  (null)  (923G)
  1953525128  5860512007        - free -  (2.7T)

[20:44 r730-01 dvl ~] % 

Free drives

The free drives are:

  • nvme0 nvme1 nvme2 nvme3 – 4TB NVMe
  • da12 and da13 – 4TB SSDs
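Before any of these go into a pool, they would get the same treatment as the existing data02 members: a GPT scheme and a single labeled freebsd-zfs partition. A sketch for one drive – the label name is a placeholder, and note that da12 still carries an old partition table (see the gpart output above) that would have to be destroyed first:

```shell
# Sketch only -- run once per drive; the -l label is a placeholder.
# da12 still has leftover boot/swap/zfs partitions, so clear it first:
gpart destroy -F da12
gpart create -s gpt da12
gpart add -t freebsd-zfs -l SSD_SERIAL_HERE da12
```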

There is an existing 4TB mirror, data02 – I could add the above 6 devices as three more mirrored pairs and get a 16TB pool of mirrors. Well, that might do me for a while.
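If I go that route, the expansion itself is a single command – a sketch, assuming each device has first been given a labeled freebsd-zfs partition (every gpt/ name below is a placeholder, since those partitions don’t exist yet):

```shell
# Sketch only: the gpt/ labels are placeholders for partitions
# that have not been created yet.
zpool add data02 \
  mirror gpt/990PRO_serial0 gpt/990PRO_serial1 \
  mirror gpt/990PRO_serial2 gpt/990PRO_serial3 \
  mirror gpt/SSD_da12       gpt/SSD_da13
```

Running the same command with zpool add -n first would display the resulting pool layout without committing to it.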

Expanding an existing zpool means the original devices start out fuller than the newly added ones. ZFS does not rebalance existing data, but new writes favor the emptier vdevs, so that imbalance levels out over time.
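The per-vdev fill is easy to watch – zpool list -v breaks the ALLOC/FREE/CAP columns out for each mirror:

```shell
# Per-vdev capacity: the CAP column shows how full each mirror is,
# so the old-vs-new imbalance is visible as it levels out.
zpool list -v data02
```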

This next bit is something I’m unlikely to do – it sounds like way more work than is required. That existing zpool (data02) contains only 703G of data. I could move that all over to something else (like those 1TB NVMe devices), then create a brand-new zpool and copy the data back in.
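For the record, that migration would be a recursive snapshot plus zfs send/receive in each direction – a sketch, where temppool is an assumed scratch pool built on the 1TB NVMe devices:

```shell
# Sketch only: temppool is an assumed scratch pool on the 1TB drives.
zfs snapshot -r data02@migrate
zfs send -R data02@migrate | zfs receive -F temppool/data02
# ...then destroy data02, recreate it with all eight devices, and reverse:
zfs send -R temppool/data02@migrate | zfs receive -F data02
```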

I might also want to consider smaller zpools… but really, should I? Why not one 16TB zpool based on 8x 4TB SSDs, versus two 8TB zpools, for example?
