In this post, I am creating a 4-drive ZFS pool consisting of two mirrors.
In this post:
- I’m booting from mfsBSD 12.1
- I'm using ZFS
Creating partitions
On FreeBSD, it is conventional to create partitions. It is not strictly necessary, but partitioning lets me avoid using all the space, in case a future replacement drive is slightly smaller than these drives.
NOTE: I have not confirmed this, but it may no longer be a problem: since about FreeBSD 10, ZFS has been able to handle replacement drives that are not an exact size match.
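Not shown in the original post, but if you need to confirm which da devices are the new, empty drives before writing partition tables, the standard FreeBSD tools will list them. The drives in this post are da2 through da5.

root@mfsbsd:~ # sysctl kern.disks
root@mfsbsd:~ # camcontrol devlist

sysctl kern.disks prints the name of every disk the kernel sees, and camcontrol devlist adds the model and the bus each drive is attached to.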
root@mfsbsd:~ # gpart create -s gpt da2
da2 created
root@mfsbsd:~ # gpart create -s gpt da3
da3 created
root@mfsbsd:~ # gpart create -s gpt da4
da4 created
root@mfsbsd:~ # gpart create -s gpt da5
da5 created
root@mfsbsd:~ #
Creating freebsd-zfs partitions
First, create a partition using all the available space, just to learn the full size. I will then recreate it slightly smaller.
root@mfsbsd:~ # gpart add -t freebsd-zfs -a 4K da2
da2p1 added
root@mfsbsd:~ # gpart show da2
=>        40  1953525088  da2  GPT  (932G)
          40  1953525088    1  freebsd-zfs  (932G)

root@mfsbsd:~ #
Now that I know the full size, I delete that partition and create a slightly smaller one, with a label. A size of 1953520000 sectors leaves about 2.5 MB (5088 sectors) unused at the end of the drive:
root@mfsbsd:~ # gpart delete -i 1 da2
da2p1 deleted
root@mfsbsd:~ # gpart add -t freebsd-zfs -a 4K -s 1953520000 -l S59VNS0N809087J_S00 da2
da2p1 added
root@mfsbsd:~ # gpart show da2
=>        40  1953525088  da2  GPT  (932G)
          40  1953520000    1  freebsd-zfs  (932G)
  1953520040        5088       - free -  (2.5M)

root@mfsbsd:~ # gpart show -l da2
=>        40  1953525088  da2  GPT  (932G)
          40  1953520000    1  S59VNS0N809087J_S00  (932G)
  1953520040        5088       - free -  (2.5M)

root@mfsbsd:~ #
Note the label: it is the drive's serial number followed by the bay designation (S00 means drive bay 0).
I like having the serial number in the label. It helps to correctly identify the drive if it needs to be replaced. The drive bay helps you find it. The serial number helps you confirm you have the correct drive.
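If you do not have the serial numbers handy, you can read them from the drives themselves. This is not from the original post, but diskinfo on FreeBSD reports the serial number as the disk ident:

root@mfsbsd:~ # diskinfo -v da2 | grep ident

smartctl -i /dev/da2 will usually report the same value as the Serial Number.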
Repeat for the other drives:
root@mfsbsd:~ # gpart add -t freebsd-zfs -a 4K -s 1953520000 -l S59VNJ0N631973D_S01 da3
da3p1 added
root@mfsbsd:~ # gpart add -t freebsd-zfs -a 4K -s 1953520000 -l S5B3NDFN807383E_S02 da4
da4p1 added
root@mfsbsd:~ # gpart add -t freebsd-zfs -a 4K -s 1953520000 -l S5B3NDFN807386P_S03 da5
da5p1 added
root@mfsbsd:~ # gpart show -l da2 da3 da4 da5
=>        40  1953525088  da2  GPT  (932G)
          40  1953520000    1  S59VNS0N809087J_S00  (932G)
  1953520040        5088       - free -  (2.5M)

=>        40  1953525088  da3  GPT  (932G)
          40  1953520000    1  S59VNJ0N631973D_S01  (932G)
  1953520040        5088       - free -  (2.5M)

=>        40  1953525088  da4  GPT  (932G)
          40  1953520000    1  S5B3NDFN807383E_S02  (932G)
  1953520040        5088       - free -  (2.5M)

=>        40  1953525088  da5  GPT  (932G)
          40  1953520000    1  S5B3NDFN807386P_S03  (932G)
  1953520040        5088       - free -  (2.5M)

root@mfsbsd:~ #
Here are the corresponding devices from which the zpool will be created:
root@mfsbsd:~ # ls -l /dev/gpt/S*
crw-r-----  1 root  operator  0xd6 Oct 11 00:14 /dev/gpt/S59VNJ0N631973D_S01
crw-r-----  1 root  operator  0xd2 Oct 11 00:13 /dev/gpt/S59VNS0N809087J_S00
crw-r-----  1 root  operator  0xda Oct 11 00:16 /dev/gpt/S5B3NDFN807383E_S02
crw-r-----  1 root  operator  0xde Oct 11 00:16 /dev/gpt/S5B3NDFN807386P_S03
root@mfsbsd:~ #
Creating the zpool
Here, we create a zpool consisting of two vdevs, each of which is a mirror of two drives.
root@mfsbsd:~ # time zpool create data01 \
  mirror /dev/gpt/S59VNS0N809087J_S00 /dev/gpt/S59VNJ0N631973D_S01 \
  mirror /dev/gpt/S5B3NDFN807383E_S02 /dev/gpt/S5B3NDFN807386P_S03
0.000u 0.027s 0:07.30 0.2%      56+144k 128+0io 0pf+0w
root@mfsbsd:~ #
root@mfsbsd:~ # zpool status
  pool: data01
 state: ONLINE
  scan: none requested
config:

        NAME                         STATE     READ WRITE CKSUM
        data01                       ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            gpt/S59VNS0N809087J_S00  ONLINE       0     0     0
            gpt/S59VNJ0N631973D_S01  ONLINE       0     0     0
          mirror-1                   ONLINE       0     0     0
            gpt/S5B3NDFN807383E_S02  ONLINE       0     0     0
            gpt/S5B3NDFN807386P_S03  ONLINE       0     0     0

errors: No known data errors
root@mfsbsd:~ #
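Not part of the original post, but at this point I would normally confirm the capacity and create a first dataset. The dataset name "test" below is just an example:

root@mfsbsd:~ # zpool list data01
root@mfsbsd:~ # zfs create data01/test
root@mfsbsd:~ # zfs list -r data01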
smartctl
I also ran some smartctl tests before I started the above. The results are in this gist: https://gist.github.com/dlangille/ac4fe6a1422b346a7196d2b996daa363
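The actual output is in the gist; as a rough sketch of the kind of test I mean, starting a short self-test and reading the results per drive looks like this. Depending on the controller, smartctl may need a device type flag such as -d sat:

root@mfsbsd:~ # smartctl -t short /dev/da2
root@mfsbsd:~ # smartctl -a /dev/da2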
Hah, from your title I almost thought this would be a performance comparison between ZFS striping vs mirrors… Good post!
I can see why you reached that conclusion. :)
That ambiguity was not intentional.
I have two disks in a zfs pool striped and I now have two more disks the same and want to add them to the pool, but want them to mirror the existing stripe. Is there a way to do that without starting over from scratch?
Yes, I am sure that can be done. I have a stripe over two mirrors, which is still four drives, but different from what you want/have:
Can you post your existing zpool status please?
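For reference (this is general ZFS behaviour, not something posted in the thread above): a pool built from two single-drive vdevs can be turned into a stripe over two mirrors by attaching one new drive to each existing drive. The pool and device names below are placeholders:

root@mfsbsd:~ # zpool attach mypool gpt/EXISTING_DISK_0 gpt/NEW_DISK_0
root@mfsbsd:~ # zpool attach mypool gpt/EXISTING_DISK_1 gpt/NEW_DISK_1
root@mfsbsd:~ # zpool status mypool

Each zpool attach converts that single-drive vdev into a two-way mirror and starts a resilver; zpool status shows the progress.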