Tonight’s goals:
- install a second HDD for the OS
- get gmirror running
- install the remaining five 2TB HDDs
- add the two RAID cards
After adding in the second HDD for the OS, dmesg shows this:
ad4: 76319MB <Seagate ST380815AS 4.AAB> at ata2-master UDMA100 SATA 3Gb/s
acd0: DVDR <TSSTcorp CDDVDW SH-S223C/SB01> at ata3-master UDMA100 SATA 1.5Gb/s
ad8: 152587MB <WDC WD1600AAJS-75M0A0 02.03E02> at ata4-master UDMA100 SATA 3Gb/s
The following shows me I’m booting from ad4, so ad8 must be my newly added HDD.
$ cat /etc/fstab
# Device        Mountpoint  FStype  Options    Dump  Pass#
/dev/ad4s1b     none        swap    sw         0     0
/dev/ad4s1a     /           ufs     rw         1     1
/dev/ad4s1e     /tmp        ufs     rw         2     2
/dev/ad4s1f     /usr        ufs     rw         2     2
/dev/ad4s1d     /var        ufs     rw         2     2
/dev/acd0       /cdrom      cd9660  ro,noauto  0     0
Clearly, I want to mirror from ad4 to ad8.
Reading my previous work on gmirror, I found I had to do this:
# gmirror label -v -b round-robin gm0 /dev/ad4
gmirror: Can't store metadata on /dev/ad4: Operation not permitted.
Oops: the metadata can't be written while the disk is in use, so I need to boot from my DVD and use Fixit (read the above URL for details).
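For reference, a rough sketch of the Fixit-side steps, reconstructed from my notes (the earlier article has the authoritative sequence; the gm0s1 device names below are the ones that exist once the mirror is created):

```
# From the Fixit shell the disk is not mounted, so the label succeeds:
gmirror label -v -b round-robin gm0 /dev/ad4

# Load the mirror module at boot (after mounting the root filesystem):
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# Finally, point /etc/fstab at /dev/mirror/gm0s1a, gm0s1d, etc.
# instead of /dev/ad4s1a, ad4s1d, etc.
```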
After following the instructions properly, and rebooting, I found this:
# mount
/dev/mirror/gm0s1a on / (ufs, local, soft-updates)
devfs on /dev (devfs, local, multilabel)
/dev/mirror/gm0s1e on /tmp (ufs, local, soft-updates)
/dev/mirror/gm0s1f on /usr (ufs, local, soft-updates)
/dev/mirror/gm0s1d on /var (ufs, local, soft-updates)
Then I added the extra HDD into the mirror:
# gmirror insert gm0 /dev/ad8
The status is:
# gmirror list
Geom name: gm0
State: DEGRADED
Components: 2
Balance: round-robin
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 1
ID: 3685004504
Providers:
1. Name: mirror/gm0
   Mediasize: 80026361344 (75G)
   Sectorsize: 512
   Mode: r6w5e14
Consumers:
1. Name: ad4
   Mediasize: 80026361856 (75G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 3456884141
2. Name: ad8
   Mediasize: 160000000000 (149G)
   Sectorsize: 512
   Mode: r1w1e1
   State: SYNCHRONIZING
   Priority: 0
   Flags: DIRTY, SYNCHRONIZING
   GenID: 0
   SyncID: 1
   Synchronized: 1%
   ID: 880733105
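Rather than re-running gmirror list by hand, a small watch loop along these lines (an untested sketch; gmirror status reports the same SYNCHRONIZING state and percentage) would poll until the rebuild finishes:

```
# Hypothetical watch loop: print status once a minute until synchronized
while gmirror status gm0 | grep -q SYNCHRONIZING; do
    gmirror status gm0
    sleep 60
done
```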
I waited until it was 100%, then powered down and added in the remaining hardware.
After installing the five 2TB drives and rebooting, dmesg showed me:
ad8: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata4-master UDMA100 SATA 3Gb/s
ad10: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata5-master UDMA100 SATA 3Gb/s
ad12: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata6-master UDMA100 SATA 3Gb/s
ad14: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata7-master UDMA100 SATA 3Gb/s
ad16: 1907729MB <Hitachi HDS722020ALA330 JKAOA28A> at ata8-master UDMA100 SATA 3Gb/s
ad20: 76319MB <Seagate ST380815AS 4.AAB> at ata10-master UDMA100 SATA 3Gb/s
acd0: DVDR <TSSTcorp CDDVDW SH-S223C/SB01> at ata11-master UDMA100 SATA 1.5Gb/s
ad24: 152587MB <WDC WD1600AAJS-75M0A0 02.03E02> at ata12-master UDMA100 SATA 3Gb/s
Interesting… with gmirror, even though the drives changed device numbers, the system still booted correctly. I am impressed. This shows the new device numbers:
$ gmirror list
Geom name: gm0
State: COMPLETE
Components: 2
Balance: round-robin
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 2
ID: 3685004504
Providers:
1. Name: mirror/gm0
   Mediasize: 80026361344 (75G)
   Sectorsize: 512
   Mode: r5w5e14
Consumers:
1. Name: ad20
   Mediasize: 80026361856 (75G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 2
   ID: 3456884141
2. Name: ad24
   Mediasize: 160000000000 (149G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 2
   ID: 880733105
At this point, I checked my power consumption. It was 147W.
To enable and then start ZFS:
# echo 'zfs_enable="YES"' >> /etc/rc.conf
# /etc/rc.d/zfs start
Which produced these log entries:
Mar 5 21:43:12 kraken kernel: ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present;
Mar 5 21:43:12 kraken kernel: to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf.
Mar 5 21:43:12 kraken kernel: ZFS filesystem version 3
Mar 5 21:43:12 kraken kernel: ZFS storage pool version 14
I find the prefetch message interesting: I have 4GB of RAM, so presumably the kernel sees slightly less than 4GB once its own reservations are subtracted.
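If you want prefetch on anyway, the kernel message above spells out the knob; this is just that hint transcribed into /boot/loader.conf:

```
# /boot/loader.conf -- force ZFS prefetch on despite the 4GB heuristic
vfs.zfs.prefetch_disable=0
```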
To create a ZFS raidz array with parity 1:
# zpool create storage raidz1 ad8 ad10 ad12 ad14 ad16
Note the new mount point:
# mount
/dev/mirror/gm0s1a on / (ufs, local, soft-updates)
devfs on /dev (devfs, local, multilabel)
/dev/mirror/gm0s1e on /tmp (ufs, local, soft-updates)
/dev/mirror/gm0s1f on /usr (ufs, local, soft-updates)
/dev/mirror/gm0s1d on /var (ufs, local, soft-updates)
storage on /storage (zfs, local)
And the space:
# df
Filesystem          1K-blocks     Used      Avail Capacity  Mounted on
/dev/mirror/gm0s1a    1012974   506268     425670    54%    /
devfs                       1        1          0   100%    /dev
/dev/mirror/gm0s1e    4058062       58    3733360     0%    /tmp
/dev/mirror/gm0s1f   60456546  3565214   52054810     6%    /usr
/dev/mirror/gm0s1d    4058062   107242    3626176     3%    /var
storage            7651768192        0 7651768192     0%    /storage
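That figure lines up with the raidz1 arithmetic: five drives with single parity leaves four drives' worth of data, and 4 × 2TB (decimal) comes to roughly 7.3TiB before ZFS metadata overhead, which is in the same ballpark as the df number. A quick sanity check:

```shell
# raidz1 usable capacity: one drive's worth of parity across the set
drives=5
bytes_per_drive=2000000000000              # 2 TB per drive, decimal
bytes_per_tib=1099511627776                # 1 TiB, binary

raw_bytes=$(( (drives - 1) * bytes_per_drive ))
tib=$(( raw_bytes / bytes_per_tib ))       # integer TiB
echo "$raw_bytes bytes, about $tib TiB usable before overhead"
```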
To add daily status messages:
echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf
To see a list:
# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
storage  98.3K  7.13T  28.8K  /storage
And the history:
# zpool history
History for 'storage':
2010-03-05.21:51:07 zpool create storage raidz1 ad8 ad10 ad12 ad14 ad16
And this is interesting to try, even if only on an empty system:
# zpool scrub storage
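Scrubbing is worth doing on a schedule, not just once; a crontab entry along these lines (the monthly schedule is an arbitrary choice) would take care of it:

```
# /etc/crontab -- hypothetical monthly scrub, 3am on the 1st
0  3  1  *  *  root  /sbin/zpool scrub storage
```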
Status:
$ zpool status storage
  pool: storage
 state: ONLINE
 scrub: scrub completed after 0h0m with 0 errors on Fri Mar  5 22:01:35 2010
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad16    ONLINE       0     0     0

errors: No known data errors
Even more information:
$ zpool get all storage
NAME     PROPERTY       VALUE                  SOURCE
storage  size           9.06T                  -
storage  used           156K                   -
storage  available      9.06T                  -
storage  capacity       0%                     -
storage  altroot        -                      default
storage  health         ONLINE                 -
storage  guid           14112753923970899783   -
storage  version        14                     default
storage  bootfs         -                      default
storage  delegation     on                     default
storage  autoreplace    off                    default
storage  cachefile      -                      default
storage  failmode       wait                   default
storage  listsnapshots  off                    default
I must say, it was a bit of a non-event. It took more time to install the HDDs than it did to configure ZFS.
Just the way it should be, if you ask me. :)
“I must say, it was a bit of a non-event. It took more time to install the HDD than it did to configure ZFS.”
I think so, too.
kean@web.de
Thanks Kean. I appreciate the help.
You left a link to a clothing store in your URL. It’s a shame about that. Spammers are not welcome here at all.