ZFS

zpool upgrade to get lz4_compress

FreeBSD now has lz4 compression for ZFS, but you must upgrade your pools to get it. If you try before upgrading, you'll get:

# zfs create -o recordsize=8k -o primarycache=metadata -o compress=lz4 system/usr/home/dan/pgsql
cannot create 'system/usr/home/dan/pgsql': pool must be upgraded to set this property or value

So, let's upgrade:

# zpool upgrade -a
This system supports ZFS pool feature flags.
Successfully upgraded 'system' from version 28 to feature flags. Enabled the following features […]
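
Once the upgrade completes, you can confirm the feature is active and retry; a minimal sketch, reusing the pool and dataset names from above:

# Confirm the lz4_compress feature is now enabled on the pool
zpool get feature@lz4_compress system
# lz4 can also be turned on for an existing dataset
zfs set compress=lz4 system/usr/home/dan/pgsql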


ZFS system hits high load during scrub

My ZFS system hits a high load average every week during its scrub. Here is what top looks like after I pressed i and then S:

last pid: 42049;  load averages: 3.78, 3.26, 3.42    up 18+13:47:14  11:43:01
220 processes: 3 running, 216 sleeping, 1 waiting
CPU:  7.5% user,  0.0% nice, 12.8% system,  1.2% interrupt, 78.5% idle
Mem: 765M Active, 236M Inact, 8667M Wired, 17M Cache, 3284M Buf, 22G Free
Swap: 8192M Total, 2404K
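
On FreeBSD, that weekly scrub typically comes from periodic(8); a sketch of where to look, with the pool name and the legacy throttle sysctl as assumptions:

# See how the weekly scrub is scheduled (defaults live in /etc/defaults/periodic.conf)
grep daily_scrub_zfs /etc/defaults/periodic.conf
# Watch the scrub's progress on the pool
zpool status system
# The old ZFS scrub throttle, if the load needs taming (a tunable on older FreeBSD)
sysctl vfs.zfs.scrub_delay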


Moving zfs datasets / filesystems around

After the zfs send, I had system/foo/bar, but I really just wanted system/bar. How do I fix that? mv(1) was my first guess. I was wrong. What I wanted was zfs(8) rename. Let me explain. After doing my:

zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'

I quite correctly wound up with system/usr/local/bacula/FromKraken, not the system/usr/local/bacula which I wanted. Here's how I fixed that, but first, I took snapshots,
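
The general shape of the fix, using the simple names from the first sentence, with a snapshot first as a safety net:

# Safety net: snapshot the dataset (and any children) before moving it
zfs snapshot -r system/foo/bar@pre-rename
# Move the dataset within the pool's hierarchy
zfs rename system/foo/bar system/bar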


zfs send | zfs receive; finally finished

Today it finished. That ZFS send I started on Monday finally finished.

Recap

The original command was:

# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'

That was the initial send of the filesystem. But more data has been added to it since then, so I take another snapshot and send that.

Send what's accumulated since previous send

# zfs snapshot storage/compressed/bacula@2013.07.25:08:20

And now I send the incremental stream:

# zfs
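
The command the excerpt cuts off presumably follows the usual zfs send -i pattern; a sketch built from the two snapshots named above:

# Send only what changed between the two snapshots, through the same pipeline as the full send
zfs send -i storage/compressed/bacula@2013.07.22:06:33 \
  storage/compressed/bacula@2013.07.25:08:20 | mbuffer | \
  ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'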


zfs send taking a while…

That ZFS send from yesterday is taking more than 24 hours. Current status:

in @ 53.7 MB/s, out @ 53.7 MB/s, 3892 GB total, buffer 100% full

That's about 4 TB out of nearly 13 TB:

$ zpool list
NAME      SIZE   ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
storage  12.7T  12.0T   739G  94%  1.00x  ONLINE  -

Based on that progress, my prediction for completion is Thursday morning. On a related note, my pool needs to be upgraded:
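
The Thursday prediction is just the remaining data divided by the observed rate; a back-of-the-envelope check with bc(1), using the numbers above:

# (12.0T allocated = 12288 GB, minus 3892 GB sent) / (53.7 MB/s converted to GB/hour)
echo 'scale=1; (12288 - 3892) / (53.7 * 3600 / 1024)' | bc
# prints 44.4, i.e. hours remaining: roughly two more days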


zfs send from old file server

Earlier today, I started a zfs send | zfs receive between the old file server and the new file server. It won't finish until early tomorrow morning.

[root@kraken ~]# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'
Pseudo-terminal will not be allocated because stdin is not a terminal.
in @ 54.9 MB/s, out @ 54.9 MB/s, 47.6 GB total, buffer 100% full

Why mbuffer? Because it shows me some
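
mbuffer's appeal is exactly that status line; a typical invocation (the -m/-s values are my assumptions, not necessarily what was used here):

# -m sets the buffer size, -s the block size; mbuffer reports in/out rates and buffer fill as it runs
zfs send storage/compressed/bacula@2013.07.22:06:33 | \
  mbuffer -m 1G -s 128k | \
  ssh dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'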


raidz2: 10 disks – half and half

Following up on last week's comparison of 10 disks vs 8 disks on raidz2, Marie Helene suggested I try more HDDs on the m/b. I'm here to report just that. I've put five HDDs on the SUPERMICRO MBD-X8SIL-O Micro ATX Server m/b and five on the LSI SATA/SAS 9211-8i 6Gb/s. Here are the pg_restore times:

18m28.511s
18m32.359s
18m30.077s

An average of 18m 30s. This is the slowest of all
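
For context, the half-and-half layout amounts to one raidz2 vdev split across the two controllers; a sketch, assuming a single vdev and hypothetical device names:

# Five disks on the motherboard SATA ports (ada*) and five on the LSI HBA (da*), all in one raidz2 vdev
zpool create storage raidz2 ada0 ada1 ada2 ada3 ada4 da0 da1 da2 da3 da4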


raidz2: 10 disks vs 8 disks

Yesterday, I mentioned I wanted to compare raidz2 when using 10 disks versus using 8 disks. Along the way, I noticed that my ZFS-setup script needed a change to cope with mixed devices. Once I got past that hurdle, I started adding packages and setting up the system for some very simple tests.

The test

My test of preference is loading a database, and then dumping it. Each pg_restore was done into a
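
The timing runs behind these tests are straightforward to reproduce; a sketch, where the database and dump file names are my inventions:

# Start from an empty database each run so the restores are comparable
dropdb bench 2>/dev/null; createdb bench
# Time the restore; this is the figure compared across pool layouts
time pg_restore -d bench /path/to/backup.dump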

