Wyze uses this tuning on his 2GB NAS box.
# Dynamically adjust write limit based on previous txg commits to attempt
# to maintain a 2-second commit time
vfs.zfs.txg.synctime_ms="2000"

# Set a minimum write limit of 50MB, which is slightly more than the default
vfs.zfs.write_limit_min="50M"

# The default vfs.zfs.write_limit_shift appears to be "3", which on a system
# with 2GB RAM such as this one results in a write_limit_max of 256MB.
# That is appropriate, so we're not going to change it here.

# Commit async writes after 1 minute if the write limit hasn't been met
vfs.zfs.txg.timeout="60"

# Set maximum outstanding vdev I/O to "1" to prevent parallel reads/writes,
# since we only have lowly 5400RPM SATA drives in this thing
vfs.zfs.vdev.min_pending="1"
vfs.zfs.vdev.max_pending="1"

# Increase the vdev cache size a little. The vdev cache holds pre-fetched
# metadata, so a larger one helps with scrub performance, among other
# metadata-intensive workloads.
vfs.zfs.vdev.cache.size="16M"
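As a quick sanity check of the write_limit_shift comment above, this shell arithmetic reproduces the 256MB figure. The formula (physical RAM shifted right by the shift value) comes from the comments; the variable names here are just for illustration:

```shell
# write_limit_max = physical RAM >> vfs.zfs.write_limit_shift.
# With 2GB of RAM and the default shift of 3 (i.e. 1/8th of RAM):
ram_bytes=$((2 * 1024 * 1024 * 1024))
echo $((ram_bytes >> 3))   # 268435456 bytes, i.e. 256MB
```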
Given that he has only 2GB, you could set the write limits higher than these if your system has more RAM.
Wyze also said:
- I left vfs.zfs.write_limit_shift at the default, and that's the value you'd want to tune. I can't actually remember exactly what the shift value does, but IIRC it's a binary shift of the system memory value, so a 3-bit shift is 1/8th of RAM? Something like that.
- btw, bumping vfs.zfs.vdev.cache.size made a noticeable difference in my scrub time. It could also help you anywhere you use a lot of metadata, such as big directory tree operations.
- double the max raw rate you get from dd and make that your write_limit target. That would match an ideal 2-second commit time for a purely sequential transaction.
- so 320-360MB would be an appropriate write_limit max if you’re getting 160-180MB/s
- and you could bump the minimum MUCH higher as well, maybe to 100MB, but don't go nuts.
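Wyze's sizing rule above reduces to simple arithmetic: at a sustained sequential rate of R MB/s, a 2-second txg can absorb about 2R MB of dirty data. A sketch assuming a hypothetical dd result of 170MB/s (the dd command and test path are placeholders, not from the original):

```shell
# Hypothetical throughput measurement (adjust path/size for your pool):
#   dd if=/dev/zero of=/tank/ddtest bs=1M count=4096
rate_mb_s=170     # assumed sequential write rate reported by dd, in MB/s
txg_seconds=2     # target commit time, per vfs.zfs.txg.synctime_ms
echo $((rate_mb_s * txg_seconds))   # 340 -> a ~340MB write_limit target
```

That lands squarely in the 320-360MB range quoted above for a 160-180MB/s pool.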
Here’s an interesting post that discusses the probability of losing your data after one HDD fails in a raidz arrangement. It compares RAID-10 arrangements to raidz3. Interesting math. It makes me want to set up a raidz3 environment with 11x3TB HDDs.