slocum
This post has been replaced by a newer one. For future reference, this is the slocum server, which I use for various jails and services:
After the zfs send, I had system/foo/bar, but I really just wanted system/bar. How do I fix that? mv(1) was my first guess. I was wrong. What I wanted was zfs(8) rename. Let me explain. After doing my:
zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'
I, quite correctly, wound up with system/usr/local/bacula/FromKraken, not the system/usr/local/bacula which I wanted. Here's how I fixed that, but first, I took snapshots,
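The excerpt cuts off before the fix itself, so here is only a rough sketch of how such a rename can go; the snapshot name and the intermediate bacula.old dataset are invented for illustration, not taken from the post:

# zfs snapshot system/usr/local/bacula/FromKraken@before-rename
# zfs rename system/usr/local/bacula system/usr/local/bacula.old
# zfs rename system/usr/local/bacula.old/FromKraken system/usr/local/bacula

zfs rename carries child datasets along with the parent, which is why FromKraken shows up under bacula.old after the second command and can then be renamed into the spot the old dataset vacated.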
Moving zfs datasets / filesystems around
Today it finished. That ZFS send I started on Monday finally finished.
Recap: the original command was:
# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'
That was the initial send of the filesystem. But more data has been added to it since, so I take another snapshot, and send that.
Send what's accumulated since the previous send:
# zfs snapshot storage/compressed/bacula@2013.07.25:08:20
And now I send the incremental stream:
# zfs
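The excerpt is truncated right at that incremental command. Using the two snapshot names already quoted above, the usual form of an incremental send would be something like this (the mbuffer and ssh portions are simply carried over from the initial command):

# zfs send -i storage/compressed/bacula@2013.07.22:06:33 storage/compressed/bacula@2013.07.25:08:20 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'

With -i, only the blocks that changed between the two snapshots cross the wire, so this run is far quicker than the initial multi-terabyte send.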
zfs send | zfs receive; finally finished
That ZFS send from yesterday is taking more than 24 hours. Current status:
in @ 53.7 MB/s, out @ 53.7 MB/s, 3892 GB total, buffer 100% full
That's about 4TB out of nearly 13TB:
$ zpool list
NAME     SIZE   ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
storage  12.7T  12.0T  739G  94%  1.00x  ONLINE  -
Based on that progress, my prediction for completion is Thursday morning. On a related note, my pool needs to be upgraded:
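The Thursday prediction is easy to sanity-check with back-of-the-envelope arithmetic, assuming the pipe keeps running at the quoted 53.7 MB/s and ignoring any difference between the stream size and the on-disk allocation:

remaining ≈ 12.0 TB allocated − 3.9 TB sent ≈ 8.1 TB
8.1 TB ≈ 8.1 × 1024 × 1024 MB ≈ 8,500,000 MB
8,500,000 MB ÷ 53.7 MB/s ≈ 158,000 s ≈ 44 hours

Roughly two more days from that status line, which does indeed land on Thursday morning.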
zfs send taking a while…
Earlier today, I started a zfs send | zfs receive between the old file server and the new file server. It won't finish until early morning tomorrow.
[root@kraken ~]# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'
Pseudo-terminal will not be allocated because stdin is not a terminal.
in @ 54.9 MB/s, out @ 54.9 MB/s, 47.6 GB total, buffer 100% full
Why mbuffer? Because it shows me some
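The excerpt trails off, but the status line it quotes is the point: mbuffer prints live in/out rates and how full its buffer is, which a bare zfs send | ssh pipeline will not show. It can also be given an explicit block size and memory buffer to smooth out bursty producers and consumers; the flag values below are illustrative, not taken from the post:

# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer -s 128k -m 1G | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'

The "Pseudo-terminal will not be allocated" warning comes from the -t option to ssh: with stdin being a pipe rather than a terminal, ssh declines to allocate a pty, and the transfer proceeds anyway.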
zfs send from old file server
Following up on last week's comparison of 10 disks vs 8 disks on raidz2, it was suggested by Marie Helene that I try more HDD on the m/b. I'm here to report just that. I've put five HDD on the SUPERMICRO MBD-X8SIL-O Micro ATX Server m/b and five on the LSI SATA/SAS 9211-8i 6Gb/s. Here are the pg_restore times: 18m28.511s, 18m32.359s, 18m30.077s. An average of 18m 30s. This is the slowest of all
raidz2: 10 disks – half and half
Yesterday, I mentioned I wanted to compare raidz2 when using 10 disks versus using 8 disks. Along the way, I noticed that my ZFS-setup script needed a change to cope with mixed devices. Once I got past that hurdle, I started adding packages and setting up the system for some very simple tests. The test: my test of preference is loading a database, and then dumping it. Each pg_restore was done into a
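The excerpt is truncated, but the shape of the test is straightforward: restore a dump into a fresh database on the pool under test and time it. A minimal sketch, with the database and dump-file names invented for illustration:

$ createdb testdb
$ time pg_restore -d testdb bacula.dump

The wall-clock time reported by time is the figure being compared across the different disk layouts.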
raidz2: 10 disks vs 8 disks
The last system I set up with ZFS, I used a script. Today, when trying a system with different device types, I encountered a limitation of that script. I need to fix it. In the above script, NUM takes on the value of whatever digits appear in the string $I. Given that DISKS contains both ada0 and da0, two disks will get the same value for $NUM. That's a problem. I need
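The script itself isn't shown in the excerpt, but the collision is easy to demonstrate, and the simplest fix is to number the disks by position rather than by the digits in their names. A sketch, with variable names mine rather than necessarily those of the original script:

DISKS="ada0 da0 da1"

# broken: ada0 and da0 both yield NUM=0
for I in ${DISKS}; do
    NUM=$(echo ${I} | tr -cd '0-9')
    echo "labelling ${I} as disk${NUM}"
done

# fixed: a simple counter gives every disk a unique number
NUM=0
for I in ${DISKS}; do
    echo "labelling ${I} as disk${NUM}"
    NUM=$((NUM + 1))
done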
Problem with disk numbering in my ZFS creation script
The traditional thinking on ZFS is to build arrays with a power-of-2 number of data disks. That means 2, 4, 8, etc. Add one parity disk for raidz1 and that becomes 3, 5, or 9. For my choice of raidz2, with two parity disks, it would be 4, 6, and 10. In the system I'm building up now, I have an 8-port LSI SATA/SAS 9211-8i 6Gb/s card. I can add two more HDD by attaching them to
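As a concrete illustration, the 10-disk raidz2 vdev would be created in one go, with eight drives hanging off the LSI card and the two extra presumably on motherboard ports; the pool and device names here are invented, not from the post:

# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 ada0 ada1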
ZFS: raidz2 with 8 or 10 disks?
My gateway is dying. My file server is getting full. Over the past few weeks, I've been dealing with aging infrastructure. My development server started acting up and was showing signs of age. This aging desktop box contained an 80GB HDD, which means it's probably 6 or 7 years old. That's getting on in age. It's been replaced by a much gruntier server. After completing that work, I have to consider my gateway, which
Gateway getting old; File server getting full