zfs send from old file server

Earlier today, I started a zfs send | zfs receive between the old file server and the new file server. It won’t finish until early morning tomorrow.

[root@kraken ~]# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'
Pseudo-terminal will not be allocated because stdin is not a terminal.
in @ 54.9 MB/s, out @ 54.9 MB/s, 47.6 GB total, buffer 100% full

Why mbuffer? Because it shows me progress as the transfer runs. And I’ve used it before.

iostat

This is over a 1 Gb home network. 55 MB/s is about 440 Mbit/s, which falls somewhat short of the 1000 Mbit/s line rate. I suspect we’re hitting the limit of what the source HDDs can supply, or of what the destination HDDs can absorb.
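The conversion above is just bytes-to-bits arithmetic, which can be sanity-checked in the shell:

```shell
# 55 MB/s * 8 bits per byte = 440 Mbit/s, well under the
# ~1000 Mbit/s nominal rate of gigabit Ethernet
mbytes_per_sec=55
mbits_per_sec=$((mbytes_per_sec * 8))
echo "${mbits_per_sec} Mbit/s"
```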

Here is some iostat on the send system:

$ zpool iostat 10 20
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     11.9T   802G    162    141  13.1M  4.73M
storage     11.9T   802G    390     52  43.9M   131K
storage     11.9T   802G    394     45  49.2M   191K
storage     11.9T   802G    407     76  48.8M   272K
storage     11.9T   802G    392     49  48.9M   128K
storage     11.9T   802G    394     64  48.9M   247K
storage     11.9T   802G    393     56  49.0M   226K
storage     11.9T   802G    396     47  48.6M   123K
storage     11.9T   802G    387     46  48.3M   195K
storage     11.9T   802G    406     62  49.0M   166K
storage     11.9T   802G    399     52  34.6M   138K
storage     11.9T   802G    232     69  14.0M   272K
storage     11.9T   802G    397     64  37.5M   233K
storage     11.9T   802G    392     61  42.2M   164K
storage     11.9T   802G    402     42  42.0M   107K
storage     11.9T   802G    392     58  48.6M   150K
storage     11.9T   802G    390     62  48.5M   232K
storage     11.9T   802G    398     61  49.3M   151K
storage     11.9T   802G    400     54  49.2M   213K
storage     11.9T   802G    395     44  49.2M   110K
And on the receive system for the same time period:

$ zpool iostat 10 20
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
system       217G  26.8T      0     83  3.92K  7.60M
system       218G  26.8T      0    392  12.0K  38.1M
system       218G  26.8T      0    415  12.0K  48.5M
system       219G  26.8T      0    422      0  49.1M
system       220G  26.8T      0    421      0  48.9M
system       220G  26.8T      0    456  11.2K  49.6M
system       221G  26.8T      0    434      0  49.1M
system       222G  26.8T      0    423      0  48.8M
system       222G  26.8T      0    413      0  48.3M
system       223G  26.8T      0    421      0  48.9M
system       223G  26.8T      0    439      0  40.3M
system       224G  26.8T      0    337      0  22.6M
system       224G  26.8T      0    358      0  27.0M
system       225G  26.8T      0    421      0  44.4M
system       225G  26.8T      0    416      0  39.7M
system       226G  26.8T      0    430      0  47.6M
system       226G  26.8T      0    441      0  48.7M
system       227G  26.8T      0    439      0  49.6M
system       228G  26.8T      0    439      0  49.6M
system       228G  26.8T      0    422      0  49.0M

Background on the send/receive

But this took a while to get started.

The first thing I did was allow my ssh login on the destination system to have full control over the fileset into which this was being received:

# zfs create system/usr/local/bacula
# zfs set compression=lzjb system/usr/local/bacula
# zfs allow -l dan create,destroy,snapshot,rollback,clone,promote,rename,mount,send,receive,quota,reservation  system/usr/local/bacula

I’m sure that’s overkill on the permissions, but it’s what I’m going with.
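If you ever want to review or trim a delegation like that, `zfs allow` with just the dataset name prints what is currently delegated, and `zfs unallow` revokes permissions. A sketch (the `destroy` revocation is only an example, not something the setup above requires):

```shell
# Show the permissions currently delegated on the dataset
zfs allow system/usr/local/bacula

# Revoke a permission that turned out to be unnecessary,
# e.g. destroy (example only; pick what fits your setup)
zfs unallow -l dan destroy system/usr/local/bacula
```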

Based on the Download time calculator, this transfer should take about 18 hours.
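The same estimate works as back-of-the-envelope shell arithmetic. The dataset size here is a hypothetical ~3.5 TB, picked only to illustrate the calculation at the observed ~55 MB/s:

```shell
# Rough ETA: dataset size divided by observed throughput.
# Assumes roughly 3.5 TB to move (hypothetical figure).
size_mb=$((3500 * 1024))      # ~3.5 TB expressed in MB
rate_mb=55                    # observed throughput, MB/s
hours=$((size_mb / rate_mb / 3600))
echo "about ${hours} hours"
```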
