Mar 05 2010

I thought I’d run some simple tests, based on a tuning thread I found:

# dd if=/dev/urandom of=./file1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 14.283206 secs (75175127 bytes/sec)
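
That works out to about 75 MB/s (71.7 MiB/s) sustained: 1 GiB written in roughly 14.3 seconds.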

And while the above was running:

# zpool iostat 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     1.25G  9.06T      0      3     95   402K
storage     1.25G  9.06T      0      0      0      0
storage     1.25G  9.06T      0      0      0      0
storage     1.25G  9.06T      0      0      0      0
storage     1.25G  9.06T      0      0      0      0
storage     1.25G  9.06T      0      0      0      0
storage     1.25G  9.06T      0      0      0      0
storage     1.25G  9.06T      0      0      0      0
storage     1.40G  9.06T      0    207    102  24.0M
storage     1.88G  9.06T      0    658    102  79.6M
storage     1.88G  9.06T      0    225      0  28.2M
storage     1.88G  9.06T      0    369      0  46.1M
storage     1.88G  9.06T      0      0      0      0
storage     1.88G  9.06T      0      0      0      0
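
The bursty pattern above (several intervals of nothing, then a few hundred write ops at once) is, as far as I understand it, ZFS batching dirty data into transaction groups in memory and flushing them to disk periodically, rather than streaming each write out immediately. On FreeBSD the flush interval should be visible via sysctl, if your build exposes it:

# sysctl vfs.zfs.txg.timeout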

  One Response to “ZFS – some initial testing”

  1. I know this is an old post, but it should be noted that this is not a correct way to measure filesystem write performance: /dev/urandom is often the bottleneck, not your filesystem.

    For some peace of mind, I just rechecked this on two basic machines (one running a current FreeBSD, the other a Mac), and raw disk write speed is anywhere between 4x and 10x faster than /dev/urandom can deliver on those machines.
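
    A quick way to see where that ceiling sits on a given box is to time /dev/urandom against /dev/null, taking the disk out of the picture entirely:

    # dd if=/dev/urandom of=/dev/null bs=1m count=1024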

    It’s better to use /dev/zero or a script that writes semi-random data at high speed, like these 5 lines of C code:

    #include <unistd.h> /* for write() */
    int main(int argc, char **argv) {
        unsigned char a[65003];
        int i;
        for (i = 0; i < 65003; i++)  /* fill the buffer with a cheap semi-random pattern */
            a[i] = (i * 31 + i / 256) % 256;
        while (1)                    /* stream the buffer to stdout forever */
            write(1, a, 65003);
    }
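
    For instance (the file and binary names here are just placeholders), compile it and feed it into the same dd test:

    # cc -O2 -o fastsrc fastsrc.c
    # ./fastsrc | dd of=./file1 bs=1m count=1024

    One caveat: dd counts reads, and reads from a pipe can return short, so go by the byte total dd reports rather than the record count.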