Feb 7, 2013

Following on from yesterday’s benchmark, I’m adding in some ZFS tests.

All hardware is the same.

Of note:

$ zdb | grep ashift
            ashift: 9

$ sudo diskinfo -v ada0
Password:
ada0
        512             # sectorsize
        128035676160    # mediasize in bytes (119G)
        250069680       # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        248085          # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        S12PNEACB31368P # Disk ident.

$ sudo smartctl -i /dev/ada0 | grep Sector
Sector Size:      512 bytes logical/physical
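The ashift value is the base-2 logarithm of the smallest block ZFS will write to the vdev, so ashift: 9 means 512-byte blocks, matching the 512-byte sectors reported above. A quick sketch of that arithmetic:

```shell
# ashift is log2 of the pool's minimum write size.
ashift=9
echo "$((1 << ashift)) bytes"   # 512 bytes, matching the reported sector size
```

Had the drive reported 4096-byte sectors (or been suspected of lying about them), the usual trick at the time was to create the pool on a gnop(8) provider made with -S 4096 to force ashift=12; that doesn't seem necessary here.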

Create the zpool

[root@bulldog:/home/dan] # zpool create -m /mnt example /dev/ada0
[root@bulldog:/home/dan] # dmesg | grep ada0
ada0 at ahcich1 bus 0 scbus1 target 0 lun 0
ada0: <Samsung SSD 840 PRO Series DXM03B0Q> ATA-9 SATA 3.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 122104MB (250069680 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad6

[root@bulldog:/home/dan] # mkdir /mnt/dan
[root@bulldog:/home/dan] # chown dan:dan /mnt/dan
[root@bulldog:/home/dan] # exit
root@bulldog:/home/dan # exit
[dan@bulldog:~] $
[dan@bulldog:~] $ cd /mnt/dan

dd before filling the SSD

[dan@bulldog:/mnt/dan] $ ~/bin/ddFileSystem
dd if=/dev/zero of=testing32 bs=32k count=300k
307200+0 records in
307200+0 records out
10066329600 bytes transferred in 35.546824 secs (283185063 bytes/sec)
dd if=/dev/zero of=testing64 bs=64k count=300k
307200+0 records in
307200+0 records out
20132659200 bytes transferred in 80.051449 secs (251496499 bytes/sec)
dd if=/dev/zero of=testing128 bs=128k count=300k
307200+0 records in
307200+0 records out
40265318400 bytes transferred in 171.402642 secs (234916556 bytes/sec)
dd if=/dev/zero of=testing256 bs=256k count=300k
dd: testing256: No space left on device
210945+0 records in
210944+1 records out
55297835008 bytes transferred in 226.678619 secs (243948173 bytes/sec)
dd if=/dev/zero of=testing1024 bs=1024k count=300k
dd: testing1024: No space left on device
dd if=/dev/zero of=testing2048 bs=2048k count=300k
dd: testing2048: No space left on device
[dan@bulldog:/mnt/dan] $ ls -lh
total 122830455
-rw-r--r--  1 dan  dan    37G Feb  7 18:11 testing128
-rw-r--r--  1 dan  dan    51G Feb  7 18:15 testing256
-rw-r--r--  1 dan  dan   9.4G Feb  7 18:07 testing32
-rw-r--r--  1 dan  dan    18G Feb  7 18:08 testing64
[dan@bulldog:/mnt/dan] $ df -h /mnt
Filesystem    Size    Used   Avail Capacity  Mounted on
example       117G    117G      0B   100%    /mnt
[dan@bulldog:/mnt/dan] $ rm testing*
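The ddFileSystem script itself isn't shown anywhere, but judging from the commands it echoes, it's presumably just a loop over block sizes. A hypothetical reconstruction that prints the commands rather than running them (each real run writes tens of GB):

```shell
#!/bin/sh
# Hypothetical reconstruction of ~/bin/ddFileSystem, inferred from
# its output; this version only prints the dd commands it would run.
BLOCKSIZES="32 64 128 256 1024 2048"

for bs in ${BLOCKSIZES}
do
  echo "dd if=/dev/zero of=testing${bs} bs=${bs}k count=300k"
done
```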

dd after SSD filled

I’m running things in this order because, more or less, that’s what happened in the previous tests.

[dan@bulldog:/mnt/dan] $ rm testing256
[dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing256 bs=256k count=300k
307200+0 records in
307200+0 records out
80530636800 bytes transferred in 316.159015 secs (254715611 bytes/sec)

[dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing512 bs=512k count=100k
dd: testing512: No space left on device
86273+0 records in
86272+0 records out
45231374336 bytes transferred in 186.070643 secs (243087107 bytes/sec)

[dan@bulldog:/mnt/dan] $ rm testing*
[dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing512 bs=512k count=100k
102400+0 records in
102400+0 records out
53687091200 bytes transferred in 210.601851 secs (254922219 bytes/sec)
[dan@bulldog:/mnt/dan] $ rm testing*

[dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing1024 bs=1024k count=100k
102400+0 records in
102400+0 records out
107374182400 bytes transferred in 422.826334 secs (253943933 bytes/sec)
[dan@bulldog:/mnt/dan] $ df -h /mnt
Filesystem    Size    Used   Avail Capacity  Mounted on
example       117G    100G     17G    85%    /mnt

Some smaller blocksizes

These are really interesting. Results range between 206MB/s and 567MB/s:

[dan@bulldog:/mnt/dan] $ ls
[dan@bulldog:/mnt/dan] $ cat ~/bin/ddFileSystem4k
#!/bin/sh

COUNTS="100 200 400 800 1600 3200"

for count in ${COUNTS}
do
  CMD="dd if=/dev/zero of=testing${count} bs=4k count=${count}k"
  echo '$' ${CMD}
  `${CMD}`
done
[dan@bulldog:/mnt/dan] $ ~/bin/ddFileSystem4k
$ dd if=/dev/zero of=testing100 bs=4k count=100k
102400+0 records in
102400+0 records out
419430400 bytes transferred in 0.705185 secs (594780714 bytes/sec)
$ dd if=/dev/zero of=testing200 bs=4k count=200k
204800+0 records in
204800+0 records out
838860800 bytes transferred in 2.350452 secs (356893406 bytes/sec)
$ dd if=/dev/zero of=testing400 bs=4k count=400k
409600+0 records in
409600+0 records out
1677721600 bytes transferred in 6.259230 secs (268039609 bytes/sec)
$ dd if=/dev/zero of=testing800 bs=4k count=800k
819200+0 records in
819200+0 records out
3355443200 bytes transferred in 15.466264 secs (216952407 bytes/sec)
$ dd if=/dev/zero of=testing1600 bs=4k count=1600k
1638400+0 records in
1638400+0 records out
6710886400 bytes transferred in 28.426775 secs (236076247 bytes/sec)
$ dd if=/dev/zero of=testing3200 bs=4k count=3200k
3276800+0 records in
3276800+0 records out
13421772800 bytes transferred in 53.636045 secs (250237929 bytes/sec)

[dan@bulldog:/mnt/dan] $ df -h /mnt
Filesystem    Size    Used   Avail Capacity  Mounted on
example       117G     24G     92G    21%    /mnt
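Each of those 4k runs writes count × 1024 blocks of 4096 bytes, so the byte counts dd reports can be checked directly:

```shell
# Expected file sizes for ddFileSystem4k: 4096 bytes * count * 1024 blocks.
COUNTS="100 200 400 800 1600 3200"
for count in ${COUNTS}
do
  echo "testing${count}: $((4096 * count * 1024)) bytes"
done
```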

Again, with deletes between the files

[dan@bulldog:/mnt/dan] $ ls
[dan@bulldog:/mnt/dan] $ cat ~/bin/ddFileSystem4k
#!/bin/sh

COUNTS="100 200 400 800 1600 3200"

for count in ${COUNTS}
do
  CMD="dd if=/dev/zero of=testing${count} bs=4k count=${count}k"
  echo '$' ${CMD}
  `${CMD}`
  rm testing${count}
done
[dan@bulldog:/mnt/dan] $ ~/bin/ddFileSystem4k
$ dd if=/dev/zero of=testing100 bs=4k count=100k
102400+0 records in
102400+0 records out
419430400 bytes transferred in 0.722075 secs (580868204 bytes/sec)
$ dd if=/dev/zero of=testing200 bs=4k count=200k
204800+0 records in
204800+0 records out
838860800 bytes transferred in 3.779956 secs (221923424 bytes/sec)
$ dd if=/dev/zero of=testing400 bs=4k count=400k
409600+0 records in
409600+0 records out
1677721600 bytes transferred in 5.745623 secs (291999939 bytes/sec)
$ dd if=/dev/zero of=testing800 bs=4k count=800k
819200+0 records in
819200+0 records out
3355443200 bytes transferred in 11.859253 secs (282938829 bytes/sec)
$ dd if=/dev/zero of=testing1600 bs=4k count=1600k
1638400+0 records in
1638400+0 records out
6710886400 bytes transferred in 27.687562 secs (242379100 bytes/sec)
$ dd if=/dev/zero of=testing3200 bs=4k count=3200k
3276800+0 records in
3276800+0 records out
13421772800 bytes transferred in 55.870071 secs (240231891 bytes/sec)
[dan@bulldog:/mnt/dan] $

Now let’s try it with a partition, not the whole SSD

In this test, we’re doing something slightly different. I’ve heard that not using the whole SSD can result in improvements, so let’s try leaving 20% of it unused.

# gpart create -s GPT ada0
ada0 created
# gpart add -b 1000 -a 4k -t freebsd-zfs -s 95G ada0
ada0p1 added
# gpart show ada0
=>       34  250069613  ada0  GPT  (119G)
         34        966        - free -  (483k)
       1000  199229440     1  freebsd-zfs  (95G)
  199230440   50839207        - free -  (24G)
# zpool create -m /mnt example /dev/ada0p1
# zpool status
  pool: example
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        example     ONLINE       0     0     0
          ada0p1    ONLINE       0     0     0

errors: No known data errors

# mkdir /mnt/dan
# chown dan:dan /mnt/dan
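The gpart figures above confirm the 20% target: sector counts times 512 bytes give the partition and free-tail sizes. A sketch using the numbers from gpart show:

```shell
# Sizes from gpart show, in 512-byte sectors.
part_sectors=199229440   # the freebsd-zfs partition
free_sectors=50839207    # the tail left unused for over-provisioning
disk_sectors=250069680   # the whole device
echo "$((part_sectors * 512 / 1073741824))G partition"
echo "$((free_sectors * 100 / disk_sectors))% of the disk left unused"
```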

Doing the dd again

Testing with deletes first (not sure if order matters).

[dan@bulldog:~] $ cd /mnt/dan
[dan@bulldog:/mnt/dan] $ ls -l
total 0
[dan@bulldog:/mnt/dan] $ cat ~/bin/ddFileSystem4k
#!/bin/sh

COUNTS="100 200 400 800 1600 3200"

for count in ${COUNTS}
do
  CMD="dd if=/dev/zero of=testing${count} bs=4k count=${count}k"
  echo '$' ${CMD}
  `${CMD}`
  rm testing${count}
done
[dan@bulldog:/mnt/dan] $ ~/bin/ddFileSystem4k
$ dd if=/dev/zero of=testing100 bs=4k count=100k
102400+0 records in
102400+0 records out
419430400 bytes transferred in 0.723318 secs (579870099 bytes/sec)
$ dd if=/dev/zero of=testing200 bs=4k count=200k
204800+0 records in
204800+0 records out
838860800 bytes transferred in 1.542371 secs (543877434 bytes/sec)
$ dd if=/dev/zero of=testing400 bs=4k count=400k
409600+0 records in
409600+0 records out
1677721600 bytes transferred in 5.606008 secs (299272064 bytes/sec)
$ dd if=/dev/zero of=testing800 bs=4k count=800k
819200+0 records in
819200+0 records out
3355443200 bytes transferred in 12.277068 secs (273309819 bytes/sec)
$ dd if=/dev/zero of=testing1600 bs=4k count=1600k
1638400+0 records in
1638400+0 records out
6710886400 bytes transferred in 24.819769 secs (270384723 bytes/sec)
$ dd if=/dev/zero of=testing3200 bs=4k count=3200k
3276800+0 records in
3276800+0 records out
13421772800 bytes transferred in 55.058098 secs (243774728 bytes/sec)
[dan@bulldog:/mnt/dan] $

Testing without deleting the previous file

In this test, we do not remove the previous file when dd’ing to the next one.

[dan@bulldog:/mnt/dan] $ ls
[dan@bulldog:/mnt/dan] $ cat ~/bin/ddFileSystem4k
#!/bin/sh

COUNTS="100 200 400 800 1600 3200"

for count in ${COUNTS}
do
  CMD="dd if=/dev/zero of=testing${count} bs=4k count=${count}k"
  echo '$' ${CMD}
  `${CMD}`
done
[dan@bulldog:/mnt/dan] $ ~/bin/ddFileSystem4k
$ dd if=/dev/zero of=testing100 bs=4k count=100k
102400+0 records in
102400+0 records out
419430400 bytes transferred in 0.704095 secs (595701528 bytes/sec)
$ dd if=/dev/zero of=testing200 bs=4k count=200k
204800+0 records in
204800+0 records out
838860800 bytes transferred in 1.452384 secs (577574984 bytes/sec)
$ dd if=/dev/zero of=testing400 bs=4k count=400k
409600+0 records in
409600+0 records out
1677721600 bytes transferred in 6.010437 secs (279134711 bytes/sec)
$ dd if=/dev/zero of=testing800 bs=4k count=800k
819200+0 records in
819200+0 records out
3355443200 bytes transferred in 17.065366 secs (196622984 bytes/sec)
$ dd if=/dev/zero of=testing1600 bs=4k count=1600k
1638400+0 records in
1638400+0 records out
6710886400 bytes transferred in 25.609126 secs (262050582 bytes/sec)
$ dd if=/dev/zero of=testing3200 bs=4k count=3200k
3276800+0 records in
3276800+0 records out
13421772800 bytes transferred in 56.259421 secs (238569338 bytes/sec)
[dan@bulldog:/mnt/dan] $

And finally, a bonnie++

[dan@bulldog:/mnt/dan] $ bonnie++ -s 66000
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bulldog.unix 66000M   184  99 235742  27 127575  16   502  99 278025  12  1594  42
Latency             43866us    2071ms    1815ms   18794us     117ms     227ms
Version  1.97       ------Sequential Create------ --------Random Create--------
bulldog.unixathome. -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency             10677us      76us      87us   21396us      27us      51us
1.97,1.97,bulldog.unixathome.org,1,1360307806,66000M,,184,99,235742,27,127575,16,502,99,278025,12,1594,42,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,43866us,2071ms,1815ms,18794us,117ms,227ms,10677us,76us,87us,21396us,27us,51us
[dan@bulldog:/mnt/dan] $
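The comma-separated line at the end is bonnie++'s machine-readable output (-s 66000 sets a 66000 MB working set, conventionally about twice RAM so caching doesn't dominate). In this run's CSV, the sequential block write and block read figures (K/sec) land in fields 10 and 16, as cross-checked against the table above:

```shell
# Pull block write/read K/sec out of bonnie++'s CSV line
# (first 19 fields of the full line above).
csv="1.97,1.97,bulldog.unixathome.org,1,1360307806,66000M,,184,99,235742,27,127575,16,502,99,278025,12,1594,42"
echo "${csv}" | awk -F, '{ print "write:", $10, "K/sec  read:", $16, "K/sec" }'
```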

  3 Responses to “Benchmarking the Samsung SSD 840 PRO Series DXM03B0Q – ZFS”