Feb 6, 2013
 

Today, I’ll be benchmarking a Samsung SSD 840 PRO Series DXM03B0Q. I’ve had this drive for a couple of weeks, but I wasn’t ready for it until today.

This drive is not detected by FreeBSD 9.1-RELEASE and earlier. A relatively recent commit to 9.1-STABLE fixed the probe failure shown below.

When I booted with the SSD installed, I saw this in dmesg:

(aprobe3:ahcich5:0:0:0): SETFEATURES ENABLE SATA FEATURE. ACB: ef 10 00 00 00 40 00 00 00 00 05 00
(aprobe3:ahcich5:0:0:0): CAM status: ATA Status Error
(aprobe3:ahcich5:0:0:0): ATA status: 51 (DRDY SERV ERR), error: 04 (ABRT )
(aprobe3:ahcich5:0:0:0): RES: 51 04 00 00 00 40 00 00 00 00 00
(aprobe3:ahcich5:0:0:0): Retrying command
(aprobe3:ahcich5:0:0:0): SETFEATURES ENABLE SATA FEATURE. ACB: ef 10 00 00 00 40 00 00 00 00 05 00
(aprobe3:ahcich5:0:0:0): CAM status: ATA Status Error
(aprobe3:ahcich5:0:0:0): ATA status: 51 (DRDY SERV ERR), error: 04 (ABRT )
(aprobe3:ahcich5:0:0:0): RES: 51 04 00 00 00 40 00 00 00 00 00
(aprobe3:ahcich5:0:0:0): Error 5, Retries exhausted

Eventually, I found http://lists.freebsd.org/pipermail/freebsd-stable/2012-November/070813.html which let me know to patch and install a new kernel. Rather than upgrade to FreeBSD 9.1-STABLE, I manually applied that patch, recompiled the kernel, installed it, and rebooted. After that, the system saw the SSD.
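The rebuild itself followed the usual FreeBSD procedure; a hedged sketch (the patch filename here is a placeholder, and GENERIC stands in for whatever kernel config the machine actually uses):

```shell
# Sketch of applying the fix by hand. The patch path is hypothetical;
# KERNCONF must match the kernel configuration actually in use.
cd /usr/src
patch < /path/to/ahci-fix.patch       # apply the fix from the mailing-list thread
make buildkernel KERNCONF=GENERIC     # rebuild the kernel
make installkernel KERNCONF=GENERIC   # install it
shutdown -r now                       # reboot onto the new kernel
```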

Note that we have this SATA 3.x device connected to a SATA 2.x port. We will run one set of tests, then repeat them on a SATA 3.x port later.
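For reference, the link-rate ceilings work out as follows. SATA uses 8b/10b encoding, so ten bits cross the wire for every byte of payload:

```shell
# SATA payload ceiling: line rate (bits/s) divided by 10 (8b/10b encoding),
# then scaled to MB/s.
sata_ceiling_mb() { echo $(( $1 / 10 / 1000000 )); }

sata_ceiling_mb 3000000000   # SATA 2.x (3.0 Gb/s) -> 300 MB/s
sata_ceiling_mb 6000000000   # SATA 3.x (6.0 Gb/s) -> 600 MB/s
```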

The hardware

We are testing on the following hardware:

  1. motherboard – SUPERMICRO MBD-X9SCL+-F Micro ATX Server Motherboard LGA 1155 Intel C202 DDR3 1333 (manufacturer page) – $179.99
  2. CPU – Intel Xeon E3-1230 V2 Ivy Bridge 3.3GHz (3.7GHz Turbo) 4 x 256KB L2 Cache 8MB L3 Cache LGA 1155 69W Quad-Core Server Processor – $239.99
  3. RAM – Samsung DDR3-1600 8GB/1Gx72 ECC Samsung Chip Server Memory – M391B1G73BH0-CK0 (16GB for $145.40)
  4. HDD – SAMSUNG Spinpoint F4 HD322GJ/U 320GB 7200 RPM 16MB Cache SATA 3.0Gb/s 3.5″ – $59.99
  5. HDD – Western Digital AV-GP WD3200AVCS 320 GB 3.5″ – $70.99

The SSD being tested is a SAMSUNG 840 Pro Series MZ-7PD128BW 2.5″ 128GB: $139.99.

NOTE: these tests were run with a SATA III device on a SATA II port.

The drive being tested does not hold the base OS; the system boots from other disks.

The device

ada0: <Samsung SSD 840 PRO Series DXM03B0Q> ATA-9 SATA 3.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 122104MB (250069680 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad6

dd to raw device

[dan@bulldog:~/bin] $ sudo ./ddRawDevice
Password:
dd if=/dev/zero of=/dev/ada0 bs=32k count=30k
30720+0 records in
30720+0 records out
1006632960 bytes transferred in 4.961477 secs (202889785 bytes/sec)
dd if=/dev/zero of=/dev/ada0 bs=64k count=30k
30720+0 records in
30720+0 records out
2013265920 bytes transferred in 8.769450 secs (229577218 bytes/sec)
dd if=/dev/zero of=/dev/ada0 bs=128k count=30k
30720+0 records in
30720+0 records out
4026531840 bytes transferred in 16.543843 secs (243385524 bytes/sec)
dd if=/dev/zero of=/dev/ada0 bs=256k count=30k
30720+0 records in
30720+0 records out
8053063680 bytes transferred in 33.183034 secs (242686176 bytes/sec)
dd if=/dev/zero of=/dev/ada0 bs=1024k count=30k
30720+0 records in
30720+0 records out
32212254720 bytes transferred in 131.759578 secs (244477519 bytes/sec)
dd if=/dev/zero of=/dev/ada0 bs=2048k count=30k
30720+0 records in
30720+0 records out
64424509440 bytes transferred in 260.971294 secs (246864352 bytes/sec)
[dan@bulldog:~/bin] $

That’s between 193 and 235 MB/s.
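The ddRawDevice script itself isn’t shown above; a plausible sketch of such a wrapper, with the target and count made into parameters so it can be pointed at a scratch file instead of a raw device, would be:

```shell
#!/bin/sh
# Hypothetical reconstruction of a ddRawDevice-style wrapper; not the
# original script. Takes the target and block count as arguments so it
# is not hard-wired to /dev/ada0.
run_dd_series() {
    target=$1
    count=$2
    for bs in 32k 64k 128k 256k 1024k 2048k; do
        echo "dd if=/dev/zero of=${target} bs=${bs} count=${count}"
        dd if=/dev/zero of="${target}" bs="${bs}" count="${count}"
    done
}

# e.g. run_dd_series /dev/ada0 30k   # DESTRUCTIVE on a real device
```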

The diskinfo

Here’s diskinfo:

$ sudo diskinfo -tv /dev/ada0
Password:
/dev/ada0
        512             # sectorsize
        128035676160    # mediasize in bytes (119G)
        250069680       # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        248085          # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        S12PNEACB31368P # Disk ident.

Seek times:
        Full stroke:      250 iter in   0.019320 sec =    0.077 msec
        Half stroke:      250 iter in   0.019065 sec =    0.076 msec
        Quarter stroke:   500 iter in   0.040323 sec =    0.081 msec
        Short forward:    400 iter in   0.019906 sec =    0.050 msec
        Short backward:   400 iter in   0.044391 sec =    0.111 msec
        Seq outer:       2048 iter in   0.066595 sec =    0.033 msec
        Seq inner:       2048 iter in   0.069115 sec =    0.034 msec
Transfer rates:
        outside:       102400 kbytes in   0.406036 sec =   252194 kbytes/sec
        middle:        102400 kbytes in   0.404369 sec =   253234 kbytes/sec
        inside:        102400 kbytes in   0.405638 sec =   252442 kbytes/sec

phybs

Next, we run phybs:

 $ sudo ./phybs -rw -l 1024 /dev/ada0
   count    size  offset    step        msec     tps    kBps

  131072    1024       0    4096       23962    5469    5469
  131072    1024     512    4096       23860    5493    5493

   65536    2048       0    8192       11534    5681   11363
   65536    2048     512    8192       11937    5489   10979
   65536    2048    1024    8192       11555    5671   11343

   32768    4096       0   16384        8022    4084   16338
   32768    4096     512   16384        7858    4169   16678
   32768    4096    1024   16384        6163    5316   21264
   32768    4096    2048   16384        6267    5228   20912

   16384    8192       0   32768        4650    3522   28183
   16384    8192     512   32768        6333    2586   20695
   16384    8192    1024   32768        4685    3496   27973
   16384    8192    2048   32768        4701    3485   27880
   16384    8192    4096   32768        5783    2833   22665

dd to the filesystem

After partitioning and newfs’ing, we have:

 # gpart show ada0 ada0s1
=>       63  250069617  ada0  MBR  (119G)
         63        945        - free -  (472k)
       1008  250068609     1  freebsd  (119G)
  250069617         63        - free -  (31k)

=>        0  250068609  ada0s1  BSD  (119G)
          0  250068609       1  freebsd-ufs  (119G)

NOTE: this is a BSD scheme. We’ll try GPT later.

NOTE: this is not 4K aligned.
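For the later GPT attempt, aligned placement can be requested with `gpart add -a 4k ...`. A start LBA is 4K-aligned when it is a multiple of eight 512-byte sectors, which is easy to check:

```shell
# A 4096-byte boundary falls on every 8th 512-byte sector.
is_4k_aligned() {
    if [ $(( $1 % 8 )) -eq 0 ]; then echo aligned; else echo unaligned; fi
}

is_4k_aligned 63     # the classic MBR start sector -> unaligned
is_4k_aligned 2048   # a common GPT default start   -> aligned
```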

$ ~/bin/ddFileSystem
dd if=/dev/zero of=testing32 bs=32k count=300k
307200+0 records in
307200+0 records out
10066329600 bytes transferred in 37.328923 secs (269665685 bytes/sec)

dd if=/dev/zero of=testing64 bs=64k count=300k
307200+0 records in
307200+0 records out
20132659200 bytes transferred in 74.694153 secs (269534608 bytes/sec)

dd if=/dev/zero of=testing128 bs=128k count=300k
307200+0 records in
307200+0 records out
40265318400 bytes transferred in 148.647131 secs (270878544 bytes/sec)

NOTE: it was after this test that the SSD filled up. I then deleted the files and started the tests manually. I also changed the count on the dd command.

$ dd if=/dev/zero of=testing256 bs=256k count=100k
102400+0 records in
102400+0 records out
26843545600 bytes transferred in 98.546941 secs (272393494 bytes/sec)

$ rm testing256

$ dd if=/dev/zero of=testing512 bs=512k count=100k
102400+0 records in
102400+0 records out
53687091200 bytes transferred in 200.797722 secs (267369025 bytes/sec)

$ rm testing512

$ dd if=/dev/zero of=testing1024 bs=1024k count=100k
102400+0 records in
102400+0 records out
107374182400 bytes transferred in 397.175624 secs (270344341 bytes/sec)

$ rm testing1024

$ dd if=/dev/zero of=testing2048 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 396.455642 secs (270835299 bytes/sec)

That varies from roughly 255 to 260 MB/s.
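The MB/s figures quoted throughout are dd’s bytes/sec divided by 2^20 (binary megabytes). For example:

```shell
# Convert dd's bytes/sec figure to (binary) MB/s.
to_mb() { echo $(( $1 / 1048576 )); }

to_mb 269665685   # the 32k-block run above  -> 257
to_mb 272393494   # the 256k-block run above -> 259
```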

bonnie++

And finally, a quick bonnie++:

 $ bonnie++ -s 66000
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
bulldog.unix 66000M  1139  99 265004  29 117243  12  2255  99 278151  19  5283 117
Latency              7428us   65338us    2367ms    5264us   46124us   12161us
Version  1.97       ------Sequential Create------ --------Random Create--------
bulldog.unixathome. -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency             38808us      14us      16us   45464us      13us      22us
1.97,1.97,bulldog.unixathome.org,1,1360168969,66000M,,1139,99,265004,29,117243,12,2255,99,278151,19,5283,117,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,7428us,65338us,2367ms,5264us,46124us,12161us,38808us,14us,16us,45464us,13us,22us

Trying a different partitioning

For consistency with previous tests, I tried a different partition, but I didn’t expect it to make much of a difference.

[dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing32 bs=32k count=300k
307200+0 records in
307200+0 records out
10066329600 bytes transferred in 37.327970 secs (269672570 bytes/sec)

[dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing64 bs=64k count=300k
307200+0 records in
307200+0 records out
20132659200 bytes transferred in 74.641693 secs (269724043 bytes/sec)

[dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing128 bs=128k count=300k
307200+0 records in
307200+0 records out
40265318400 bytes transferred in 149.389222 secs (269532954 bytes/sec)

To avoid running out of space, I deleted the existing files.

Then I ran more tests:

$ rm testing*
$ dd if=/dev/zero of=testing256 bs=256k count=100k
102400+0 records in
102400+0 records out
26843545600 bytes transferred in 99.851185 secs (268835523 bytes/sec)

$ rm testing256
$ dd if=/dev/zero of=testing512 bs=512k count=100k
102400+0 records in
102400+0 records out
53687091200 bytes transferred in 197.633598 secs (271649617 bytes/sec)

$ rm testing512
$ dd if=/dev/zero of=testing1024 bs=1024k count=100k
102400+0 records in
102400+0 records out
107374182400 bytes transferred in 397.343478 secs (270230137 bytes/sec)

$ rm testing1024
$ dd if=/dev/zero of=testing1024 bs=1024k count=100k
102400+0 records in
102400+0 records out
107374182400 bytes transferred in 412.471925 secs (260318766 bytes/sec)

Those rates are around 248 to 259 MB/s.

And for completeness:

$ df -h /mnt
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/ada0     115G    100G    6.2G    94%    /mnt

  7 Responses to “Benchmarking the Samsung SSD 840 PRO Series DXM03B0Q”

  1. When you deleted files, did you have TRIM enabled? If you newfs and disable TRIM, are the results different once the disk has been filled?

  2. No, it was not enabled:

    $ sudo tunefs -p /dev/ada0
    Password:
    tunefs: POSIX.1e ACLs: (-a)                                disabled
    tunefs: NFSv4 ACLs: (-N)                                   disabled
    tunefs: MAC multilabel: (-l)                               disabled
    tunefs: soft updates: (-n)                                 enabled
    tunefs: soft update journaling: (-j)                       disabled
    tunefs: gjournal: (-J)                                     disabled
    tunefs: trim: (-t)                                         disabled
    tunefs: maximum blocks per file in a cylinder group: (-e)  4096
    tunefs: average file size: (-f)                            16384
    tunefs: average number of files in a directory: (-s)       64
    tunefs: minimum percentage of free space: (-m)             8%
    tunefs: optimization preference: (-o)                      time
    tunefs: volume label: (-L)                                 
    
  3. Now testing with:

    # newfs -Ut ada0
    Reduced frags per cylinder group from 160280 to 160272 to enlarge last cyl group
    /dev/ada0: 122104.3MB (250069680 sectors) block size 32768, fragment size 4096
    	using 196 cylinder groups of 626.06MB, 20034 blks, 80256 inodes.
    	with soft updates
    super-block backups (for fsck_ffs -b #) at:
 192, 1282368, 2564544, 3846720, 5128896, 6411072, ... etc
    
  4. Now we have:

    # tunefs -p /dev/ada0
    tunefs: POSIX.1e ACLs: (-a)                                disabled
    tunefs: NFSv4 ACLs: (-N)                                   disabled
    tunefs: MAC multilabel: (-l)                               disabled
    tunefs: soft updates: (-n)                                 enabled
    tunefs: soft update journaling: (-j)                       disabled
    tunefs: gjournal: (-J)                                     disabled
    tunefs: trim: (-t)                                         enabled
    tunefs: maximum blocks per file in a cylinder group: (-e)  4096
    tunefs: average file size: (-f)                            16384
    tunefs: average number of files in a directory: (-s)       64
    tunefs: minimum percentage of free space: (-m)             8%
    tunefs: optimization preference: (-o)                      time
    tunefs: volume label: (-L)                                 
    
  5. Oh, wait, I did that on ada0… oh, the whole disk… hmm.

  6. For what it’s worth, here is what I have:

    [dan@bulldog:/mnt/dan] $ ~/bin/ddFileSystem
    dd if=/dev/zero of=testing32 bs=32k count=300k
    307200+0 records in
    307200+0 records out
    10066329600 bytes transferred in 37.423035 secs (268987525 bytes/sec)
    dd if=/dev/zero of=testing64 bs=64k count=300k
    307200+0 records in
    307200+0 records out
    20132659200 bytes transferred in 74.933852 secs (268672418 bytes/sec)
    dd if=/dev/zero of=testing128 bs=128k count=300k
    307200+0 records in
    307200+0 records out
    40265318400 bytes transferred in 149.874833 secs (268659638 bytes/sec)
    dd if=/dev/zero of=testing256 bs=256k count=300k
    
    /mnt: write failed, filesystem is full
    dd: testing256: No space left on device
    166260+0 records in
    166259+0 records out
    43583799296 bytes transferred in 162.323960 secs (268498867 bytes/sec)
    dd if=/dev/zero of=testing1024 bs=1024k count=300k
    
    /mnt: write failed, filesystem is full
    dd: testing1024: No space left on device
    1+0 records in
    0+0 records out
    0 bytes transferred in 0.001512 secs (0 bytes/sec)
    dd if=/dev/zero of=testing2048 bs=2048k count=300k
    dd: testing2048: No space left on device
    1+0 records in
    0+0 records out
    0 bytes transferred in 0.001958 secs (0 bytes/sec)
    [dan@bulldog:/mnt/dan] $ ls
    testing1024 testing128  testing2048 testing256  testing32   testing64
    [dan@bulldog:/mnt/dan] $ ls -l testing*
    -rw-r--r--  1 dan  dan            0 Feb  7 13:32 testing1024
    -rw-r--r--  1 dan  dan  40265318400 Feb  7 13:29 testing128
    -rw-r--r--  1 dan  dan            0 Feb  7 13:32 testing2048
    -rw-r--r--  1 dan  dan  43583799296 Feb  7 13:32 testing256
    -rw-r--r--  1 dan  dan  10066329600 Feb  7 13:25 testing32
    -rw-r--r--  1 dan  dan  20132659200 Feb  7 13:26 testing64
    [dan@bulldog:/mnt/dan] $ rm testing*
    [dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing256 bs=256k count=300k
    
    307200+0 records in
    307200+0 records out
    80530636800 bytes transferred in 296.481387 secs (271621223 bytes/sec)
    [dan@bulldog:/mnt/dan] $ 
    [dan@bulldog:/mnt/dan] $ df -h
    Filesystem            Size    Used   Avail Capacity  Mounted on
    /dev/mirror/gm0s1a    3.9G    715M    2.9G    20%    /
    devfs                 1.0k    1.0k      0B   100%    /dev
    /dev/mirror/gm0s1d    3.9G    169M    3.4G     5%    /var
    /dev/mirror/gm0s1e    3.9G     32M    3.5G     1%    /tmp
    /dev/mirror/gm0s1f    242G     56G    166G    25%    /usr
    /dev/ada3s1           1.8T    181G    1.5T    11%    /mnt
    /dev/ada0             115G     75G     31G    71%    /mnt
    [dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing512 bs=512k count=100k
    
    /mnt: write failed, filesystem is full
    dd: testing512: No space left on device
    63930+0 records in
    63929+0 records out
    33517207552 bytes transferred in 124.612789 secs (268970848 bytes/sec)
    [dan@bulldog:/mnt/dan] $ rm testing
    testing256  testing512  
    [dan@bulldog:/mnt/dan] $ rm testing*
    [dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing512 bs=512k count=100k
    102400+0 records in
    102400+0 records out
    53687091200 bytes transferred in 204.713283 secs (262255045 bytes/sec)
    [dan@bulldog:/mnt/dan] $ rm testing*
    [dan@bulldog:/mnt/dan] $ dd if=/dev/zero of=testing2048 bs=2048k count=50k
    51200+0 records in
    51200+0 records out
    107374182400 bytes transferred in 399.846083 secs (268538788 bytes/sec)
    [dan@bulldog:/mnt/dan] $ df -h /mnt
    Filesystem    Size    Used   Avail Capacity  Mounted on
    /dev/ada0     115G    100G    6.2G    94%    /mnt
    [dan@bulldog:/mnt/dan] $ 
  7. Avoid using dd(1) for raw writes to SSD. The SSD will think those blocks are in use. Without TRIM to tell it otherwise, they will always be marked in use until overwritten. Enough of that, and performance will suffer, possibly badly. As far as I know, the only way to fix that is with a “secure erase” which marks every block on the drive as free. At present, there is no easy way to do a secure erase on FreeBSD.

    I would expect an 840 Pro to go faster than that. Even on SATA II, I’d hope for closer to the 300M/second limit. Certainly that drive is worth putting on a SATA III controller. For comparison, diskinfo says a 256G Plextor M3 does 405M/second raw reads on the SATA III controller on a Z68 motherboard.