I have a Seagate ST2000DM001 sitting in a server. I’ve run some simple tests, and the results are below. The tests were run under FreeBSD 9.1 on the hardware listed below.
The hardware
We are testing on the following hardware:
- motherboard – SUPERMICRO MBD-H8SGL-O ATX Server Motherboard (Supermicro link): $224.99
- CPU – AMD Opteron 6128 Magny-Cours 2.0GHz 8 x 512KB L2 Cache 12MB L3 Cache Socket G34 115W 8-Core Server : $284.99
- RAM – Kingston 8GB 240-Pin DDR3 SDRAM ECC Registered DDR3 1600 Server Memory : 4 x $64.99 = $259.96
- PSU – PC Power and Cooling Silencer MK III 600W power supply : $99.99
- SATA card – LSI Internal SATA/SAS 9211-8i 6Gb/s PCI-Express 2.0 RAID Controller Card, Kit (LSI page): $319.99
- HDD for ZFS – Seagate Barracuda ST2000DM001 2TB 7200 RPM 64MB : 8 x $109.99 = $879.92
The drives being tested are not used for the base OS.
The devices
The LSI card:
mps0: <LSI SAS2008> port 0x8000-0x80ff mem 0xfde3c000-0xfde3ffff,0xfde40000-0xfde7ffff irq 28 at device 0.0 on pci1
mps0: Firmware: 14.00.01.00, Driver: 14.00.00.01-fbsd
mps0: IOCCapabilities: 185c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,IR>
The drives:
da0 at mps0 bus 0 scbus0 target 2 lun 0
da0: <ATA ST2000DM001-1CH1 CC24> Fixed Direct Access SCSI-6 device
da0: 600.000MB/s transfers
da0: Command Queueing enabled
da0: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
da1 at mps0 bus 0 scbus0 target 3 lun 0
da1: <ATA ST2000DM001-1CH1 CC24> Fixed Direct Access SCSI-6 device
da1: 600.000MB/s transfers
da1: Command Queueing enabled
da1: 1907729MB (3907029168 512 byte sectors: 255H 63S/T 243201C)
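If you want to confirm these attachments at any point, camcontrol will list everything CAM has probed and can query an individual drive. This wasn’t captured during my tests; it’s just a standard FreeBSD command shown here as an example:

# camcontrol devlist
# camcontrol inquiry da0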
dd to raw device
# dd if=/dev/zero of=/dev/da0 bs=32k count=300000
300000+0 records in
300000+0 records out
9830400000 bytes transferred in 59.699293 secs (164665267 bytes/sec)
That’s 157 MB/s.
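Throughout this post I convert dd’s bytes-per-second figure to MB/s by dividing by 1024 twice (binary megabytes). For example:

$ echo "164665267 / 1024 / 1024" | bc
157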
The diskinfo
Here’s diskinfo:
# diskinfo -tv /dev/da0
/dev/da0
        512             # sectorsize
        2000398934016   # mediasize in bytes (1.8T)
        3907029168      # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        243201          # Cylinders according to firmware.
        255             # Heads according to firmware.
        63              # Sectors according to firmware.
        Z1E22F0N        # Disk ident.

Seek times:
        Full stroke:      250 iter in   5.527386 sec =   22.110 msec
        Half stroke:      250 iter in   4.143420 sec =   16.574 msec
        Quarter stroke:   500 iter in   4.223888 sec =    8.448 msec
        Short forward:    400 iter in   1.546921 sec =    3.867 msec
        Short backward:   400 iter in   1.973161 sec =    4.933 msec
        Seq outer:       2048 iter in   0.144946 sec =    0.071 msec
        Seq inner:       2048 iter in   0.151692 sec =    0.074 msec
Transfer rates:
        outside:       102400 kbytes in   0.570161 sec =   179598 kbytes/sec
        middle:        102400 kbytes in   0.606521 sec =   168832 kbytes/sec
        inside:        102400 kbytes in   1.051001 sec =    97431 kbytes/sec
dd to the filesystem
After partitioning and newfs’ing, we have:
# gpart show da0
=>        63  3907029105  da0  MBR  (1.8T)
          63         945       - free -  (472k)
        1008  3907028097    1  freebsd  (1.8T)
  3907029105          63       - free -  (31k)

# gpart show da1
=>        63  3907029105  da1  MBR  (1.8T)
          63         945       - free -  (472k)
        1008  3907028097    1  freebsd  (1.8T)
  3907029105          63       - free -  (31k)
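The partitioning and newfs steps themselves weren’t captured. Based on the gpart output above, they would have been something close to this sketch (the exact flags used weren’t recorded):

# gpart create -s MBR da0
# gpart add -t freebsd da0
# newfs /dev/da0s1
# mount /dev/da0s1 /mnt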
Next, the dd:
# dd if=/dev/zero of=/mnt/testing bs=32k count=100000
100000+0 records in
100000+0 records out
3276800000 bytes transferred in 19.477729 secs (168233164 bytes/sec)
# dd if=/dev/zero of=/mnt/testing bs=32k count=300000
300000+0 records in
300000+0 records out
9830400000 bytes transferred in 52.314242 secs (187910588 bytes/sec)
# dd if=/dev/zero of=/mnt/testing3 bs=32k count=300000
300000+0 records in
300000+0 records out
9830400000 bytes transferred in 53.842450 secs (182577130 bytes/sec)
# dd if=/dev/zero of=/mnt/testing4 bs=32k count=3000000
3000000+0 records in
3000000+0 records out
98304000000 bytes transferred in 542.925812 secs (181063412 bytes/sec)
That’s 160-179 MB/s.
dd with larger blocksize
Let’s try this with a larger blocksize, as recommended in the FreeBSD Forums.
$ dd if=/dev/zero of=testing5 bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 56.400315 secs (116197932 bytes/sec)
$ dd if=/dev/zero of=testing6 bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 55.277136 secs (118558965 bytes/sec)
$ dd if=/dev/zero of=testing7 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 115.974746 secs (113017708 bytes/sec)
$ dd if=/dev/zero of=testing7 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 117.837246 secs (111231384 bytes/sec)
That’s 106-113 MB/s. That’s interesting, and quite a drop in throughput.
I tried 32k again and got 105 MB/s. I then tried the other HDD (/mnt/dan) with 32k and got 160-163 MB/s. 64k and 128k gave similar results:
$ dd if=/dev/zero of=testing8 bs=32k count=100000
100000+0 records in
100000+0 records out
3276800000 bytes transferred in 29.725645 secs (110234782 bytes/sec)
$ cd /mnt/dan
$ ls
$ dd if=/dev/zero of=testing8 bs=32k count=100000
100000+0 records in
100000+0 records out
3276800000 bytes transferred in 19.141412 secs (171189043 bytes/sec)
$ dd if=/dev/zero of=testing9 bs=32k count=100000
100000+0 records in
100000+0 records out
3276800000 bytes transferred in 19.522588 secs (167846599 bytes/sec)
$ dd if=/dev/zero of=testing10 bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 38.222232 secs (171460422 bytes/sec)
$ dd if=/dev/zero of=testing11 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 77.925399 secs (168201898 bytes/sec)
$ dd if=/dev/zero of=testing12 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 78.633474 secs (166687281 bytes/sec)
$ dd if=/dev/zero of=testing13 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 79.988733 secs (163863078 bytes/sec)
$ # and now back to the other drive
$ cd -
/mnt2/dan
$ dd if=/dev/zero of=testing13 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 127.045874 secs (103169033 bytes/sec)
OK, what’s up with that? Perhaps it’s a matter of free space?
$ df -h /mnt /mnt2
Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/da0s1    1.8T    158G    1.5T    10%    /mnt
/dev/da1s1    1.8T     51G    1.6T     3%    /mnt2
Let’s try again after clearing up some space:
$ ls
testing13 testing14 testing5 testing6 testing7 testing8
$ rm testing13 testing14 testing5 testing6 testing7 testing8
$ dd if=/dev/zero of=testing16 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 108.984240 secs (120266930 bytes/sec)
$ dd if=/dev/zero of=testing15 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 97.468432 secs (134476361 bytes/sec)
$ dd if=/dev/zero of=testing17 bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 55.928663 secs (117177841 bytes/sec)
$ dd if=/dev/zero of=testing18 bs=128k count=400000
400000+0 records in
400000+0 records out
52428800000 bytes transferred in 413.915994 secs (126665316 bytes/sec)
$ dd if=/dev/zero of=testing18 bs=128k count=4000000
4000000+0 records in
4000000+0 records out
524288000000 bytes transferred in 3487.452112 secs (150335541 bytes/sec)
In this section, I write with bigger and bigger block sizes. I am still not sure whether the available space affects throughput.
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing18 bs=128k count=400000
^C
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing18 bs=128k count=400000
400000+0 records in
400000+0 records out
52428800000 bytes transferred in 413.915994 secs (126665316 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing18 bs=128k count=4000000
4000000+0 records in
4000000+0 records out
524288000000 bytes transferred in 3487.452112 secs (150335541 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing18 bs=32k count=100000
100000+0 records in
100000+0 records out
3276800000 bytes transferred in 17.901840 secs (183042637 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing18 bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 63.549471 secs (103125957 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing18 bs=128k count=100000
100000+0 records in
100000+0 records out
13107200000 bytes transferred in 121.469473 secs (107905301 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing18 bs=32k count=100000
100000+0 records in
100000+0 records out
3276800000 bytes transferred in 31.217638 secs (104966301 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing18 bs=32k count=400000
400000+0 records in
400000+0 records out
13107200000 bytes transferred in 121.142230 secs (108196787 bytes/sec)
[dan@heckler:/mnt2/dan] $ cd -
/mnt/dan
[dan@heckler:/mnt/dan] $ cd -
/mnt2/dan
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing19 bs=32k count=400000
400000+0 records in
400000+0 records out
13107200000 bytes transferred in 129.736373 secs (101029493 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing20 bs=32k count=100000
100000+0 records in
100000+0 records out
3276800000 bytes transferred in 34.118417 secs (96041970 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing20 bs=256k count=100000
100000+0 records in
100000+0 records out
26214400000 bytes transferred in 162.792326 secs (161029703 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing20 bs=256k count=100000
^C
[dan@heckler:/mnt2/dan] $ ^C
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing21 bs=256k count=100000
100000+0 records in
100000+0 records out
26214400000 bytes transferred in 163.577103 secs (160257148 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing22 bs=512k count=100000
100000+0 records in
100000+0 records out
52428800000 bytes transferred in 274.906209 secs (190715227 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing23 bs=512k count=100000
100000+0 records in
100000+0 records out
52428800000 bytes transferred in 286.437554 secs (183037452 bytes/sec)
[dan@heckler:/mnt2/dan] $ dd if=/dev/zero of=testing23 bs=1024k count=100000
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 588.276280 secs (178245501 bytes/sec)
Fastest write was about 181 MB/s.
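Rather than hand-typing each run, a short loop keeps the total amount of data constant while the block size varies. This is only a sketch; the file names and the 12800 MB total are arbitrary:

#!/bin/sh
# write the same total amount of data at several block sizes,
# printing only dd's summary line for each run
total=12800                        # total size in MB per run
for bs in 32 64 128 256 512; do
  count=$(( total * 1024 / bs ))
  dd if=/dev/zero of=testing-${bs}k bs=${bs}k count=$count 2>&1 | tail -1
  rm testing-${bs}k
done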
bonnie++
And finally, a quick bonnie++:
$ bonnie++ -s 66000
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
heckler.unix 66000M   534  99 161135  31 46990  54  1014  96 167062  25 214.9   9
Latency             16062us     531ms    7654ms     108ms     876ms    3342ms
Version  1.97       ------Sequential Create------ --------Random Create--------
heckler.unixathome. -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 30150  65 +++++ +++ +++++ +++
Latency             62048us      36us      46us     169ms      34us      46us
1.97,1.97,heckler.unixathome.org,1,1360023539,66000M,,534,99,161135,31,46990,54,1014,96,167062,25,214.9,9,16,,,,,+++++,+++,+++++,+++,+++++,+++,30150,65,+++++,+++,+++++,+++,16062us,531ms,7654ms,108ms,876ms,3342ms,62048us,36us,46us,169ms,34us,46us
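The -s flag sets the size of the bonnie++ file set in MB; 66000 is presumably there to be roughly double the 32 GB of RAM in this box, so the test isn’t just exercising the filesystem cache. If I were repeating this, something like the following would pin the run to a specific filesystem and user (the -d, -s, and -u flags are standard bonnie++ options; the directory and user here are just examples):

$ bonnie++ -d /mnt/dan -s 66000 -u dan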
Two dd at once
Why not?
# dd if=/dev/zero of=/mnt/testing4 bs=32k count=3000000
3000000+0 records in
3000000+0 records out
98304000000 bytes transferred in 657.925694 secs (149415049 bytes/sec)
That’s 142 MB/s.
And while that one was running, we were also running:
$ dd if=/dev/zero of=/mnt2/dan/testing4 bs=32k count=3000000
3000000+0 records in
3000000+0 records out
98304000000 bytes transferred in 750.384284 secs (131004876 bytes/sec)
Which is 124 MB/s.
NOTE: the second dd was started about 30-40 seconds after the first one.
So that’s a combined throughput of roughly 266 MB/s.
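To start both writes at exactly the same moment instead of staggering them by 30-40 seconds, the two dd commands could be backgrounded and waited on. A minimal sketch:

#!/bin/sh
# start one dd against each drive at the same time, then wait for both to finish
dd if=/dev/zero of=/mnt/testing4      bs=32k count=3000000 &
dd if=/dev/zero of=/mnt2/dan/testing4 bs=32k count=3000000 &
wait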
It seems your disk is faster in all spots (except the outside transfer rate).