Feb 11, 2010
 

For the file server, perhaps one of these three combinations:

  1. Seagate Barracuda XT ST32000641AS 2TB 7200 RPM 64MB Cache SATA 6.0Gb/s 4@$300 – $1200 8TB or $150/TB
  2. SAMSUNG F1 RAID Class HE103UJ 1TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5″ 4@$150 – $600 4TB or $150/TB
  3. Western Digital RE3 WD1002FBYS 1TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5″ 4@$160 – $640 4TB or $160/TB

Also in this system would be a gmirrored pair of HDDs for the base OS. I’d rather run the system off non-ZFS for now. I have two such drives at home (amongst others).
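Setting up that boot pair with gmirror is only a few commands. A rough sketch — the device names ada0/ada1 and the mirror name gm0 are assumptions, not what I’ll necessarily use:

```shell
# Label the two drives as a mirror named gm0 (destroys existing metadata!)
gmirror label -v -b round-robin gm0 /dev/ada0 /dev/ada1

# Load the mirror module at boot
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# /etc/fstab then points at the mirror device instead of a raw disk, e.g.:
# /dev/mirror/gm0s1a   /   ufs   rw   1   1
```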

I’m also considering adding a small SSD for L2ARC:

  1. OCZ Vertex Series OCZSSD2-1VTX30GXXX 2.5″ 30GB SATA II MLC $135
  2. Intel X25-V SSDSA2MP040G2R5 2.5″ 40GB SATA II MLC $130
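Attaching one of those SSDs as L2ARC is a one-liner once the pool exists. A sketch — the pool name tank and the SSD device ada4 are assumptions:

```shell
# Add the SSD to the pool as a cache (L2ARC) device
zpool add tank cache /dev/ada4

# It should now appear under the "cache" section
zpool status tank
```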

Perhaps this video card: http://www.newegg.com/Product/Product.aspx?Item=N82E16814129042

And perhaps I’ll get 12GB of RAM.

Also considering this HP box: HP ProLiant ML150

Other random products:

  1. LSI SAS3041E-R Storage controller (RAID)- Serial ATA-300 / SAS- 300 MBps
  2. SYBA SD-SA2PEX-2IR PCI Express SATA II Controller Card
  3. ASUS Model PCIE GEN2 SATA6G Expansion Card
  4. SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X133MHz SATA Controller Card

  One Response to “What HDD?”

  1. I did a very similar setup to what you are planning – for years our backup server was a combination of gmirrored drives, hard links for snapshots, and rsync instead of Bacula. The problem with gmirror is the rebuilds after a crash/power issue, plus the time to fsck a UFS partition – I found separating off each mirror as its own filesystem to be the simplest solution, rather than having a set of gmirrors with gstripe on top.

    I then moved the base system over to just a gmirror on UFS, with ZFS for the main storage – 8 x 500GB drives. This worked reasonably well under the 7.x series; rsync did occasionally bring the machine down, but with no fsck waiting time it wasn’t as much of a pain. Plus the storage pool of drives with ZFS is a lot nicer to deal with than the GEOM layering. We also hit the odd problem of ZFS snapshots not being available after the machine had been up for some time, needing a reboot in order to access them (usually the machine would then panic) – fortunately this was only once or twice a month at worst. There is also the added benefit of ZFS compression, which has allowed us to cram a lot more onto our aging disks than with UFS.
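    For reference, the compression mentioned above is just a per-dataset property. A sketch — the pool name tank is an assumption, and lzjb was the lightweight default algorithm at the time (gzip is the heavier alternative):

    ```shell
    # Enable lightweight compression on the whole pool
    zfs set compression=lzjb tank

    # See how much it is actually saving
    zfs get compressratio tank
    ```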

    Since moving to FreeBSD 8 we have not had any issues with rsync bringing the machine down, nor the snapshot issues. Recently we have deployed a similar setup (an IMAP/file/intranet server, not just a giant box for backup) but with a CF card to hold /boot and the rest on a ZFS mirrored pool with a hot spare – a 4GB slice of each drive has also been set up as gmirrored swap, so we should be able to remove any dead drives without any reboots. The flash memory card (/boot) is mounted read-only, so there is no need for an fsck should there be a power loss. I have yet to look at replacing rsync with zfs send/receive, as this should cut down on bandwidth for file moves and directory renames – plus I really like the look of the deduplication stuff on the horizon.
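    The send/receive replacement for rsync would look roughly like this. A sketch — the dataset names tank/data and backup/data, the snapshot names, and the host backuphost are all assumptions:

    ```shell
    # Take today's snapshot, then send only the delta since yesterday's
    zfs snapshot tank/data@today
    zfs send -i tank/data@yesterday tank/data@today | \
        ssh backuphost zfs receive backup/data
    ```

    Because an incremental send ships block-level changes, a renamed directory costs almost nothing, where rsync would retransfer the whole tree.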

    I must admit it does sound like I’m pushing ZFS – my initial thoughts on it were not great (a filesystem needing a few GB of RAM just to run happily didn’t sit well), but after a few years of using it I now plan to update all clients’ setups to the one described above. I’m keeping a close eye on DragonFly and HAMMER though, as the live mirroring and single-system-image stuff looks very cool – it would be nice to have a layer for dealing with the RAID stuff, like GEOM and zpools, and not rely on hardware RAID. HAMMER is currently missing compression, though, which is a big drawback even with prices of drives so low.