16 hard drives, 1 case

etc6849

Senior Member
I thought I'd share a picture of the custom external storage array I built.  It uses four external SFF-8088 SAS connectors connecting to an LSI 9201-16e HBA controller.  I have been gradually collecting enough 4TB drives over the last few years to build this thing.
 
The case was actually sold with room for only 8 internal drives and three 5.25" bays, but by borrowing some parts off a duplicate case (I plan to order replacements when they are back in stock), I turned it into a 16-bay case.
 
Parts:
The all-important ATX power controller (the picture at the link is not correct, but the part is indeed a Supermicro CSE-PTJBOD-CB1).  This part eliminates the need for a motherboard and costs only $26.
http://www.provantage.com/supermicro-cse-ptjbod-cb1~7SUP9023.htm
 
LSI 9201-16e PCIe x8 HBA controller. I bought a like-new one on eBay for $180 with shipping; that's only $11.25 per port!
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9201-16e.aspx
 
Case (usually goes on sale for $59.99 with free shipping; set a price alert at camelegg.com):
http://www.newegg.com/Product/Product.aspx?Item=N82E16811147154
 
16x 4TB 7200rpm (red label) HGST drives (usually go for $189 each with free shipping):
http://www.bhphotovideo.com/c/product/835056-REG/Hitachi_0S03355_4TB_SATA3_3_5_INTERNL.html
 
4x SFF-8088 external cables (expensive cables at $27.84 each):
http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025407&p_id=8184&seq=1&format=2
 
4x SFF-8087 internal cables ($9.23 each):
http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025406&p_id=8186&seq=1&format=2
 
2x SFF-8088 to SFF-8087 PCI adapter plates (AD2EMSPCI version, but they've raised the price to $44, so shop around):
http://www.addonics.com/products/dual_mini_sas_converter.php
 

Attachments

  • external_storage.jpg (75.9 KB)
  • external_storage2.jpg (68.5 KB)
Very nice!!
 
Yup, I'm using the IBM M1015 (LSI 9220-8i) here, with the firmware updated to make it an LSI card.  I am, though, using it with an Asus mITX E35M1-I in a tiny 8-bay case.  It's a tight fit, but the drives are hot-swap.  Here the case cost more than anything else, at around $180.  I like the footprint of those SFF cables; it makes it easy to connect the drives.
 
Curious if  you will be using MS or Linux (or BSD) for your primary OS?
 
Thanks for the compliments!  The IBM M1015 appears to be the most popular choice, hands down.  I would have used two of those, but my motherboard only has two PCIe slots at x8 or above, and a video card is in one of them :(  I also happened to get a great deal on the LSI 9201-16e (the seller had a bunch, but sold all of them very fast).
 
The primary PC is an HTPC running Windows 7 Premium.  The reason I'm using Windows 7 is for Media Center, but it also runs Premise to control my home.  I also use FlexRAID in snapshot mode on the same Windows 7 machine.
 
FlexRAID does drive spanning and snapshot RAID (although a real-time RAID mode is included too).  Transfer rates are about 160 MB/s in snapshot mode, which is plenty fast for watching recordings, movies, etc.  It also has the advantage that if you lose more drives than the number of parity drives (I have two parity drives right now), you can still access all files on any of the good drives (using any PC).
 
There's absolutely no actual data striping, which I decided wasn't worth the risk (for me anyway); I just read too many horror stories about data loss with hardware RAID.  You can easily remove a drive from the FlexRAID array by logging into FlexRAID's web server, then remount it in Windows and see all of your original files!  As part of the initial setup, you do need to generate the parity data.  However, it only took 11 hours to generate the parity for about 21TB of data (I've never used an enterprise solution, but that seems very fast to me)!  After the initial setup, FlexRAID manages all file changes and has an Update mode that only generates parity for those changes.  A scheduler is included to automate this management.
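For anyone curious how snapshot parity can work without striping, here's a minimal sketch of the idea in Python using simple XOR (single) parity.  FlexRAID's actual engine and its multi-parity math are proprietary and more involved, so treat this as an illustration of the concept with made-up data, not their implementation:

# Minimal sketch of single-drive XOR parity, the idea behind snapshot RAID.
# Each bytes object stands in for one drive's contents (hypothetical data).
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

drives = [b"drive0-data!", b"drive1-data!", b"drive2-data!"]
parity = xor_blocks(drives)  # the "snapshot" parity pass

# Simulate losing drive 1: XOR the survivors with parity to rebuild it.
rebuilt = xor_blocks([drives[0], drives[2], parity])
assert rebuilt == drives[1]

Because nothing is striped, each data drive still holds a plain filesystem, which is why a pulled drive mounts normally in Windows with all its files intact.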
 
If anyone is thinking about buying FlexRAID, it is well worth the $59 price.  However, I'd wait for the NZFS (not ZFS) version to come out, as it looks even more interesting:
http://www.openegg.org/2012/01/07/announcing-nzfs-not-zfs/
 
Thanks, etc6849, for the information about FlexRAID.  I have yet to decide what OS to run on my newer NAS box.
 
Still playing with this box, as I have other NAS boxes online.  I did lose the server PSU on it a few months back and replaced it.  It only has two larger fans and two power supply fans; quiet.
 
Yeah, the deals on the IBM M1015 are around $75.  That said, I don't think anyone has left the IBM firmware on them; it seems like they are all going to the LSI firmware.  I could only fit one PCIe card in the case, and it was a bit too tight.  I did have issues, though, with motherboards seeing the LSI card, relating to the firmware upgrade.  In fact, I'm not even sure the Asus motherboard will work for reflashing it; it does see the card.  I flashed the M1015 on a Foxconn motherboard.
 
I also had trouble using the 9201-16e LSI card with my Z77 motherboard.  The LSI card is not recognized at boot unless it is in the top PCIe x16 slot?
 
PS:  FlexRAID has a Linux version too, but I haven't tried it.
 
It's a great thing to be able to use an LSI card for a home NAS.  These cards are built really well.
 
Yeah, here the Asus has only one PCIe slot.
 
That said, the original Asus BIOS didn't see the LSI card unless I intervened during boot.
 
I am using a drive connected to one of the SATA ports on the motherboard to boot. 
 
Asus updated the EFI BIOS and it now sees the LSI card.
 
Thanks for the details.  I think your issue sounds similar to my top-slot situation.  However, if you can get into the LSI card's BIOS on the other board, try disabling its boot setting if you haven't already (assuming you do not boot from any of its drives).  This may fix things for you.  I know my motherboard changes the boot order every time there's a hardware change if I enable the LSI card's boot option.
 
The top-slot thing is annoying because Intel's BIOS really wants the graphics card in the first slot, else it doesn't show boot screens/BIOS (seriously, wtf Intel)!  This means if I ever want to see the boot screens, I have to physically remove the video card  :angry2:  I was thinking about one of the newer overclocking boards with multiple electrical x16 (i.e. full x16, not shared) PCIe 3.0 slots.  I'm definitely going to order the motherboard from Amazon because of the LSI card issues (so I can return it if I have problems).  Sadly, there's no Fry's or Micro Center where I'm at now.
 
If I leave the external graphics card out, I'm stuck with the onboard Intel HD 4000 graphics, which would be fine, but Intel doesn't want to fix an issue with the card not saving the full-RGB setting.  This means dark scenes look grayed out.  The Nvidia GTX 460 card looks great (black looks black) and keeps its settings across reboots.  In short, if you use HDMI, DO NOT use Intel graphics.  However, if you just use DVI, the picture is beautiful and all the HDMI issues go away.  I wouldn't buy another Intel board either, but that's just my opinion.
 
Thanks, etc6849.  Until I updated the EFI BIOS on the Asus, I could only see a quick flash of the LSI boot configuration screen.
 
I can boot now and get to the LSI configuration menu on the Asus motherboard.  It's slow to boot, though.
 
I cannot upgrade the firmware, though, unless I do it on the Foxconn Intel motherboard.  But I am fine with the LSI firmware now, so I will not be changing it for a while, if at all.
 
Here is a picture of the LSI card on the motherboard.  There is only one slot.  It's really tight, though.
 
 
 

Attachments

  • NAS-3.jpg (121.9 KB)
  • Pic-2a.jpg (85.2 KB)
Nice-looking case.  I disabled UEFI on my motherboard with the LSI card in the second slot.  It still doesn't recognize the card; the BIOS shows that no card is installed?  It still recognizes the card if it's in the top PCIe slot, regardless of UEFI settings.
 
For fun, I even tried disabling all onboard SATA controllers, the LSI card still isn't recognized unless it's in the top slot.  
 
I finally had to remove the GTX 460 card.  I tried to watch a movie last night with the GTX 460 in the second slot, and the LSI's performance suffered greatly?!?  I tried copying a 40GB ISO and the speed only reached 40MB/s!  I removed the GTX 460 from the second slot and the LSI is back to performing at 180MB/s?!?  I guess I'll be using Intel HD 4000 graphics for a while.
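For what it's worth, the raw lane math suggests this isn't a simple x8-to-x4 downgrade; even a single PCIe 2.0 lane should comfortably beat 40MB/s.  A quick back-of-envelope in Python (my own numbers, not anything measured on your board):

# Back-of-envelope PCIe 2.0 throughput: 5 GT/s per lane with 8b/10b
# line coding leaves roughly 500 MB/s usable per lane (before protocol
# overhead), so even x1 far exceeds the 40 MB/s observed above.
GT_PER_SEC = 5e9      # PCIe 2.0 transfers per second, per lane
ENCODING = 8 / 10     # 8b/10b coding: 10 bits on the wire per 8 data bits

per_lane_mb = GT_PER_SEC * ENCODING / 8 / 1e6   # bits -> bytes -> MB/s
for lanes in (1, 4, 8):
    print(f"x{lanes}: ~{per_lane_mb * lanes:.0f} MB/s theoretical")

So the slowdown with both cards installed looks more like a bad link negotiation or slot-sharing quirk than raw lane starvation.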
 
UEFI is a good tip, though, as I may try an ASUS or Gigabyte board and ditch the Intel board.  Thanks!

PS:  Nice-looking case.  What model number is it, and do they have a 16-drive version?
 
It's been a couple of years since I bought the case from a vendor in China as an eval unit.  That said, I haven't seen any for sale here.  I will look for the vendor and email address.
 
Are you looking for an mITX board?
 
I have had good luck with BCM motherboards lately; I've purchased a few of them for this or that.
 
etc6849 said:
The top-slot thing is annoying because Intel's BIOS really wants the graphics card in the first slot, else it doesn't show boot screens/BIOS (seriously, wtf Intel)!  This means if I ever want to see the boot screens, I have to physically remove the video card  :angry2:  I was thinking about one of the newer overclocking boards with multiple electrical x16 (i.e. full x16, not shared) PCIe 3.0 slots.  I'm definitely going to order the motherboard from Amazon because of the LSI card issues (so I can return it if I have problems).  Sadly, there's no Fry's or Micro Center where I'm at now.
 
I would suggest using Supermicro or Intel server motherboards. These won't have the first-slot graphics issue, accept ECC memory, and leave off extras like audio. Perfect for a storage box that will sit in the closet. I'm using the older LSI 1068-based card, but it has been rock solid for years. I recently reinstalled with CentOS 6 and ZFS on Linux. We shall see how reliable it is compared to my previous install of CentOS 5 with mdadm and ext4.
 
I agree.  It seems LSI cards are more enterprise-oriented, and server motherboards would always be qualified to work with them (I'm not in IT, but this makes sense).  The reason I'm using an Intel Extreme motherboard is that the PC is in my home theater and serves other functions.  The Extreme series board sounded like a good idea since it had HDMI 1.4 integrated, but with the many complaints online about Intel graphics, you can tell it isn't for picky folks like me.
 
In retrospect, I should have ordered this case and a slightly better motherboard, avoided the expensive external cables the LSI card requires, and just bought another Supermicro AOC-SAS2LP-MV8 controller card.
http://www.amazon.com/gp/product/B007PW1GXG/ref=ox_ya_os_product
 
PS: Pete, this is a great, affordable 8-port card (600 MB/s per port) that I was using before; it never gave me any issues (it worked fine in any slot, along with my video card, etc.).  It uses a very common Marvell 94xx chip, so it should work with almost any system that has an x8 PCIe 2.0 slot.
http://www.supermicro.com/products/accessories/addon/aoc-sas2lp-mv8.cfm
 
Good choice on the HGST (Hitachi) drives. I have four 1TB Hitachi drives that have been powered on for 4.5 years and are still running great. Western Digital bought HGST, which in turn sold the 3.5" desktop line to Toshiba. From what I have read, the Toshibas are identical, but you might pick up a spare HGST while they are still in stock at some places.
 
I did a quick read on the FlexRAID software. I didn't see the details of your setup, but if you aren't using at least two drives for parity, I would highly recommend it. I would also enable weekly or twice-monthly scrubs of the array. I've scrubbed the drives mentioned above weekly with no ill effects. A scrub will help detect a failing drive before you end up with multiple unnoticed failing drives.
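In case "scrub" is unfamiliar: the job is just to re-read every block and check that data and parity still agree, so latent errors surface while you can still rebuild.  A rough Python sketch of the idea, assuming equal-size data files and the simple XOR parity from the earlier example (file paths are hypothetical; this is not FlexRAID's actual verify routine):

# Rough sketch of a parity scrub: re-read everything, recompute parity,
# and flag any block where the stored parity no longer matches.
from functools import reduce

BLOCK = 1 << 20  # read in 1 MiB chunks

def scrub(data_paths, parity_path):
    files = [open(p, "rb") for p in list(data_paths) + [parity_path]]
    mismatches, offset = 0, 0
    try:
        while True:
            chunks = [f.read(BLOCK) for f in files]
            if not chunks[0]:
                break  # reached the end of the array
            *data, stored = chunks
            expected = bytes(reduce(lambda a, b: a ^ b, col)
                             for col in zip(*data))
            if expected != stored:
                mismatches += 1
                print(f"parity mismatch at offset {offset}")
            offset += BLOCK
    finally:
        for f in files:
            f.close()
    return mismatches

# Hypothetical layout: two data "drives" plus one parity file.
# scrub(["drive0.img", "drive1.img"], "parity.img")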
 
Have you considered separating the storage and home theater roles into two separate systems? This might cost a few hundred more, but with 64TB of raw storage I would want an isolated system. Not to mention ECC memory, as a scrub or rebuild of the array is going to process TBs of data. Non-ECC memory does not detect bit flips, so you have a chance of corrupting your data should one occur, especially while calculating the parity. You should read the horror stories on the Linux RAID mailing list.
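To make the bit-flip risk concrete, here's a toy Python illustration (values made up): if one bit flips in RAM during the parity pass, the bad parity is written out silently, and a later rebuild that depends on it reconstructs corrupted data.

# Toy illustration: a single bit flip during the parity computation
# silently poisons the parity, and a rebuild later reproduces the damage.
good = [bytes([1, 2, 3]), bytes([4, 5, 6])]   # two "drives" (made-up data)

def parity(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# One bit of drive 1's data flips in non-ECC RAM mid-computation:
flipped = bytes([4, 5, 6 ^ 0x01])
bad_parity = parity(good[0], flipped)   # written to disk, no error raised

# Drive 0 fails later; rebuilding it from drive 1 plus the bad parity:
rebuilt = parity(good[1], bad_parity)
print(rebuilt == good[0])   # False -- silent corruption, detected by nothing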
 
Thanks for the info about doing a scrub.  I haven't played too much with the FlexRAID software.  I use it in cruise control (dummy mode), so a lot of the advanced options are hidden.  I also have the array update each night with any file changes.  There are a variety of email alerts you can configure too; I have it set to email me when the update completes and if a drive fails.  I'm not sure how it knows when a drive is failing, though.  I guess it uses the drive's SMART data?
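I have no idea whether FlexRAID actually polls SMART, but rolling your own check is easy with smartmontools' smartctl.  A minimal Python sketch, assuming smartctl is installed and using placeholder device paths:

# Minimal SMART health poll via smartmontools' `smartctl -H`, which prints
# the drive's overall health self-assessment. Device names are hypothetical.
import subprocess

def smart_healthy(device):
    """Return True if the drive reports an overall PASSED health status."""
    result = subprocess.run(["smartctl", "-H", device],
                            capture_output=True, text=True)
    return "PASSED" in result.stdout

for dev in ("/dev/sda", "/dev/sdb"):   # placeholder device paths
    print(dev, "OK" if smart_healthy(dev) else "CHECK THIS DRIVE")

Wire something like that into a scheduler plus email and you'd have roughly the alerting FlexRAID advertises.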
 
Separating the two doesn't make sense to me.  Why put a network between your array and your HTPC's Blu-ray drive?  It doesn't seem as good a solution for what I'm using it for...  However, if I don't separate them, I will never use ECC memory, as my processor doesn't support it: http://ark.intel.com/products/65523  I'm still not sure if I need ECC memory yet; I'll have to research it.  I can see your point about error rates when FlexRAID calculates the parity data.  This kind of scares me...
 
Rest assured, I use two parity drives.  It would take too long to re-rip 2 x 4TB worth of Blu-rays (even with four drives going).
 
There are some old posts (2009) about FlexRAID, non-ECC memory, and the creation of the parity data.
 
The creator says to run a verify (which is probably the same thing as what you're calling a scrub).  He implies that memory errors would never affect the writing of data to the drive, just the creation of the parity data and any rebuilds that use that data.
 
However, I don't know if this is still current, so I've asked the question again here:  http://forum.flexraid.com/index.php/topic,2169.0.html
 
PS: rsw, good question.  Thanks for bringing this up!
 