First NAS Box

  • Hello everyone, I'm in the process of building a low-cost NAS server with OMV. I have a bunch of extra equipment lying around collecting dust, so I figured I would build an open-source NAS to replace the Iomega IX-300d I have right now; the IX-300d will become an rsync backup NAS server for the new build. The list below is what I plan to build in the next few weeks, and I would like some input. It will be running RAID 10 on a hardware RAID config. (Not a big fan of software RAID on low-powered CPU systems due to the overhead it requires, plus I already have the hardware.)
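    For the rsync backup to the IX-300d, a minimal sketch of the mirroring step (the share path and backup target below are hypothetical placeholders, not taken from the actual build):

    ```shell
    #!/bin/sh
    # Hedged sketch: mirror a data share to a backup target with rsync.
    # -a preserves permissions/ownership/times; --delete makes the target
    # an exact mirror (files removed at the source are removed on the backup too).
    backup_share() {
        src="$1"; dest="$2"
        rsync -a --delete "$src/" "$dest/"
    }

    # On the real boxes this might look like (hypothetical names):
    #   backup_share /srv/dev-disk-by-label-data ix300d.local::backup
    ```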


    Low Cost NAS Build


    (Existing Hardware on Hand)


    Motherboard ($59) = ECS 1037U Based http://www.newegg.com/Product/…aspx?Item=N82E16813135350
    Memory ($35) =4GB DDR3 http://www.newegg.com/Product/…aspx?Item=N82E16820231394
    Raid Controller Card ($50) = Perc5i bought on Ebay.... http://www.ebay.com/itm/Dell-P…ain_0&hash=item3cd8bafe9d
    SAS to Sata Cables (2x $8 = $16) SAS to Sata Adapters http://www.newegg.com/Product/….aspx?Item=9SIA2UG1A47532
    Power Supply ($40) = Corsair 430 http://www.amazon.com/Corsair-…rds=430+watt+power+supply
    Case ($80) = Fractal Node 304 http://www.newegg.com/Product/…04-_-11-352-027-_-Product


    (New Hardware)


    OS SSD ($40) = 1 x 60GB http://www.newegg.com/Product/…aspx?Item=N82E16820226677
    Storage HDD (6 x $53 = $318) = 1TB WD Green Drive http://www.newegg.com/Product/…aspx?Item=N82E16822236070


    Total Investment = $638.00


    From what I've been reading in the forums, and as a fan of Debian, the above parts should work fine for my homemade NAS box based on OMV.


    Please let me know what you guys/gals think: should I replace any of the existing parts, or will they work fine? I plan on running ownCloud on this build so I can access all my files, music, and movies remotely.


    Thanks,
    Floos525

  • ...and can you explain why you want to buy 6x1TB drives? That seems pretty limited with just 3TB of storage, and running six drives also uses a lot of wattage.


    Also, any reason you want to go RAID 10? Do you want to run any VMs that need higher I/O throughput?
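    As a quick sanity check on the usable capacity, here are the standard formulas for the two common levels with six equal drives (a rough sketch, ignoring filesystem overhead):

    ```shell
    #!/bin/sh
    # Usable capacity for n drives of equal size (integer TB for simplicity):
    #   RAID 10 -> n/2 * size  (mirrored pairs, then striped)
    #   RAID 5  -> (n-1) * size  (one drive's worth of parity)
    n=6; size_tb=1
    raid10=$(( n / 2 * size_tb ))
    raid5=$(( (n - 1) * size_tb ))
    echo "RAID 10: ${raid10}TB usable, RAID 5: ${raid5}TB usable"
    ```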


    (Not a big fan of software RAID on low-powered CPU systems due to the overhead it requires, plus I already have the hardware.)


    I don't use software RAID either, but in my experience nobody here has ever complained that mdadm chews up resources at all.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • I would not go RAID 0


    Will be running in Raid 10


    Greetings
    David


  • Update:


    I have replaced the Perc5i with a Perc6i.
    Ordered the WD Red drives.


    System is up and running... everything went smoothly. Very happy with the system so far.


    Thanks for all the comments and helping me make the right decision on the Hard Drives.


    FYI, this Fractal case is super quiet...

    • Official Post

    I would not buy a Green drive. Get one of their Red NAS drives instead. Everything else should be fine.


    I use a couple of Green drives... they are fine, so long as you don't run them in a RAID. Run them in a RAID, as I painfully discovered, and you will nuke them almost as fast as installing OMV to a flash drive.

  • I use a couple of Green drives... they are fine, so long as you don't run them in a RAID. Run them in a RAID, as I painfully discovered, and you will nuke them almost as fast as installing OMV to a flash drive.


    I run my OMV off a USB flash drive and have had 4 x 3TB WD Green drives in a RAID 5 array since January 2012.
    I did run wdidle on the Greens to change the spin-down time.


    The only issue I've had is from power-off during use; SMART gives them all a clean bill of health (touch wood)...

    HP N40L microserver | OMV 1.19 (Kralizec) | OMV extras 1.34 | kernel 3.16.0-0.bpo.4-amd64

    • Official Post

    Well, you are luckier than me. The USB install issues are well known. The Green drives... I nuked three in less than a year, before I just did away with the RAID.


    No problems since.


    Your experience is definitely not the norm... both issues have been hashed over here several times.

  • Well, you are luckier than me. The USB install issues are well known. The Green drives... I nuked three in less than a year, before I just did away with the RAID.


    No problems since.


    Your experience is definitely not the norm... both issues have been hashed over here several times.


    I think I've been lucky with my flash drives.
    As for the Greens, had you run WDIDLE on the drives before setting them up in a RAID array? Otherwise they spin down every 8 seconds of inactivity!


    Edit: if anyone is interested in the greens spin down issues there is an outline here,


    Edited once, last by the_otherOne ()

    • Official Post

    I think I've been lucky with my flash drives.
    As for the Greens, had you run WDIDLE on the drives before setting them up in a RAID array? Otherwise they spin down every 8 seconds of inactivity!


    Edit: if anyone is interested in the greens spin down issues there is an outline here,


    Never heard of the spin-down, so no, I didn't. That may well have been my issue. Regardless, they run fine as single drives.

  • Ah well, the Greens are designed to aggressively park their heads after 8 seconds of idle; the idea is that it saves power.
    Unfortunately, in a RAID/NAS configuration that can increase the load_cycle_count such that within a year you can easily go beyond the 250,000 rating of the drive.
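    A back-of-envelope calculation shows just how fast an 8-second park timer can eat through that 250,000 rating in the worst case:

    ```shell
    #!/bin/sh
    # Worst case: the drive parks once every 8 idle seconds, around the clock.
    interval=8; rating=250000
    secs_to_rating=$(( rating * interval ))        # 2,000,000 s of pure idling
    days_to_rating=$(( secs_to_rating / 86400 ))   # ~23 days to hit the rating
    cycles_per_year=$(( 365 * 86400 / interval ))  # ~3.9 million potential parks/year
    echo "${days_to_rating} days to the rating; ${cycles_per_year} parks/year worst case"
    ```

    In practice a busy array never idles that consistently, but the numbers explain how people hit six-figure counts within a year.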


    wdidle3.exe is a firmware utility that lets you set the interval the drive waits before parking its heads.
    I ran it on my drives before setting them up as a RAID, after reading about the Green drives' lack of suitability in NAS/server configurations.


    You can buy Reds (surely WD have a valid reason for charging 15% more for "NAS" Red drives beyond a simple firmware setting), but my Greens have performed just fine in a NAS since I set them not to park with wdidle3.


    There is an in depth breakdown on hacking the WD greens here.


    As for running off a USB flash drive, I don't recommend it if it's avoidable; as others have said, it's not expensive to add a cheap low-capacity HDD/SSD to most systems. In my case (the HP N40L) it would take a bit of modification/playing to set up. I find it easier to run the flash memory plugin and just Clonezilla (drive to drive) my USB flash drive to a backup drive every few months. Worst case, if I fry one USB stick I roll back to another...
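    As an alternative to a drive-to-drive clone, the stick can also be imaged to a plain file with dd (a sketch only; the device and image paths are placeholders, and you must double-check the device name before running dd against real hardware):

    ```shell
    #!/bin/sh
    # Hedged sketch: image a USB stick (or any block device/file) to an image file.
    # conv=fsync flushes writes before dd exits; bs=4M keeps it reasonably fast.
    image_stick() {
        dev="$1"; img="$2"
        dd if="$dev" of="$img" bs=4M conv=fsync
    }

    # e.g. (placeholder device):  image_stick /dev/sdX /srv/backups/omv-stick.img
    # restore later with:        dd if=/srv/backups/omv-stick.img of=/dev/sdX bs=4M
    ```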


  • I use a couple of Green drives... they are fine, so long as you don't run them in a RAID. Run them in a RAID, as I painfully discovered, and you will nuke them almost as fast as installing OMV to a flash drive.


    I have also used a couple of Greens in a RAID 5 for years (over 5 years already), at first with a QNAP, then switched to OMV. For a few months I've been getting a mail about one bad sector, but no problems at all. Actually I'm just waiting for one drive to fail to swap them for 4TB Reds. :D

  • Waiting to fail is like giving up the data for good.


    Replace it now or you may lose the data completely ;)


    Greetings
    David


  • I have also used a couple of Greens in a RAID 5 for years (over 5 years already), at first with a QNAP, then switched to OMV. For a few months I've been getting a mail about one bad sector, but no problems at all. Actually I'm just waiting for one drive to fail to swap them for 4TB Reds. :D


    Did you change the intellipark/spin down time with wdidle3?
    How does your load_cycle_count look in smart?


  • Waiting to fail is like giving up the data for good.


    My second name is "Risky" ;)

    Replace it now or you may lose the data completely


    I do not see my RAID 5 as a backup. I have my backup on a USB drive connected to my NAS :)

    Did you change the intellipark/spin down time with wdidle3?

    Nope.


    How does your load_cycle_count look in smart?

    Will post that later!

  • How does your load_cycle_count look in smart?


    sda - 193 Load_Cycle_Count 0x0032 156 156 000 Old_age Always - 133688
    sdb - 193 Load_Cycle_Count 0x0032 156 156 000 Old_age Always - 134149
    sdc - 193 Load_Cycle_Count 0x0032 156 156 000 Old_age Always - 133872
    sdd - 193 Load_Cycle_Count 0x0032 158 158 000 Old_age Always - 128313


  • sda - 193 Load_Cycle_Count 0x0032 156 156 000 Old_age Always - 133688
    sdb - 193 Load_Cycle_Count 0x0032 156 156 000 Old_age Always - 134149
    sdc - 193 Load_Cycle_Count 0x0032 156 156 000 Old_age Always - 133872
    sdd - 193 Load_Cycle_Count 0x0032 158 158 000 Old_age Always - 128313


    I don't know how long your disks have been running, but that load cycle count (for a WD20EZRX) is getting on for halfway to its rated value.
    That said, I've seen forum posts from people who hit 800K LCC in one year, so maybe it's not a problem for you.
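    For anyone curious, the posted raw values work out like this against the 250,000 rating mentioned earlier in the thread (a rough awk sketch; the rating comes from the discussion above, not from the SMART output itself):

    ```shell
    #!/bin/sh
    # Express each posted Load_Cycle_Count raw value as a percentage of the
    # 250,000 rating quoted earlier for these Green drives.
    for count in 133688 134149 133872 128313; do
        awk -v c="$count" 'BEGIN { printf "%d -> %.0f%% of rating\n", c, c / 250000 * 100 }'
    done
    ```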


    • Official Post

    My RAID 5 array of eight Samsung F4 2TB drives at home has LCC ranging from 34 to 128 :) My RAID 10 array of four drives at work has an LCC of 16.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
