100TB build - Help!

  • Hi guys!


    I have been using OMV 2.2 for many builds (over 10), but you can still call me a beginner.
    Now I have a new project for a media house; they require:


    1. 100 TB, growing up to 250 TB as required.
    2. 60-100 users simultaneously moving video files (VFX) to work on in Nuke Studio.
    3. A minimum transfer speed of 200 MB/s with no bottlenecks (MAIN CONFUSION).
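    To show why point 3 worries me, here is a quick back-of-the-envelope sketch (assuming a single 10GbE NIC at raw line rate and ignoring protocol overhead, which is a simplification):

```python
nic_gbps = 10                        # one 10GbE link
nic_mb_s = nic_gbps * 1000 / 8       # ~1250 MB/s of raw line rate
per_user_mb_s = 200                  # required minimum per user

# How many users can get the full 200 MB/s at the same time?
max_full_speed_users = int(nic_mb_s // per_user_mb_s)
print(max_full_speed_users)          # → 6, far short of 60-100 simultaneous users
```

    So a single 10GbE link can only feed about 6 users at 200 MB/s each; that is where I expect the real bottleneck to be, not the disks.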


    I was thinking of the following specification:



    Xeon E3-1220 v3 quad core - 3.1 GHz
    8 GB ECC RAM
    LSI MegaRAID 9240-8i -- (for RAID 5)
    Norco 3U 16-bay chassis
    12 x 8 TB Seagate Enterprise HDD or WD SE -- Not sure yet, have to check with the vendor.
    10Gtek Intel 82599ES NIC X520-DA2 -- Please suggest other options for the best performance.



    Any suggestions? Where do I need to make changes?

  • I'm no expert, but from what I've been reading everywhere, that setup and those drives are dangerous as hell.


    From your specs it looks like you want to run RAID 5 across 12 x 8 TB drives? That is asking for massive data loss.
    Yes, a 12-drive RAID 5 gives you around 92% usable space, which is awesome, but it also gives you a lot of failure points. I would use AT LEAST RAID 6 on that setup.
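    Just to put numbers on the parity trade-off (a quick sketch, assuming whole-drive parity overhead):

```python
drives, size_tb = 12, 8
raw_tb = drives * size_tb

raid5_usable = (drives - 1) * size_tb    # one drive's worth of parity, survives 1 failure
raid6_usable = (drives - 2) * size_tb    # two drives' worth of parity, survives 2 failures

print(f"RAID 5: {raid5_usable} TB usable ({raid5_usable / raw_tb:.0%}), survives 1 failure")
print(f"RAID 6: {raid6_usable} TB usable ({raid6_usable / raw_tb:.0%}), survives 2 failures")
```

    So RAID 6 only costs you 8 TB (92% down to 83% usable) in exchange for surviving a second failure during a rebuild.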


    On top of the fact that RAID 5 has been considered outdated for the last 3 to 4 years: if a drive does fail, the risk of hitting an unrecoverable read error (URE) on one of the remaining drives during the rebuild is fairly high with this many drives at these capacities.


    Just a small article about UREs:
    http://www.raidtips.com/raid5-ure.aspx


    And a good blog post about it:
    https://standalone-sysadmin.co…e-b06d9b01ddb3#.2h4gm3j9q


    So looking at the drives you were considering -- Seagate Enterprise NAS, Seagate Enterprise Helium, WD Gold -- they all spec a max URE rate of 1 sector per 10^15 bits read. That works out to roughly one unrecoverable error per 125 TB read from a single drive, worst case.
    But if one of your 12 drives fails, rebuilding means reading all 11 remaining drives end to end, about 88 TB (11 x 8 TB), in one go. With a URE expected every ~125 TB, the odds of hitting one before the rebuild finishes are roughly a coin flip, and on RAID 5 a URE during a rebuild can take out the whole array.
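    Putting rough numbers on that rebuild risk (a back-of-the-envelope sketch, assuming the 1-in-10^15-bits spec sheet figure and independent bit errors, which is a simplification):

```python
import math

URE_RATE = 1e-15     # spec: ~1 unrecoverable read error per 10^15 bits read
DRIVE_TB = 8         # capacity per drive
SURVIVORS = 11       # drives that must be read in full to rebuild a 12-drive RAID 5

# Total bits read during the rebuild
bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8

# Probability of at least one URE before the rebuild completes
p_fail = 1 - math.exp(-URE_RATE * bits_read)
print(f"{p_fail:.0%}")   # → 51%, roughly a coin flip
```

    That is the worst-case spec-sheet number, and real drives often do better, but it is not a risk I would take with 100 TB of client media.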


    Again, I'm just a home enthusiast, I'm no expert on this.
    On top of this, something ryecoaaron said to me made a lot of sense given all the risks of RAID 5 (bitrot, UREs, multiple drive failures, etc.): RAID 5 is not backup. So consider having a cold-storage backup, maybe something with those 8 TB drives in RAID 1 or 10.


    I would talk with that media house about whether they really need that much hot storage. I would build a fast hot storage, something like 30 TB or less, and then keep a big cold storage behind it as backup, maybe those 100 TB.


    Consider looking into LinusTechTips' setup: they use 10GbE networking and multiple servers, where their hot storage has been around 20 TB of SSDs on one server, with 100 TB of cold storage on Seagate drives in a different server.
    Currently they are using, I think, 24 Intel 750 1.2 TB NVMe drives (LOL, so fast and expensive) for their hot storage, and he is setting up a 1 PB cold-storage array using FreeNAS and GlusterFS.
    And they do off-site backup of some of that data.


    So yeah, reconsider the needs and risks very carefully with that amount of storage and that many people using it.


    The big question to ask is really: how important is your data, and how much of it are you willing to lose?
    That way you can better decide what to build.


    Hope it helps.


  • Thank you for your support. I will definitely look into the points you highlighted and work on them.


    Will get back with new changes to the setup.


    I am very excited & nervous as it's my first Big Setup.


    Sent from my XT1060 Dev Ed

  • Please read about ZFS and its features (deduplication, snapshots), and test them in your NAS if possible before going to production.


    PS: Installing the ZFS plugin and creating a ZFS pool in RAIDZ2 is really easy, and you can test speed and availability (test what happens if one disk fails and you replace it with a new one).
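    As a sketch of that test (the pool name "tank" and device names sdb-sdh are hypothetical placeholders; run this against scratch disks as root, never against production data):

```shell
# Create a 6-disk RAIDZ2 pool: any 2 disks can fail without data loss.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Simulate a failed disk, then replace it with a spare (sdh):
zpool offline tank /dev/sdd
zpool replace tank /dev/sdd /dev/sdh

# Watch the resilver progress and check pool health:
zpool status tank
```

    Timing how long the resilver takes on full disks will also give you a realistic feel for your rebuild window.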


    Finding info about ZFS is easy, but this is a good starting point: https://dl.dropboxusercontent.com/u/57989017/9.10/FreeNAS Guide 9.10.pptx.zip


    Despite the FreeNAS references, ZFS works very well on OMV.
