Build a Big Capacity NAS (24TB)

  • Hi,


    I currently need to store a large number of files.
    My need is estimated at more than 24TB of usable space.
    Four 8TB hard drives in RAID 5 should do the trick!


    I already have a NAS running OMV, but it is built around an old mini-ITX motherboard with an old BIOS,
    so it cannot handle hard drives larger than 2.2TB.


    I am therefore obliged to build a second machine.


    After some research I put together a configuration, and I would like your opinion on it.
    What matters to me is storage capacity, not simultaneous access.


    Memory: Crucial 4GB DDR4 x 2, 8GB in total
    Motherboard: ASUS PRIME A320M-K (µATX)
    Case: Fractal Design Node 804 (capacity: 8x 3.5" HDD)
    Processor: AMD Athlon 200GE
    HDD: Seagate Barracuda 8TB x 4, 32TB raw capacity


    OMV will be installed on a USB key.


    The motherboard has four SATA ports, and if I want to expand later it will suffice to add a 4-port SATA controller card.
    The case can hold eight 3.5" HDDs.
    I chose the processor for its low power consumption: its TDP is 35 watts.


    My questions:
    Do you think these choices are suitable for my use case?
    If not, what would you advise?
    I am also wondering about the performance of this CPU. Will it be powerful enough to saturate Gigabit Ethernet?


    Thank you for your help.


    PixCD
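On the Gigabit question, a quick back-of-envelope helps: the wire itself tops out at 125 MB/s, so the CPU only has to push roughly 110 MB/s of file-sharing traffic to saturate the link. A minimal sketch (the 10% protocol-overhead figure is an assumption, not a measurement):

```python
# Theoretical vs. practical throughput of a Gigabit Ethernet link.
line_rate_MBps = 1_000_000_000 / 8 / 1_000_000    # 125.0 MB/s raw
overhead = 0.10                                   # assumed TCP/IP + SMB overhead
practical_MBps = line_rate_MBps * (1 - overhead)  # ~112.5 MB/s realistic ceiling
print(f"raw: {line_rate_MBps} MB/s, practical: ~{practical_MBps} MB/s")
```

Even a modest modern dual-core is normally enough for plain SMB at that rate; heavy extras (encryption, parity checks, media transcoding) are a different story.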

  • Four 8TB hard drives in RAID 5 should do the trick!

    Which kind of RAID do you intend to use: mdadm, hardware RAID, etc.? The problem with mdadm RAID on disk drives of that size is that a rebuild takes ages. If availability is not a big concern for you, you should look for another solution. For private use, SnapRAID/MergerFS may be the better option. If you do go with RAID, you should also consider ZFS.
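To put a rough number on "takes ages", here is a sketch of the rebuild time for one 8 TB member disk. The 100 MB/s sustained rate is an assumed figure; real-world rebuilds on an array that is in use can be far slower:

```python
# Rough mdadm rebuild-time estimate for one 8 TB member disk.
disk_bytes = 8 * 10**12   # 8 TB (decimal)
rate_Bps = 100 * 10**6    # assumed 100 MB/s sustained rebuild rate
hours = disk_bytes / rate_Bps / 3600
print(f"~{hours:.1f} hours at {rate_Bps // 10**6} MB/s")
```

And that is the best case with the array idle; under normal family use, a couple of days is entirely plausible.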


    Just my two cents.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Thank you for your reply, cabrio_leo. This NAS will be used in a family setting, so data availability is not a priority. In case of a disk failure, I can afford to wait for the data to be rebuilt, even if it takes two days! ;) I just hope this processor will be powerful enough...

  • Yes, I will use 20-22 TB.

    What is your intended backup solution? You are aware that 'RAID is not backup'? 'Would you go skydiving without a parachute?' (Quote @geaves) :D


  • I know that RAID is not a backup solution. :)
    The backup media already exist.


    My current system is composed of several disks of different sizes, and it is becoming difficult to store and back up files properly.


    The purpose of this NAS is precisely to harmonize my setup (4 HDDs for data installed in the NAS and 4 HDDs for backup installed in a separate machine).


    To get there I will have to wait and juggle my current hard drives: I have the necessary storage, but it involves a large number of disks and a lot of shuffling!

  • My suggestion is not to install OMV on a USB key (XigmaNAS has this option, and it is possible in OMV, but it is not recommended if you want to be safe at home).


    My other suggestion is to consider ZFS with RAIDZ1 if you do not want to re-copy all the data when a disk fails. On the other hand, if your NAS is only a backup, then a plain ZFS pool is enough: you would need to re-copy all the data if one disk fails, but you gain the ability to grow your ZFS pool one disk at a time.

  • My other suggestion is to consider ZFS with RAIDZ1 if you do not want to re-copy all the data when a disk fails. On the other hand, if your NAS is only a backup, then a plain ZFS pool is enough: you would need to re-copy all the data if one disk fails, but you gain the ability to grow your ZFS pool one disk at a time.

    Your post was not addressed to me, but allow me to chime in.


    I'm sorry, but I didn't understand what you wrote. ;( Could you please explain again what you mean by "re-copy all the data in case of one disk fail"? Do you mean that for a NAS used as a backup, a "basic" pool consisting of one disk without any redundancy should be created, which is then expandable later by adding another single-disk device?


  • Your post was not addressed to me, but allow me to chime in.
    I'm sorry, but I didn't understand what you wrote. ;( Could you please explain again what you mean by "re-copy all the data in case of one disk fail"? Do you mean that for a NAS used as a backup, a "basic" pool consisting of one disk without any redundancy should be created, which is then expandable later by adding another single-disk device?

    Yes. If your NAS is a backup, you have two copies of your data (the original on your PC and the backup on your NAS), so in this case redundancy on the NAS can be an option rather than mandatory.


    Creating a pool of 4 disks without redundancy on the NAS has some advantages and obvious disadvantages.


    Advantages:


    1 - You can use every disk as a data disk; no space is lost to redundancy https://docs.oracle.com/cd/E19…819-5461/gaynr/index.html
    2 - You can easily grow your ZFS pool by adding a single disk (zpool add) https://docs.oracle.com/cd/E19…819-5461/gazgw/index.html



    Disadvantages:


    1 - If one disk fails you lose the entire pool, so after you repair/attach a disk you have to recreate the pool from scratch and re-copy all the data from your PC (remember that you still have one copy of your data, the original).



    PS: This is my actual data backup strategy.
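The capacity trade-off being described can be put in numbers for the 4x8TB setup in question (a sketch of the raw arithmetic; ZFS metadata overhead is ignored):

```python
# Usable space: plain striped pool (no redundancy) vs. raidz1, with 4x8 TB.
disks, size_tb = 4, 8
stripe_tb = disks * size_tb        # every disk holds data: 32 TB usable
raidz1_tb = (disks - 1) * size_tb  # one disk's worth of parity: 24 TB usable
print(stripe_tb, raidz1_tb)  # 32 24
```

So the no-redundancy pool buys 8 TB of extra space at the price of losing the whole pool on any single-disk failure.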

  • If I understood correctly...


    To recap: today I have a workstation equipped with several hard drives of different capacities. On this workstation I have no disk aggregation or RAID; they are just disks formatted as NTFS.


    My problem is that my current backups are "messy". To be more precise, I work with photo files; files in RAW, TIFF and JPG format take up a lot of space, and as I have a "HUGE" collection it is becoming difficult to store everything and back it up correctly.


    So I thought of building a NAS to store my backups, which addresses several problems:


    - Mounting four 8TB disks in RAID 5 gives me a single filesystem, lets me organize my files on a single logical unit, and stops me from "scattering" them across several hard disks. The 24TB of usable space will be more than enough for my current needs. If I need more space tomorrow, it is possible to add an extra disk and grow the array.


    - The second interesting point is that RAID 5 allows me, in case of a disk failure, to restore the data by replacing the failed disk with a new one.


    To begin with, as I imagined it, my data is stored on my workstation and my NAS keeps my backups on four hard drives in RAID 5.


    It's not stupid to drop the redundancy, since it gains a third more usable space, but given the volume I think it will be faster to rebuild one disk's data than to re-copy everything.


    Once my NAS is up and my RAID 5 disks are OK, I will install four new 8TB disks in RAID 5 in my workstation, replacing my current disks. That way I will have parity between the data and the backup.


    Then again, maybe I'm wrong...

  • Please take this article into account: What are the different widely used RAID levels and when should I consider them?


    This sentence about RAID 5 is particularly noteworthy: "Perhaps the most critical issue with RAID 5 arrays, when used in consumer applications, is that they are almost guaranteed to fail when the total capacity exceeds 12TB. This is because the unrecoverable read error (URE) rate of SATA consumer drives is one per every 10^14 bits, or ~12.5TB."
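The quoted figure follows directly from the URE rate. Modelling read errors as independent at 1 per 10^14 bits (a spec-sheet worst case; many drives do better in practice), the chance of completing a full 24TB rebuild read without a single URE is small:

```python
import math

# Probability of at least one URE while reading 24 TB,
# assuming 1 URE per 1e14 bits (consumer SATA spec-sheet rate).
bits_read = 24 * 10**12 * 8              # 1.92e14 bits
p_no_ure = math.exp(-bits_read * 1e-14)  # Poisson approximation
p_ure = 1 - p_no_ure
print(f"P(at least one URE) ~ {p_ure:.2f}")
```

Under that pessimistic model, roughly 85% of rebuilds would hit at least one URE; whether that kills the array depends on how the RAID implementation handles the bad sector.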


  • It will only fail if you happen to hit a URE during the RAID 5 degradation and rebuild. Since he has backups, RAID 5 (RAIDZ1) is perfectly fine. You (or anyone else) shouldn't be frightening people every step of the way.

  • You (or anyone else) shouldnt be frightening people every step of the way.

    I personally would not use RAID 5 precisely because of that danger. RAIDZ is completely different because there is a checksum, and in case of a drive failure only the used data must be resilvered, instead of the whole disk as in classic RAID.


    Btw: it was not intended to scare anyone. But someone planning a NAS of that size (for private use) should know the possible risks.


    • Official post

    If I need more space tomorrow, it is possible to add an extra disk and grow the array.

    Yes, but there is a caveat: RAID 5 will only tolerate one drive failure, and with drives this size the rebuild will take a llllooooonnnnggg time, possibly even a week :) Another option is to add another drive and use five in RAID 6, which would allow for two drive failures.


    Having been on the receiving end of a failed drive in a RAID 5, only for another drive to start failing during the rebuild, I can tell you it is ball-breaking, and those drives were smaller.


    I can empathise with anyone who wants to use a RAID setup for large storage; it can simply make sense. Another option is RAID 10 (some users on here use it), but that reduces the available space to 16TB.


    Another option is ZFS, which for drives this size would probably be the better choice. I've used it in the past with mixed results, but that's just an opinion :)


    I would suggest you go with what you are comfortable with, but if you go with software RAID, please don't pull a drive to simulate a drive failure :) It simply doesn't work like that; you have to instruct mdadm what to do.


    Whatever you choose good luck with it :thumbup:
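For comparison, here is what the layouts mentioned above yield with 8 TB drives (a sketch of the raw arithmetic only; filesystem overhead is ignored):

```python
# Usable capacity and fault tolerance of the options discussed, 8 TB drives.
size = 8
raid5_4  = (4 - 1) * size   # 24 TB, survives 1 drive failure
raid6_5  = (5 - 2) * size   # 24 TB, survives 2 drive failures
raid10_4 = 4 * size // 2    # 16 TB, survives 1 failure per mirror pair
print(raid5_4, raid6_5, raid10_4)  # 24 24 16
```

Note that RAID 6 with five drives hits the same 24TB target as RAID 5 with four, at the cost of one extra drive, but with double the fault tolerance.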

  • If you're testing for mdadm failure, you SHOULD pull the drive out, precisely because you are testing a hardware failure. With ZFS it is different, because you can't just plug the existing drive back in; you need to wipe it clean and then add it.
    One ZFS advantage is faster resilvering than mdadm, plus scrubbing (I know mdadm has scrubbing, but ZFS's is much better).
    As for the resilver time, I'd guess that for drives this size it could take 2-3 days.
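The "only the used data is resilvered" point made earlier translates into a simple proportionality. With assumed numbers (8 TB disk, 60% full, 150 MB/s sustained, no concurrent load), the difference looks like this:

```python
# ZFS resilver (allocated data only) vs. classic full-disk rebuild.
disk_tb, used_fraction, rate_MBps = 8, 0.6, 150  # assumed figures
full_hours = disk_tb * 10**6 / rate_MBps / 3600  # whole-disk reconstruction
resilver_hours = full_hours * used_fraction      # ZFS walks only live data
print(f"full: {full_hours:.1f} h, resilver: {resilver_hours:.1f} h")
```

In practice a resilver on a fragmented, busy pool is slower than this straight-line estimate, which is how multi-day figures arise.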

    • Official post

    If you're testing for mdadm failure, you SHOULD pull the drive out, precisely because you are testing a hardware failure.

    If you pull a drive on a running mdadm RAID (i.e. hot-unplug it), the array will do nothing and still display as clean; if you reboot, the array comes back as inactive and does not display in the GUI.
