Do I need RAID?

  • Hi all,


    I wanted to hear your opinions. I've been reading about RAID for some time, but I'm still not sure: should I use RAID? I only use my NAS to store files like movies, music, and pictures. I'm quite new to this NAS world.


    This is my HDD configuration:


    1 x 120 GB Samsung SSD: used for the OS
    4 x 2 TB WD Green: 2 used for file storage & 2 used for backup



    Thanks.

  • That is a decision you have to make!


    If you want to use all of your hard drive capacity, and the data on a failed drive (or drives) is not important to you, then go without RAID. But if you want some redundancy, you have to give up some capacity by creating some type of RAID array; then, if a drive fails, you have a chance of keeping the data intact while you replace the failed drive.

  • You say you are using two 2TB drives to store files and two 2TB drives for backup. Does this mean you are backing up the data drives to the backup drives, or are you using the backup drives to backup other systems/files in your home?


    If you are using the two backup drives to backup the two data drives, then you already have redundancy. I doubt you would benefit from any speed increase of RAID striping, based on your description of use. So really, RAID offers very little benefit in your scenario and it complicates things when it comes time to add more disks.
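

    As an aside, if the two backup drives really are dedicated to backing up the two data drives, a simple scheduled sync is usually all that is needed. A minimal sketch, assuming rsync is installed and the hypothetical mount points /srv/data1, /srv/data2, /srv/backup1 and /srv/backup2; adjust it to your own layout and run it from cron:

    #!/usr/bin/env python3
    # Minimal backup sketch: mirror each data disk onto its backup disk with
    # rsync. The mount points are hypothetical placeholders.
    import subprocess

    PAIRS = [
        ("/srv/data1/", "/srv/backup1/"),
        ("/srv/data2/", "/srv/backup2/"),
    ]

    for src, dst in PAIRS:
        # --archive preserves permissions/timestamps; --delete mirrors deletions
        # too, so each backup disk ends up matching its data disk exactly.
        subprocess.run(["rsync", "--archive", "--delete", src, dst], check=True)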


    The only real-world benefit you would realize from RAID would be that you would have real-time redundancy. This is a double-edged sword though, because corruption in the filesystem, deleted files, etc. would occur to the entire array, not just one disk. This is why RAID is not a replacement for backups. If you would be sacrificing your two backup drives to build a RAID array, then the answer is a resounding NO!


    I've been in the industry for many years, and I have seen many people go through the fascination with RAID. It almost always develops into a love/hate thing. RAID is not all what it is cracked up to be. In very specific circumstances it is the best solution. For everything else, it is often the worst.


    Having said this, I do like the benefits of a RAID 1 (mirror) on my OS drive. In your situation, that's where I would use RAID.
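

    If you do go that route, a software mirror under Linux is just a two-device mdadm array. A rough sketch of what creating one looks like, assuming two spare placeholder devices /dev/sdb and /dev/sdc (mdadm --create wipes them, so the names here are examples only, not a recipe for an existing OS disk):

    #!/usr/bin/env python3
    # Sketch: create a two-disk RAID 1 (mirror) with mdadm and show its state.
    # /dev/sdb and /dev/sdc are placeholders -- creating the array destroys
    # whatever is on them, so double-check device names before running this.
    import subprocess

    subprocess.run(
        ["mdadm", "--create", "/dev/md0", "--level=1",
         "--raid-devices=2", "/dev/sdb", "/dev/sdc"],
        check=True,
    )

    # /proc/mdstat shows the initial sync progress and the [UU] health flags.
    with open("/proc/mdstat") as f:
        print(f.read())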


  • That is, quite possibly, the worst article on RAID that I have ever read. It does an adequate job of highlighting the benefits of RAID, and completely ignores all of the potential pitfalls.


    For example, the section on hot spares explains what a hot spare does. What it fails to mention is that it is very common for RAID (5,6) rebuilds to fail. A failed rebuild results in 100% data loss. A hot spare guarantees that the rebuild process will begin without anyone's intervention. Bad, bad, bad idea. As long as the RAID is in a degraded state, one has time to back it up, image it, or whatever before attempting the rebuild. Hot spares are the worst idea ever implemented in RAID and they are responsible for more lost data than any other single feature in a system. Period.
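

    For illustration, here is a minimal, read-only sketch of what "check before the rebuild starts" can look like: it just parses /proc/mdstat for a missing member. The md0 name and the exact mdstat layout are assumptions; adapt it to your own array.

    #!/usr/bin/env python3
    # Sketch: detect a degraded md array by looking for a "_" in the [UU...]
    # status field of /proc/mdstat. Read-only -- meant as a pre-check before
    # deciding whether to image/back up the array or kick off a rebuild.
    import re

    with open("/proc/mdstat") as f:
        mdstat = f.read()

    # A status field like "[U_]" means one array member is missing.
    for flags in re.findall(r"\[[U_]+\]", mdstat):
        if "_" in flags:
            print("Array is degraded:", flags, "- back it up before rebuilding.")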


    The author also recommends hardware RAID over software RAID. Unless you are an enterprise user with a local supply of identical spare RAID controllers, hardware RAID is a terrible idea. If the controller dies, the array is down until you can find that identical replacement part. If the controller is integrated on your motherboard, same issue. If you purchased the controller or motherboard even a year ago, chances are you will have a lot of trouble finding an identical replacement. Software RAID is completely independent of the hardware and by far the best choice for anyone reading "PCWorld".
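

    To make the portability point concrete: a Linux software (mdadm) array keeps its metadata on the member disks, so on replacement hardware it can normally just be scanned for and reassembled. A rough sketch, assuming mdadm is installed and the script runs as root:

    #!/usr/bin/env python3
    # Sketch: reassemble existing mdadm (software RAID) arrays on new hardware.
    # The array metadata lives on the member disks themselves, so no particular
    # controller is required -- mdadm just scans the attached disks for it.
    import subprocess

    # Assemble every array whose superblocks are found on attached disks.
    subprocess.run(["mdadm", "--assemble", "--scan"], check=True)

    # Print the resulting array definitions (handy for /etc/mdadm/mdadm.conf).
    subprocess.run(["mdadm", "--detail", "--scan"], check=True)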


    I could go on...

  • Since you are using 2 drives for storage and another 2 for backup, I would set up a RAID 10 array, which is a mirrored stripe.


    I don't mean to be combative, but I am wondering why you would recommend this. What benefit(s) would the OP gain by doing so, in your opinion?


    He would have exactly the same amount of storage space, but no backup. Sure the storage would be redundant, but that isn't the same thing as a backup because a mirror is just as vulnerable as a single disk to things like file system corruption, accidental deletion, etc.


    Encouraging someone to implement a RAID instead of a backup is very bad advice.

  • What it fails to mention is that it is very common for RAID (5,6) rebuilds to fail.


    Can you back that statement up with some sources? You are the very first one to mention this; I have never heard such a statement.


    A failed rebuild results in 100% data loss.


    How can it? The rebuild happens on just the replaced device; the data on the other disks stays untouched. If there is an error, the RAID would still continue to work.


    A hot spare guarantees that the rebuild process will begin without anyone's intervention.


    That's correct.


    Bad, bad, bad idea.


    Again: sources, articles, etc.?


    As long as the RAID is in a degraded state, one has time to back it up, image it, or whatever before attempting the rebuild.


    This is possible, but to me it carries as much risk as a normal rebuild.


    Hot spares are the worst idea ever implemented in RAID and they are responsible for more lost data than any other single feature in a system. Period.


    Sources?


    because a mirror is just as vulnerable as a single disk to things like file system corruption


    That's not 100% true. Not sure if mdadm supports scrubbing on RAID 1, though.
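

    (For what it's worth, mdadm-managed arrays, RAID 1 included, can be scrubbed through the kernel's md sysfs interface. A small sketch, assuming the array is md0 and the script runs as root:)

    #!/usr/bin/env python3
    # Sketch: scrub an md array (RAID 1 included) via sysfs and report the
    # mismatch count once the check is done. "md0" is an assumed array name;
    # this needs root.
    import time

    MD = "/sys/block/md0/md"

    def read(name):
        with open(f"{MD}/{name}") as f:
            return f.read().strip()

    # Start a read-and-compare pass over the whole array.
    with open(f"{MD}/sync_action", "w") as f:
        f.write("check\n")

    # sync_action goes back to "idle" once the scrub has finished.
    while read("sync_action") != "idle":
        time.sleep(30)

    print("mismatch_cnt:", read("mismatch_cnt"))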


    accidental deletion


    That's 100% correct, and one of the reasons I do not recommend it either.


    Encouraging someone to implement a RAID instead of a backup is very bad advice.


    100% agreed.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Hi David,


    My day job for the past 15 years has been working in data centres full of enterprise hardware, most of it containing customer data. My statements are based on that experience and that of my many colleagues.


    I have seen RAID arrays fail so many times I lost count. I have also seen RAID rebuilds fail many times. The drives are deployed at the same time. When a drive fails, its partners are also quite long in the tooth and it is not uncommon to have more drives fail during this process. RAID 6 helps, of course, but when dealing with arrays of 14 drives or more, the risks increase exponentially. The sustained stress of rebuilding the arrays is what we assume is the primary cause of subsequent drive failures. This type of experience with RAID is not uncommon. A quick Google search will provide you with similar experiences and opinions, for example http://blog.open-e.com/why-a-h…-hard-disk-is-a-bad-idea/


    A rebuild thrashes disks far more than a backup. I've seen rebuilds thrash drives for several days continuously. Often when this happens we recommend that the customer authorize us to simply build the array from scratch and restore from backups/images. It typically takes much less time than a rebuild.

  • I have also seen RAID rebuilds fail many times.


    I've been around here for a couple of years, and I have yet to see a degraded array fail to rebuild. The only exception is my own case: I had a second drive fail in a RAID 5 array, but luckily, thanks to the awesome LSI support, I could restore the array. Otherwise I've not seen a single mdadm rebuild fail here.


    The drives are deployed at the same time.


    Didn't you say you work in an enterprise environment? Aren't the drives in each storage 'pod' from different revisions by default?


    When a drive fails, its partners are also quite long in the tooth and it is not uncommon to have more drives fail during this process.


    This process, as you call it, is the same to me whether I rebuild or back up. The backup may have a higher chance of data survival, but that's what RAID 6 in enterprise environments is for, isn't it?


    RAID 6 helps, of course, but when dealing with arrays of 14 drives or more, the risks increase exponentially.


    May I throw in the concept of Backblaze here; are you familiar with it? They run 45 drives per 4U chassis, in three 15-drive RAID 6 sets. They have one day each week where they go through their datacenter(s) and replace failed drives.


    The sustained stress of rebuilding the arrays is what we assume is the primary cause of subsequent drive failures.


    Again, to my knowledge, even a software RAID can rebuild an array at close to the hard drive's speed, so the sustained stress should be very similar. The only difference I see is that for a backup (for this example, let's consider a 3-disk RAID 5) only the two surviving disks need to be read and two thirds of the missing drive's data needs to be recreated, while on a rebuild the last third would also need to have its parity recalculated, of course.
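

    (On the speed point: under Linux the md resync/rebuild rate is also throttled by two sysctls, so a rebuild does not necessarily hammer the disks at full platter speed. A read-only sketch of checking the current limits, values in KiB/s:)

    #!/usr/bin/env python3
    # Sketch: print the kernel's md resync/rebuild throttle settings.
    # Values are in KiB/s; raising speed_limit_min makes a rebuild push harder,
    # at the cost of more sustained load on the remaining disks.
    for name in ("speed_limit_min", "speed_limit_max"):
        with open(f"/proc/sys/dev/raid/{name}") as f:
            print(name, "=", f.read().strip(), "KiB/s")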


    A rebuild thrashes disks far more than a backup.


    With that little extra sustained load, as calculated above, I would rather look for the problem elsewhere, like drives overheating, but I doubt that would happen in an enterprise environment at all.


    I've seen rebuilds thrash drives for several days continuously.


    A grow operation may take that long, but I would scratch my head big time if a rebuild took longer than a couple of hours.


    It typically takes much less time than a rebuild.


    And that's the best part of it: you have to have a backup, which we are both going to advise everybody to keep, aren't we? (And the "less time" part is only true if that backup is on HDD storage and not on tape, right?)


    Please don't mind me challenging what you say; you certainly have your experience. It just sounds a bit unrealistic to me that a rebuild would have a much higher failure rate than a backup, since I have not yet experienced a rebuild problem, the only exception being my own case, as I said above.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • 1) RAID is no backup.
    2) With many drives, always go with double redundancy.
    3) When 2) applies, use drives from different manufacturers.


    If you stick to this, RAID is not bad at all.


    If it were as bad as you say, it would not be used as much as it is.

  • In order to avoid a p!ssing match, I'll simply say we each have our own experiences and thus we base our decisions and advice on that.


    RAID has its place, but it is over-used and misused by a lot of people, IMO. That's all I'll say on the subject.

  • If you want headaches, use RAID. If you use it without backing up your data, you are an idiot. Most people here do not need the increased read/write speeds; SATA 3 is plenty fast.


    Khaliq, you do not need to use RAID. If you want pooling of your data, there are alternatives like the union filesystems plugin. You could also use Greyhole or symlinks.
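

    For example, if all you want is one browse point across a couple of independent data disks, even plain symlinks will do it, with no RAID involved. A minimal sketch with the hypothetical mount points /srv/data1 and /srv/data2 pooled under /srv/pool:

    #!/usr/bin/env python3
    # Sketch: "pool" two independent data disks under one directory with
    # symlinks, as a no-RAID alternative to striping or union filesystems.
    # The mount points and pool path are hypothetical.
    import os

    POOL = "/srv/pool"
    DISKS = ["/srv/data1", "/srv/data2"]

    os.makedirs(POOL, exist_ok=True)
    for disk in DISKS:
        for entry in os.listdir(disk):
            link = os.path.join(POOL, entry)
            if not os.path.lexists(link):
                # Point the pool entry at the real folder on whichever disk holds it.
                os.symlink(os.path.join(disk, entry), link)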

  • Even though I have had flawless experiences with RAID, I still generally recommend against it. It definitely seems like people who don't run their server 24/7 with battery backup and no hard drive spindown have significantly more problems with RAID arrays.


  • I've been running my NAS boxes for years, with full backups and battery backup of course. NEVER had any issues with rebuilds, growing arrays, etc. Lucky me...

    | HP Microserver N54L | 8GB RAM (ECC) | BIOS-Mod | 6 Disks - 40GB Intel SSD (Erasmus), 40GB Intel SSD (VMStore), 2TB Seagate NAS HDD (Data), 2TB WD Red (DataBackup), 6TB WD Red (Rec), 6TB WD Red (RecBackup) | Delock PCI-E USB 3.0 (Art. 89315) | deleyCON Dual DS 2x USB 3.0, 1x 2,5" & 1x 3,5" HDD | APC Back-UPS ES 700G | rsync to remote site | NFS | SMB/CIFS | SSH | omv-extras.org - VirtualBox |
