Rebuilding OMV - how to organize storage?

  • Hi guys,


    I have been absent from this forum for a long time and now need some consulting on storage myself.


    I still live in the old world, where RAID+LVM+FS is a great thing. I still believe in its strengths and the pros it comes with. Of course I also know its downsides. In other words, I have expert-level know-how on the old-school approach.


    However, I am not sure whether that is still the best way today for my NAS use case. So I am looking for some consulting from you guys on how to set up my new NAS (of course OMV :) ).


    The NAS currently consists of 4x4 TB WD Red drives forming a RAID5 without a spare. I only use it privately and store converted BDs, movies, pictures etc. on it. It has a very small section for other personal data. The personal data is backed up regularly; the media files currently are not backed up at all. I do rely on the redundancy, as user errors (aka deletes) are not possible from anyone other than me :)


    However, I would now like to put in larger disks. Those disks will carry the same profile and usage as before. Of course the current setup is very space efficient, as I only lose one disk to parity.


    Question:
    What other setup will give me redundancy, ideally with snapshotting that offers some protection against massive deletes (I do make mistakes) or the beloved ransomware?


    I am open to any suggestions and will then investigate the solutions.


    A short recommendation with the main benefits would be great.


    I highly appreciate your feedback.


    Ser

    Everything is possible, sometimes it requires Google to find out how.

  • Hi,
    I'm not an expert and my answer is probably not what you are looking for, but I would like to share some information about RAID5 with you:
    Some time ago I read an article about RAID5, disk size and the rebuild scenario. Statistical calculations indicate that beyond a certain disk size (I believe it was 4 TB) and a certain number of disks (I believe 5), the chance of hitting read (or possibly write) errors during a rebuild is quite high, and therefore the risk that the rebuild fails is quite high as well. The larger the disks and the more of them you have, the bigger the risk of losing the whole array during a rebuild (especially because in most cases the drives are all the same age). A rough back-of-the-envelope calculation is sketched below.
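    For a feel for the numbers, here is a quick sanity check. The 1-in-10^14-bits unrecoverable read error (URE) rate is an assumption taken from typical consumer drive spec sheets, not from the article, so treat the result as illustrative only:

    Code:
    # Odds of hitting at least one URE while reading the three surviving 4 TB
    # disks during a RAID5 rebuild, assuming a URE rate of 1 in 10^14 bits
    # (assumed consumer-class figure; check your drive's datasheet):
    echo '1 - e(-(3 * 4 * 8 * 10^12) / 10^14)' | bc -l
    # prints roughly 0.62, i.e. a ~60% chance the rebuild stumbles over a bad sector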
    From the real-world experience of a friend who works in IT, I can tell you: they had an 8-drive NAS with 4 TB disks, all brand new, all designed for 24/7 operation. After approx. 8 months the first disk failed. The disk was exchanged and the rebuild went well. A few days later the next disk failed. The story ends with successful rebuilds, but six disks had to be replaced because they died one after the other.
    A second, smaller NAS with 4x4 TB disks in RAID5 had a disk failure. The rebuild with the new disk failed because a second disk went down. The only way to rescue the data was a trick: using Hard Disk Sentinel, the relevant error counters of the disks were reset to zero and a backup of the RAID was started. The backup had to be done incrementally, because some counters had to be reset again from time to time...
    Personally, I don't rely on RAID5 alone. My OMV NAS has 4x4 TB, but I rsync everything to a second machine from time to time.
    Thought that might be interesting to share :=))

    • Official post

    (Regrets for the length. Writing this took on a life of its own.)
    _____________________________________________________


    I have to agree with MBGucky:
    Back in the day, when we were running hardware RAID controllers with 10+ GB drives (large back then) in 4-drive arrays (3 + parity), RAID made sense. (Also, budgets were not an issue. Complete sets of drives were on the shelf.) Since a striped volume could be read faster than a simple volume, there was a significant performance boost in network transaction handling as well. However, in modern times, with a good number of bottlenecks removed (new drive interfaces, faster drives, etc.), the performance question has changed.


    To my way of thinking, the traditional reasons for using RAID are nowhere near as compelling as they once were and, arguably, RAID has significant downsides that are growing as drives get larger. MBGucky is correct on two different fronts. The "write hole" susceptibility of hardware RAID, in a dirty or abnormal shutdown, can't be taken lightly. Also, given the sheer size of modern disks, even a low "compensated" bit error rate (1x10^-16) will ensure that errors are slowly written to the array at the hardware level. Note that file systems, operating above the controller, won't recognize errors created and written at the hardware level.
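    This is where a checksumming file system with a periodic scrub helps: it re-reads every block and flags (and, where a redundant copy exists, repairs) data that went bad silently underneath. A minimal sketch, assuming a BTRFS file system mounted at /srv/data (the path is just an example):

    Code:
    # Start a scrub that re-reads all data and verifies the checksums
    btrfs scrub start /srv/data
    # Check progress and whether any checksum errors were found or corrected
    btrfs scrub status /srv/data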


    So what does RAID actually give you?
    In a networked, server farm environment with hundreds of unforgiving users, RAID "assists" server admins with maintaining 24x7 server ops. If we're talking about relatively new equipment, RAID would provide some protection from the small but very real chance that a new drive might fail prematurely. (We referred to it as "infant mortality".) In such an environment, where the number of drives was large, RAID did improve overall "up time" numbers.
    On the other hand, in a home NAS, drives tend to be used until they fail. With aging drives, a drive rebuild can (and will) cause additional drive failures. (The rebuild / restripe process is a drive "torture test" for the remaining drives that are, typically, "geriatric".) So, at home, RAID would tend to give one a false sense of security which, given the size of modern drives and the length of the rebuild process, may precipitate a multi-drive failure. At that point, it's too late.


    In a home or small business environment, considerations are not the same:
    At home, at best, RAID adds some "fault tolerance". RAID provides some protection (not absolute) from the weakest link in PC hardware and its greatest probability of failure - the hard drive. RAID gives zero protection from other possibilities like the loss of a MOBO, a power supply, or an event that takes out the entire RAID array (a virus is but one example). A power surge can take out the entire box. (And while it's off topic, this is why I have "whole house" surge suppression on my power panel. I want to protect my PCs and the rest of my electronics.)


    Again, I have to agree with MBGucky, RAID is not backup.


    - A simple first level of backup would be a USB3 external drive that continuously receives changes to your shares, copied from your internal data drive with Rsync. (A quick sketch follows this list.)
    - At the second level, a full backup of your boot drive is not a bad idea either. When something goes wrong, it beats rebuilding the OS from scratch.
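    For that first level, a single Rsync command does the job. The paths below are placeholders for your data drive and the USB disk; add --dry-run first to see what it would do:

    Code:
    # Mirror the data shares to the external USB3 drive; --delete keeps it an exact copy
    rsync -a --delete /srv/dev-disk-by-label-data/shares/ /srv/dev-disk-by-label-usbbackup/shares/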


    Moving on, I have OMV on an R-PI, with a 4TB WD "My Passport" USB-powered drive that's doing what I described above. It's Rsync'ing my file server on a schedule where data is no more than a week old (an example schedule is sketched below). That gives me an additional level:
    - A full platform backup. (I also have a SD card backup of the R-PI boot drive.)
    If my primary server fails (Windows based for now), I simply activate the R-PI's Rsync'ed shares with SAMBA (preconfigured) and I'm back in business in a matter of minutes, using the exact same share names. (The R-PI is also doing the job on a tight power budget, at 12 to 15 watts.)
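    If anyone wants to copy that setup, the schedule can be one crontab entry on the R-PI. The hostname and paths are made-up examples, and this assumes SSH key access to the file server; OMV's own scheduled Rsync jobs in the web UI can do the same thing without touching cron:

    Code:
    # /etc/crontab style entry: every Sunday at 03:00, pull the shares from the file server
    0 3 * * 0  root  rsync -a --delete fileserver:/srv/shares/ /srv/dev-disk-by-label-passport/shares/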


    Going further than that:
    I have an old, nearly obsolete, 3rd server that's "cold", or off. I fire it up once every 2 months or so, sync it to the online server, and shut it down. It's about 10 years old now and it's been hopscotching through time. This level (cold) provides protection from voltage surges, preserves data from viruses or ransomware, and gives the best overall protection from hard drive failures.
    **This is the scenario where a hard drive will last a l-o-n-g time. 24x7 ops will kill a drive in 4 to 5 years. Running hard drives for a few hours every 2 or 3 months, well... I have drives that are over 10 years old and doing fine.**


    So, I have data backup, OS backup, platform backup, and disaster backup (surges, viruses, ransomware). Essentially, I can go back in time to 2 different intervals, if something corrupts my real time data.
    (BTW: The cost of drives is what it is, but full redundancy doesn't have to be expensive. An R-PI3 or, better yet, an Odroid-XU4 is more than capable of copying and moving files around, and they'll work fine with WD's 4TB external drives.)
    ______________________________________________________


    There are some who love RAID. Lately it seems to be gamers who, in pursuit of peak performance, will risk running RAID 0 for faster drive throughput. Setting that aside, I can't see a compelling reason to use RAID in a small office or home environment.


    It's entirely up to you but it comes down to priorities:
    Is your priority capacity (noting that enormous drives are relatively cheap these days), performance (realistically, your environment is just a few users), 24x7 up time (again, not really necessary in a SOHO), or data preservation?


    My priority is "preservation", which translates to "redundancy". I have e-mail and data that go back to Windows 95. There are also photos, a music collection, and other items that I refuse to allow to slip away in some chance accident. Accordingly, I'm going with 2 levels of data AND platform redundancy, with protection that goes back in time. I'm using BTRFS for its checksums and protection against "bit rot". Along the same lines, I'm going to give ZFS a serious look, but that's on the back burner. (It seems to have a significant learning curve.)
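    Since your original question mentioned snapshots against massive deletes and ransomware: with BTRFS that part is cheap. A minimal sketch, assuming the shares live on a BTRFS subvolume mounted at /srv/data (the paths, snapshot date and file name are just examples):

    Code:
    # One-time: a place to keep snapshots (must live on the same BTRFS file system)
    mkdir -p /srv/data/.snapshots
    # Create a read-only snapshot of the data subvolume, named by date
    btrfs subvolume snapshot -r /srv/data /srv/data/.snapshots/$(date +%F)
    # Restoring a file is just a copy out of an older snapshot
    cp -a /srv/data/.snapshots/2024-05-01/important-file /srv/data/

    Because the snapshots are read-only, a ransomware process running on a client can't overwrite them through the share, and an accidental mass delete only touches the live files, not the snapshots.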


    Your priorities may be different and, as you know, there's no absolute right or wrong way to do anything. This is offered as food for thought.


    Regards.
