Storage configuration recommendation?

  • Hi all,


    I'm planning to install my first OMV system and after reading numerous guides it looks to be a good choice! I'm coming from an xpenology system and previously had used FreeNAS/NAS4Free.


    I would like some advice on storage setup for my system. Previously I had 8x 2TB drives in RAID5 and for the new build would like to add 2x 3TB drives. My thoughts are to run the OMV OS with all plugins, configurations etc on the 2x 3TB drives as a RAID mirror, and then create a RAID 5 volume for the 8x 2TB drives to store my network shares.


    Would I be best in that case to create a RAID1 volume in the BIOS to install OMV onto, then create a RAID5 volume within OMV from the other drives? Or would it be better/is it even possible to create a smaller partition on the RAID1 volume for OMV, then add the remaining storage of the RAID1 volume to my RAID5 volume in OMV?
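    For reference, the usual Linux route skips BIOS 'fakeRAID' entirely and uses software RAID (mdadm). A rough sketch of the layout described above — all device names here are assumptions, check yours with lsblk first:

    ```shell
    # Hypothetical sketch only -- /dev/sda..sdj are assumed device names.
    # Verify your actual devices with: lsblk

    # RAID1 mirror from the two 3TB drives (for the OS/config side)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

    # RAID5 from the eight 2TB drives (for the network shares)
    mdadm --create /dev/md1 --level=5 --raid-devices=8 /dev/sd[c-j]

    # Check array state and initial sync progress
    cat /proc/mdstat
    ```

    (In practice the Debian/OMV installer would set up the OS mirror; the data array can then be created from the OMV web UI, which drives mdadm underneath.)
    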


    I'm open to any suggestions. My main priority is data redundancy against drive failure; I don't mind the wait time of a rebuild if a disk needs to be replaced, as long as everything is 'automated'.


    Thanks in advance!

  • No backup strategy/implementation?

    Not too worried about backup beyond the RAID array. I have my photos stored on an external drive, and the RAID array should be enough redundancy for the rest of the data. If it was lost due to theft or something else unforeseen it would be annoying, but not the end of the world.

    Automatic rebuild isn't supported in OMV. It is on Synology and WD devices, but here you need to do it manually.

    Oh, OK. So if I had a drive failure and had notifications/reports turned on, I would be emailed if there was a drive failure, then I could replace the faulty drive and begin the rebuild/repair from the webgui?
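    That's the usual flow. On the command line the same replacement steps look roughly like this — array and device names are assumptions, check /proc/mdstat and your notification mail for the real ones:

    ```shell
    # Hypothetical drive-replacement sketch for an mdadm array
    # (/dev/md0 and /dev/sdf are made-up names for illustration)
    mdadm --manage /dev/md0 --fail /dev/sdf     # mark the dying disk as failed
    mdadm --manage /dev/md0 --remove /dev/sdf   # remove it from the array
    # ...power down, physically swap the disk, power up...
    mdadm --manage /dev/md0 --add /dev/sdf      # add the replacement; rebuild starts
    cat /proc/mdstat                            # watch the rebuild progress
    ```

    The OMV web UI's 'Recover' action does essentially the same thing behind the scenes.
    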

  • backup beyond the RAID array


    Backup and RAID are two completely different things and have nothing in common. The former is about data protection and the latter about availability.


    Why do home users never give a sh*t about data protection while wasting resources on availability (also called 'business continuity') that they don't need at home anyway?

  • But if it has RAID1, one disk failure is not terrible

    Not even true. More than once I've dealt with systems relying on a primitive mirror like RAID-1 that suffered data corruption, where it then turned out that one of the drives had already started failing days or weeks earlier. It's simply naive to think HDDs die immediately; usually they don't.


    Primitive RAID modes, and especially RAID-1, are rather useless these days when we have far better options available that also take care of data integrity, like a ZFS mirror or btrfs raid1 mode.


    Anyway: it's a matter of fact that storage professionals take care of

    • data protection (doing backups with versioning in various places)
    • data integrity (fighting bit rot)
    • data availability (ensuring 'business continuity')

    while home users, for whatever bizarre reasons, only care about what's least interesting to them: availability and this RAID stuff.

  • @Mr.Grape & @tkaiser
    After browsing these forums it's clear you have strong opinions about RAID, and you seem to be very knowledgeable on the subject, so I'd be interested in your opinions on the best-use scenario.


    I have used ZFS with nas4free and freenas in the past and replaced a faulty drive in that scenario with minimal disruption and no data loss. I have also used RAID5 in my Xpenology system, again with a failed drive that was replaced with minimal disruption and no data loss.


    My NAS usage scenario is for:
    -keeping personal and important files (photos and documents) which are also backed up onto an external drive and partially in cloud storage
    -for storing my media collection of music and converted DVDs
    -for temporarily storing numerous computer backups for a short period of time (approx 1 month) for my computer repair business


    I have external backups of my important files (documents and pictures) both onsite and offsite in case something drastic happens. For my media collection I am happy to just have redundancy, as it is quite a large collection that would require a lot of additional storage to back up, and if I lost it (although frustrating and time-consuming) I could convert it again if need be. The customer backups are there for peace of mind but technically are not my responsibility IF something happened, and again would require a lot of additional storage to keep 'backed up'.


    So with approximately 10TB of fluctuating data and the info above, what would you suggest as a 'good solution' for what I need to do? I currently have 8x 2TB drives in a RAID5 array handling this, and as I say, I have had to replace drives over the years without losing data and only sacrificing 2TB of storage. To back it all up I would need an equal amount of backup storage, despite your claims of 'wasted' storage in a RAID array.


    I'm asking this objectively as I am open to alternative solutions, but so far you have only given reasons not to do it based on personal experience and knowledge, and haven't offered a better alternative solution.

  • I have used ZFS ... in the past


    ZFS is a more modern approach to old problems. ZFS adds two features:

    • data integrity checking and self-healing at the filesystem layer ('checksums' and scrubbing)
    • snapshots, and through this a pretty primitive but effective mechanism for data protection (if you always keep at least a few snapshots, corruption or loss of data due to 'human or software failure' is very unlikely)
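    Both features map to a handful of commands. A sketch — the pool and dataset names ('tank', 'tank/shares') are invented for illustration:

    ```shell
    # Data integrity: walk every block, verify checksums, self-heal
    # from redundancy where possible
    zpool scrub tank
    zpool status tank      # shows scrub progress and any repaired errors

    # Data protection: cheap point-in-time snapshots
    zfs snapshot tank/shares@before-cleanup
    zfs list -t snapshot                  # list existing snapshots
    zfs rollback tank/shares@before-cleanup   # undo 'human or software failure'
    ```

    Scrubs are typically scheduled (e.g. monthly via cron), and snapshot rotation can likewise be automated.
    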

    RAID-5 also provides some sort of data integrity below the filesystem layer (the parity information used to rebuild the RAID when a disk goes missing is somewhat similar to what a 'checksumming' filesystem does above). But since this happens one layer below the filesystem, a RAID scrub can 'repair' stuff at the block-device layer in a way that results in filesystem corruption above -- been there, repaired that multiple times.


    My rant above was about RAID-1, which is IMO just an insane waste of disk space these days since it doesn't even provide data integrity features, unlike RAID-5 or RAID-6 (the latter being the only sane recommendation with large drives if you do not want to go the ZFS route). Unfortunately forum users are even encouraged by other forum users to implement this insane BS, especially on incapable hardware.


    I was asking as the first answer in this thread about a backup strategy/implementation, and fortunately you have one. So everything is fine (except that single redundancy with RAID-5 and large disks isn't a great idea, and you're missing some protection since no snapshot functionality is used).

    Thanks @tkaiser for the thorough explanation. I do agree that a RAID1 mirror is a waste of space with the potential for data corruption, compared to a backup plan to an external drive with rsync or similar.


    So my understanding is that your recommendation would be to create a ZFS RAIDZ volume over RAID-5, which I understand can be done in OMV with a plugin? I've seen some info about that browsing the forums so I'll look into that.

  • my understanding is that your recommendation would be to create a ZFS RAIDZ volume over RAID-5


    No, for the simple reason that AFAIK RAIDz on Linux still does not implement 'sequential resilver'. When you replace a drive in a RAID-5 or RAID-6, the rebuild runs in a linear/sequential fashion, and sequential IO is something HDDs are quite good at. With RAIDz the sequence of filesystem objects matters, and with arrays that have been filled over time you end up with a resilver (rebuild) IO pattern that is largely random IO (which is something all HDDs suck at).


    So if one disk dies of age and such a random-IO resilver then starts, this puts much more stress on the remaining array members compared to a traditional RAID (or RAIDz on platforms other than Linux -- on Solaris, for example, this was solved long ago).


    Usually we only use ZFS on large filers, and we usually avoid RAIDz, following this rule: http://jrs-s.net/2015/02/06/zf…e-mirror-vdevs-not-raidz/
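    The 'pool of mirrors' layout that article recommends looks roughly like this — pool name and device names are made up for illustration:

    ```shell
    # Hypothetical sketch: one pool built from three mirror vdevs
    # (device names /dev/sdc..sdh are assumptions)
    zpool create tank \
      mirror /dev/sdc /dev/sdd \
      mirror /dev/sde /dev/sdf \
      mirror /dev/sdg /dev/sdh

    # Growing the pool later just means adding another mirror vdev:
    zpool add tank mirror /dev/sdi /dev/sdj
    ```

    Resilvering a mirror only reads from its one surviving partner, which is part of why this layout rebuilds faster and with less stress than RAIDz.
    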


    I recently set up a RAIDz3 for archive/backup purposes, but only after extensive tests with multiple scrubs and resilvers to ensure performance doesn't suck too much (which turned out to be the case). With large HDDs, and a bunch of them, I wouldn't choose anything below triple redundancy, since this system is not backed up to another one anymore.
