2 RAID 1 sets with 4 discs

  • Ladies and gentlemen,


    I have an HP MicroServer with 4 disks (2 x 4 TB and 2 x 3 TB, from different manufacturers (Seagate / WD)), running on Debian and using Greyhole for redundancy (kind of).
    The system is running on a 60 GB SSD.
    Now I want to migrate to OMV.


    Question: Is it possible to have 2 RAID 1 sets (both 4 TB disks in set one, both 3 TB disks in set two) for an overall usable space of 7 TB?


    Thanks and best regards


    Juergen

  • I personally consider RAID 1 (mdraid) the worst choice possible. I would either create two RAID10 mdraids out of those disks and then put a multi-device btrfs on top (with compress=lzo) to end up with 7 TB of disk space. Or create two zmirrors out of the disk pairs and combine them into one pool (should be as fast as the above, or even faster). Or simply use btrfs' own RAID 1 functionality with all 4 disks to get the same amount of space (btrfs uses a different approach than mdraid, so you can combine disks of different sizes).
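
    To illustrate the last option with these disks: btrfs RAID 1 mirrors data at the chunk level, not the device level, so usable space is roughly half of the total capacity as long as the largest disk is no bigger than all the others combined. A quick sketch of that rule of thumb (the mkfs command at the end uses placeholder device names):

    ```shell
    # btrfs RAID 1 usable capacity with mixed disk sizes (rule of thumb):
    # usable = total/2 if the largest disk <= sum of the others,
    # otherwise usable = sum of the others
    disks="4 4 3 3"   # sizes in TB: 2 x 4 TB + 2 x 3 TB
    total=0; max=0
    for d in $disks; do
      total=$((total + d))
      if [ "$d" -gt "$max" ]; then max=$d; fi
    done
    rest=$((total - max))
    if [ "$rest" -ge "$max" ]; then capacity=$((total / 2)); else capacity=$rest; fi
    echo "${capacity} TB usable"   # prints "7 TB usable" for this disk set

    # Creating such a filesystem would look roughly like this (placeholder devices!):
    # mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    ```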


    Anyway: how do you back up those 7 TB?

  • Hi,


    Thanks for the fast reply.
    To be honest, I do not know much (or even less ;) ) about btrfs.
    I want a simple solution which is directly supported by the system (in this case OMV). Also, I want the chance to put a disk into another PC to try to rescue the data if the RAID fails.
    That's why I have used Greyhole for the last few years. Simple and easy, but not very elegant.


    A backup is not necessary for all of the data on the drives. I want a RAID just in case one drive fails (which has already happened to me).
    Backup of the most important data is done with rdiff-backup to a RasPi with a multi-TB hard disk.
    For the rest of the data (movies and stuff like that)... if it's gone, it's gone ;)
    I know a RAID is no replacement for a backup in case of accidental deletion. That's why I back up my important data onto the RasPi. If all my movies are gone because I deleted them by accident, I will call myself "stupid" and that's it. Bad luck, my fault!


    Thanks and best regards,



    Juergen

  • Ok, so close to zero 'data integrity' and 'data protection' requirements, just 'one continuous large storage pool of 7 TB with some redundancy'?


    I would still use either 'zmirrors as vdevs' or btrfs' own RAID 1 mode to achieve this goal, but I have to admit that, dealing with this stuff daily, I am probably the wrong person to judge how complex it feels when you are new to OMV :)


    So I'm stepping back now, in the hope that others make suggestions that are more deeply integrated into OMV.
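
    For reference, the 'two zmirrors in one pool' layout would look roughly like this once a ZFS plugin is installed (a sketch only: the /dev/sdX names are placeholders, and zpool create wipes whatever is on the disks):

    ```shell
    # Two mirrored vdevs (4TB+4TB and 3TB+3TB) striped into one ~7 TB pool.
    # Placeholder device names; /dev/disk/by-id/... paths are more robust.
    zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

    # Verify the layout and the resulting capacity:
    zpool status tank
    zpool list tank
    ```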

  • Still, with 2 RAID 1 arrays you are wasting half of your drives for no reason. If you want the drives to be readable outside of the array, you could go with SnapRAID and mergerfs. That would only 'waste' one drive.
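
    To sketch what that could look like (the mount points and labels below are made up): SnapRAID wants the parity disk to be at least as large as the largest data disk, so one 4 TB drive would hold parity, leaving 4 + 3 + 3 = 10 TB of pooled data space.

    ```
    # /etc/snapraid.conf (sketch with hypothetical mount points)
    parity /srv/dev-disk-by-label-disk1/snapraid.parity   # a 4 TB disk for parity
    content /var/snapraid.content
    content /srv/dev-disk-by-label-disk2/snapraid.content
    data d1 /srv/dev-disk-by-label-disk2
    data d2 /srv/dev-disk-by-label-disk3
    data d3 /srv/dev-disk-by-label-disk4

    # /etc/fstab entry pooling the data disks with mergerfs (one line):
    # /srv/dev-disk-by-label-disk2:/srv/dev-disk-by-label-disk3:/srv/dev-disk-by-label-disk4 /srv/pool fuse.mergerfs defaults,allow_other 0 0
    ```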

  • Hi,
    I'm exactly in the same situation.


    I've built a NAS with 4 HDDs (4 TB each) and 1 SSD for the system. I need storage space (for data that is not really critical, like movies). So I would like to build two RAID 1 arrays (2 x 4 TB) on my system.


    So I've tried: the first RAID 1 is OK. But after a reboot of the system, the second RAID 1 disappears from the RAID management; I've tried several times. So I wonder if it is possible to create 2 RAID 1 arrays on an OMV system, or if I'm doing something wrong?


    I should mention that I'm a beginner with OMV, Debian... :D


    Thanks and best regards.

  • Thanks !


    OK, I'm discovering ZFS. Sorry, I'm a noob, but what is the purpose of this plugin?


    I understand (after reading other threads) that the idea of creating 2 RAID 1 arrays is rather nonsense because of the lost storage space. But I still wonder why the second RAID 1 I create disappears? And whether OMV allows it at all (I can't be the only one to ask this question :/ )? Maybe I need to create a dedicated thread for this; tell me if I should.

  • ZFS... I've read that it requires a lot of memory?

    Nope. That's just what FreeNAS forum moderators tell their users, since they're tired of people not reading the fine print, then running into trouble and asking the same questions again and again.


    The only thing that needs vast amounts of RAM with ZFS is deduplication. If you don't need that, you're fine. Then 2 'issues' remain:

    • ZFS has defaults for how to use the available RAM for buffers/caches. If you're running very low on RAM, those need adjustment
    • ZFS on Linux allocates memory in a way that differs from all other filesystems, which is where the urban myth that 'ZFS eats up all your memory' originates. 'Free memory' in Linux is nothing you want to have; usually all free memory is used by filesystem buffers/caches (in the kernel). With ZFS it's exactly the same, just another process claims the memory

    If you're new to ZFS (or anything else that does things slightly differently, e.g. btrfs), some time spent on learning the basics is needed :)
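
    For completeness, the first point (capping the ARC on a machine with little RAM) is a one-line module option on Debian; the 2 GiB value below is just an example:

    ```
    # /etc/modprobe.d/zfs.conf (pick a value that fits your machine)
    # Cap the ZFS ARC at 2 GiB (the value is in bytes):
    options zfs zfs_arc_max=2147483648
    ```

    Run `update-initramfs -u` and reboot afterwards so the setting applies early.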

  • OK, thanks! Indeed, I have to learn... and to determine what's the best solution for me (searching for a user-friendly solution for a beginner... oops, that's called an off-the-shelf NAS :saint: ).


    Sorry, I'm still stuck on the same question: what kind of mistake am I making when the second RAID 1 array disappears after a reboot of OMV?
