Request: Possibility for several Snapraid disk groups / config files

  • Hi,


    I want to extend my NAS with a group of new disks, and I just realized that the snapraid plugin only allows for one drive pool, so I would have to integrate the new drives into the existing one. This is not my favourite choice because the "old" pool consists of 5 disks with 8TB each, and the new one will consist of 5 disks with 18TB each. With a single pool I have the following options:


    - 5x8TB + 4x18TB data + 18TB Parity, which is discouraged by the snapraid developers because of the imbalance between data and parity drives;

    - 5x8TB + 3x18TB data + 2x18TB Parity, which means that I could only use 60% of the new disk space while gaining only a minimal increase in protection against drive failure.


    So what I would like is one 4+1 pool with the 8TB drives and another 4+1 pool with the 18TB drives, which gives me 10TB more usable space and is only marginally less safe: I would only lose data if two disks from the same pool died simultaneously.
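    For illustration, here is the usable-capacity arithmetic behind that comparison (plain shell arithmetic, parity drives excluded; the numbers are mine, not from any tool):

    # single pool, double parity: 5x8TB + 3x18TB data
    echo $(( 5*8 + 3*18 ))    # 94 TB usable
    # two pools, single parity each: 4x8TB + 4x18TB data
    echo $(( 4*8 + 4*18 ))    # 104 TB usable, i.e. 10TB more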


    So my question is whether the snapraid plugin could be extended to allow for several groups / config files. Pretty please?

    • Official Post

    Another option would be to make a single group with this configuration:

    4x8TB + 4x18TB data

    1x8TB + 1x18TB parity

    The first parity drive (8TB) would protect all of the 8TB drives along with the first 8TB of each 18TB drive. The second parity drive (18TB) would protect all of the data on all of the disks.
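    As a rough sketch, the snapraid.conf for such a group would use two parity levels, something like this (the mount-point paths are placeholders, not from a real system):

    # first-level parity on the 8TB parity drive, second-level on the 18TB parity drive
    parity /srv/disk-8tb-p/snapraid.parity
    2-parity /srv/disk-18tb-p/snapraid.2-parity
    content /var/snapraid.content
    data d1 /srv/disk-8tb-1
    # ... one "data" line per remaining data drive

    Whether SnapRAID actually accepts a first-level parity smaller than the largest data disk is exactly the kind of question for their forum.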

    I've seen similar setups, but I'd suggest asking about this on the SnapRAID forum at https://sourceforge.net/p/snapraid/discussion/ before doing so. They will know better than anyone whether this configuration would be stable.

    • Official Post

    my question is whether the snapraid plugin could be extended to allow for several groups / config files. Pretty please?

    It already has been. It was in the testing repo, which you can't enable anymore. No one used it to tell me whether things were working right, and most complained that the changes broke their snapraid-diff scripts.


    wget https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/debian/pool/main/o/openmediavault-snapraid/openmediavault-snapraid_6.2.1_all.deb

    sudo dpkg -i openmediavault-snapraid_6.2.1_all.deb
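
    To confirm the package actually installed, a standard dpkg query should do (this is generic dpkg usage, not part of the plugin):

    dpkg -s openmediavault-snapraid | grep Version    # should print 6.2.1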

    omv 7.1.0-2 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.2 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.5 | scripts 7.0.7


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Another option would be to make a single group with this configuration:

    4x8TB + 4x18TB data

    1x8TB + 1x18TB parity

    The first parity drive (8TB) would protect all of the 8TB drives along with the first 8TB of each 18TB drive. The second parity drive (18TB) would protect all of the data on all of the disks.

    I've seen similar setups, but I'd suggest asking about this on the SnapRAID forum at https://sourceforge.net/p/snapraid/discussion/ before doing so. They will know better than anyone whether this configuration would be stable.

    OK, I will ask the snapraid guys if this is possible. Up to now, I have always read that the parity disk has to be at least as big as the biggest data disk.

  • It already has been. It was in the testing repo, which you can't enable anymore. No one used it to tell me whether things were working right, and most complained that the changes broke their snapraid-diff scripts.


    wget https://github.com/OpenMediaVault-Plugin-Developers/packages/raw/master/debian/pool/main/o/openmediavault-snapraid/openmediavault-snapraid_6.2.1_all.deb

    sudo dpkg -i openmediavault-snapraid_6.2.1_all.deb

    If I use this package, will it update later when omv-snapraid is updated?


    Also, since it was in the "testing" repo: Is this stable / safe to use?

    • Official Post

    If I use this package, will it update later when omv-snapraid is updated?

    yep.


    Is this stable / safe to use?

    There shouldn't be any risks to data. You can't schedule a diff, but all of the manual commands (sync, etc.) seem to work just fine. The plugin was just changed to support multiple arrays and split parity. This involved creating a separate snapraid config file for each array.
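
    The generated files can be listed directly; the names contain a per-array UUID and will differ from system to system (the names below are placeholders):

    ls /etc/snapraid/
    # omv-snapraid-<uuid-of-array-1>.conf
    # omv-snapraid-<uuid-of-array-2>.conf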


  • yep.


    There shouldn't be any risks to data. You can't schedule a diff, but all of the manual commands (sync, etc.) seem to work just fine. The plugin was just changed to support multiple arrays and split parity. This involved creating a separate snapraid config file for each array.

    OK, great. I'm going to install it and report back on how it works. This will take a few weeks, though, as I haven't bought the hardware yet.

  • I installed the package, but the last scheduled sync crashed my system. (It ran for hours even though there were no big changes, and the whole system was unresponsive even from the terminal, but the HDD LED was constantly on.)


    I tried to run a manual "check", but it told me there was no parity disk.


    Looking at the configuration file, I noticed a peculiarity in the line for the parity disk: it says "4-" in front of "parity", which I guess shouldn't be there.



    Also, I'm confused by the "Parity Num" column in the "Snapraid Drives" tab: Why is it that two of the data drives have the same value, but the other ones are different?


    • Official Post

    Also, I'm confused by the "Parity Num" column in the "Snapraid Drives" tab: Why is it that two of the data drives have the same value, but the other ones are different?

    The parity num column is for supporting split parity. The import process is hard, and I might be able to improve it, but you should edit the disk and change the parity number to 1. Your existing scheduled task will not work either: the scheduled diff script in that version of the plugin doesn't support multiple arrays. You will have to run checks manually until the script can be updated.
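
    Until then, a manual run would look something like this (the filename is a placeholder; use the actual one from your /etc/snapraid directory):

    snapraid -c /etc/snapraid/omv-snapraid-<uuid>.conf diff
    snapraid -c /etc/snapraid/omv-snapraid-<uuid>.conf sync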


  • OK. I can change the parity number for the parity disk, but not for the other ones. Will this work?

    Two more things:


    1. I wasn't running a diff, but a sync when it crashed. Should I also not run scheduled syncs?


    2. How about those funny "4-parity" entries in the config file? Should I leave them like that or change them?

    Edit: Ah, OK, that disappeared after changing the parity number to 1.

    • Official Post

    OK. I can change the parity number for the parity disk, but not for the other ones. Will this work?

    The parity number on non-parity drives is ignored.

    I wasn't running a diff, but a sync when it crashed. Should I also not run scheduled syncs?

    How are you running them?

    How about those funny "4-parity" entries in the config file? Should I leave them like that or change them?

    You should never edit a config file by hand, since the changes would be overwritten by the plugin. If you change the parity number of the parity drive, it should "fix" the config file.
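
    For illustration, the parity line the import produced versus what it should look like after the fix (the path is a placeholder):

    # imported with parity number 4:
    4-parity /srv/disk-parity/snapraid.parity
    # after setting the drive's parity number to 1 in the plugin:
    parity /srv/disk-parity/snapraid.parity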


    • Official Post

    Like this:

    That won't work anymore; it is still using the old snapraid.conf file. It would look something like this now:

    snapraid -c /etc/snapraid/omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf sync
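
    With two arrays, a scheduled task would presumably need one such line per config file (the second filename is hypothetical):

    snapraid -c /etc/snapraid/omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf sync
    snapraid -c /etc/snapraid/omv-snapraid-<uuid-of-second-array>.conf sync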


  • Hmm, I started a "snapraid check" job (with the config file you mentioned), and it crashed the system after a few hours.


    Then I tried a "snapraid sync" (also with the new config file), and it crashed during or after reading the ".content" file.


    In both cases, the whole system became unresponsive (while still constantly accessing the disks). Is that normal? IMO, snapraid might crash if there is an error somewhere, but it shouldn't take the whole system down...


    I'm wondering whether I should rebuild the whole parity disk...

    • Official Post

    You didn't use the exact config file I posted, did you?


    • Official Post

    I figured that was the default for the first array.

    The UUID in the filename is randomly generated. Did you use the actual filename in your /etc/snapraid directory?


    • Official Post

    Not sure why your system is crashing then. Maybe it does make sense to create a new parity. But I don't use snapraid myself.


  • OK, it's been a while, but now I have installed the new system with two Snapraid arrays. Manual snapraid sync and check jobs work fine so far (without rebuilding the cache; the errors really seem to have come from the old system).


    BTW, in the new OMV installation, the config file for the first array is again called omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf, so it was no accident that it was the same for both of us.


    Now I have three remaining questions:


    - If I were to run scheduled tasks using the config files for the different arrays, as in

    snapraid -c /etc/snapraid/omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf sync

    this should work, right?


    - About the "scrub" configuration in the "Snapraid settings": Should I disable that as well (as with the scheduled diff)?


    - The "send mail" checkbox in the settings is checked, but I haven't received any mails after the manual syncs / checks. Do I have to do something else? I have setup my mail account in "System Notification"
