SnapRaid "z-parity" instead of 3-parity

  • Hello,


    I apologize in advance if my question has been answered in some form before, or is not located in the correct section.


    The problem I am trying to solve is this: I have an old Supermicro chassis with an AMD Opteron processor, and this CPU does not support the SSSE3 instructions. That is a major problem with 3 or more parity drives, as SnapRaid uses SSSE3 to accelerate the parity calculation. At the moment my system with 20 data drives and 4 parity HDDs takes 4 days to run scrubs etc...


    There is one option for poor schmucks like me: get down to 3 parity drives and select "z-parity" for the 3rd one. The "z-parity" option was added to SnapRaid specifically so that old CPUs can compute triple parity faster.
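    For reference, the parity section of snapraid.conf would end up looking something like this with z-parity on the 3rd drive (the mount paths are only placeholders for this example, not my actual drives):

        # snapraid.conf parity section, z-parity instead of 3-parity
        # (example paths, adjust to your own parity drives)
        parity   /srv/dev-disk-by-label-parity1/snapraid.parity
        2-parity /srv/dev-disk-by-label-parity2/snapraid.2-parity
        z-parity /srv/dev-disk-by-label-parity3/snapraid.z-parity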


    I have not seen this option available in the OMV SnapRaid plugin (maybe I missed it), but I am sure it would be a bit of a headache to implement only to support ancient hardware. Although I wouldn't complain if someone added this to the plug-in.


    So my question is: how can I go about modifying snapraid.conf without making a mess of the plug-in database, and how do I make sure my change won't be reverted by OMV behind my back?


    Thank you!

    Looking further into the shell script in charge of creating the SnapRAID configuration file, I didn't see anything about z-parity, so I am planning to modify the script as follows on my own server. This is simpler for me to do this way, as I am familiar with shell scripting but not so much with whatever language is used for the GUI. Also, z-parity is supposed to be faster than 3-parity no matter what, so I figured I may as well force it when there are only 3 parity drives in total.


    path: /usr/share/openmediavault/mkconf/snapraid
    The modification is done in the IF statement creating the lines for the parity files (the line numbers below should be correct).
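    The idea of the change is roughly this (only a sketch of my intent, not the actual OMV code; the names INDEX, PARITY_COUNT and PARITY_PATH are made up for illustration):

        # Sketch: when emitting the parity lines, force "z-parity" for the
        # 3rd drive whenever there are exactly 3 parity drives in total.
        if [ "${INDEX}" -eq 1 ]; then
            echo "parity ${PARITY_PATH}"
        elif [ "${INDEX}" -eq 3 ] && [ "${PARITY_COUNT}" -eq 3 ]; then
            echo "z-parity ${PARITY_PATH}"
        else
            echo "${INDEX}-parity ${PARITY_PATH}"
        fi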


    For reference, the original version of the file used on my system is as follows:



    I have not tested it yet, so if someone more familiar with the system has a remark/correction etc., let me know.


    Usual disclaimer as it is not tested: I can't guarantee that it won't wreck your machine... Hell, I can't guarantee it won't wreck MY server.

    Edited once, last by Husker () for the following reason: Issue with script edit, will be updated later

    • Official Post

    I have not tested it yet, so if someone more familiar with the system has a remark/correction etc., let me know.
    Usual disclaimer as it is not tested: I can't guarantee that it won't wreck your machine... Hell, I can't guarantee it won't wreck MY server.

    At least that's a fair warning.

    At the moment my system with 20 data drives and 4 parity HDDs takes 4 days to run scrubs etc...

    Since I'm sure you've been over the options and have probably tested a few, perhaps you could help me understand why you went this route? What would be wrong with breaking the 20 data to 4 parity down into four sets of 5 data to 1 parity? The 20-to-4 setup would be more fault tolerant, allowing more than 1 failed drive, but scrubs would have to be faster in multiple 5-to-1 setups, where scrubs might be set to run and finish after hours. (Or so it would seem.)
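    For illustration, each set would get its own config file listing its 5 data drives and 1 parity drive, and you'd point SnapRAID at each one with its -c option (the file names and paths here are made up, not an OMV convention):

        # Four independent SnapRAID sets, one config file per set
        snapraid -c /etc/snapraid-set1.conf scrub
        snapraid -c /etc/snapraid-set2.conf scrub
        snapraid -c /etc/snapraid-set3.conf scrub
        snapraid -c /etc/snapraid-set4.conf scrub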

  • Hey,


    I simply didn't think of that possibility; you are making a good point. My main concern originally was not to stray far from the comfort of the GUI and not to have too many manual changes... I really wanted one tidy and simple system with its glorious 24 drives. But I guess that's a reality check for me.


    Following up on your message, I looked around a bit and found a few threads on the SnapRAID forum discussing this:
    https://sourceforge.net/p/snap…/1677233/thread/58d5a134/
    https://sourceforge.net/p/snap…/1677233/thread/15ecf531/


    At the moment I am going to try this setup with 3 parity drives, the last one using z-parity, and see how it performs. If it's not good enough, I'll look at just losing the SnapRAID plugin and running multiple config files/process instances as you suggested, staggered along the lines of the sketch below. I'll keep updating the thread with my findings.
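    If I go the multi-config route, staggering the scrubs so each set runs on a different night could look roughly like this (a crontab sketch with made-up config paths; the -p/-o values are just example scrub percentages and block ages):

        # /etc/crontab: scrub one set per night at 02:00,
        # 25% of each array, only blocks older than 7 days
        0 2 * * 1 root snapraid -c /etc/snapraid-set1.conf scrub -p 25 -o 7
        0 2 * * 2 root snapraid -c /etc/snapraid-set2.conf scrub -p 25 -o 7
        0 2 * * 3 root snapraid -c /etc/snapraid-set3.conf scrub -p 25 -o 7
        0 2 * * 4 root snapraid -c /etc/snapraid-set4.conf scrub -p 25 -o 7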


    Thank you for throwing some ideas in there, I appreciated it!
