[HowTo] SnapRAID in OMV

  • Yes, it only runs at boot, but the script does not exit, so this is what you want. The script does the scheduling for you, and that is what the settings in the plugin configure.

    This script just needs to be started. It will take care of the rest.

    Sorry, but this is confusing. My OMV is already running and I have no plans to reboot. I created a job to run weekly for "/usr/sbin/omv-snapraid-diff"
    Is this all I need to do to not have to reboot?
    Thanks

    • Official Post

    Sorry, but this is confusing. My OMV is already running and I have no plans to reboot. I created a job to run weekly for "/usr/sbin/omv-snapraid-diff"
    Is this all I need to do to not have to reboot?

    I had it wrong. I thought the script just needed to be started once. This is why I said it should be started at boot. This didn't mean you needed to reboot.


    But after looking at the script more, you really need to run the job daily.
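
    Something along these lines in a system cron file would do it (the time of day is just an example, and the scheduled jobs tab in the web UI effectively creates an equivalent cron entry for you):

    # /etc/cron.d/omv-snapraid-diff -- illustrative only
    # m  h  dom mon dow  user  command
    30   2  *   *   *    root  /usr/sbin/omv-snapraid-diff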


  • Great, thanks.


    Also I am on Stoneburner and want to upgrade to Snapraid v11. I was thinking I could just copy the v11 binary to /usr/bin/snapraid and that should be it. Am I mistaken?

    • Official Post

    Also I am on Stoneburner and want to upgrade to Snapraid v11. I was thinking I could just copy the v11 binary to /usr/bin/snapraid and that should be it. Am I mistaken?

    No idea. It is compiled on OMV 3.x/Debian Jessie and you want to use it on OMV 2.x/Debian Wheezy. I just put the amd64 version in the stoneburner testing repo.
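
    If you do try just dropping the binary in, a quick check along these lines (run on the Wheezy box; the path is an assumption) will at least tell you whether the Jessie build wants a newer glibc than Wheezy ships:

    # highest glibc symbol version the Jessie-built binary requires
    objdump -T /usr/bin/snapraid | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n1
    # glibc version actually available on Wheezy
    ldd --version | head -n1
    # confirm which snapraid version ends up running
    snapraid --version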


  • Ok, so I took the time to read through this whole thread to try and grasp all the knowledge I could before posting some dumb questions already answered. Just hoping to fully iron this out and clear up an issue I think I have using SR and MergerFS.


    • I should be setting up a daily scheduled task to run /usr/sbin/omv-snapraid-diff? - this will run the daily sync, and scrub every X days as set by "Scrub Frequency", correct?
    • Delete Threshold - is there any best practice on what to set this as? Or some rough gauge for this? And should it be set other than 0 for "production" use once I've finished doing initial syncs and testing?
    • What does the Auto Save option do? I'm not sure I understand the idea of saving the state? Is this in case the process doesn't finish it saves a state to resume from?
    • I'm using UFS-MergerFS to pool 3 drives into a single mount point (Disk1-3 (4TB each), pooled into Storage1, and Disk 4 is for parity) - being that I'm using a UFS, will there be an issue with 3 copies of the content files existing? Since it pools storage and presents it as a single point, I'd imagine there may be an issue there, no?
    • Official Post

    I should be setting up a daily scheduled task to run /usr/sbin/omv-snapraid-diff? - this will run the daily sync, and scrub every X days as set by "Scrub Frequency", correct?

    yes


    Delete Threshold - is there any best practice on what to set this as? Or some rough gauge for this? And should it be set other than 0 for "production" use once I've finished doing initial syncs and testing?

    I don't use snapraid but I would use the default. Experiment with it once you get your system working the way you want.


    What does the Auto Save option do? I'm not sure I understand the idea of saving the state? Is this in case the process doesn't finish it saves a state to resume from?


    Yes, it saves the state to resume from if your machine crashes, so it doesn't have to start over.

    I'm using UFS-MergerFS to pool 3 drives into a single mount point (Disk1-3 (4TB each), pooled into Storage1, and Disk 4 is for parity) - being that I'm using a UFS, will there be an issue with 3 copies of the content files existing? Since it pools storage and presents it as a single point, I'd imagine there may be an issue there, no?

    There might be an issue viewing the content files, but you really have no reason to look at a content file from the pool. Snapraid will look at each content file from the individual mountpoints since you shouldn't present the pool to snapraid. So, snapraid won't care that the files are being pooled.
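
    Put another way, the config points snapraid at the individual disks, never at the pool; roughly like this (paths are illustrative, not necessarily what the plugin writes):

    # snapraid.conf (illustrative fragment)
    parity  /media/disk4/snapraid.parity
    content /media/disk1/snapraid.content
    content /media/disk2/snapraid.content
    content /media/disk3/snapraid.content
    disk d1 /media/disk1
    disk d2 /media/disk2
    disk d3 /media/disk3
    # the mergerfs pool mountpoint (e.g. /media/storage1) is never referenced here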


  • @ryecoaaron thanks for all the replies. That's helpful to understand fully the details. Thanks for making such an awesome plugin and continuing to update/support it even though you don't use it! ;)


    @Solo0815 thanks for helping to support the plugin with your intense knowledge of snapRAID. I'm a recent convert from FreeNAS and the ZFS zealots, and happy to find a less intense alternative.


    I guess the only question now then is more about MergerFS and how it handles duplicate file presence on drives when pooling them together. I'll do some digging/search in some other areas to better understand. Thanks a bunch guys and keep up the awesome work!!

  • Ok, not totally out of the woods.


    I understand snapRAID should really only be used for data that doesn't change often. For the most part the stored data doesn't, but one directory is mounted on a docker host for persistent storage. It takes things like logs, and files in the running docker containers keep changing, which causes changes on the local filesystem as well. When I tried to finish running a full sync, I got messages about unexpected time changes on files, telling me to re-run when they are not being modified, etc., and a "WARNING! You cannot modify files during a sync."


    Am I SOL for trying to use snapRAID if I'm going to have this one location? Or should I just be creating an exclusion rule to ignore that location? I'd like to still do something to help protect that directory, as honestly that's one of the most important things on this machine that I'd need a copy of right away if it fails (a day or two old isn't a problem, but I don't want to have to reconfigure ALL my app settings).
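
    (By an exclusion rule I mean something roughly like this in snapraid.conf - the paths are just placeholders for wherever my docker data actually lives:)

    # skip the constantly-changing docker persistent-storage directory
    exclude /docker/
    # optionally skip log files everywhere as well
    exclude *.log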

  • Thanks. Anyone else have a similar situation using something that works well? Maybe just an rsync job at disk level to make 2 more copies of the data on the physical disks? Or create some sort of archived backup instead?
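
    By an rsync job at disk level I'm picturing something like this (source and destination paths are placeholders for my actual disk mounts):

    # nightly copy of the docker data from disk1 onto disk2, pruning deleted files
    rsync -a --delete /media/disk1/docker/ /media/disk2/docker-backup/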

    Sorry, one more side question as I'm finally done with my full sync - where does the email address To field get populated from? I see the option to "Send Mail" when the script runs, but who does it actually send to? I also saw the send-output option when scheduling the task, but that would go to the user running the script, which I understood to be root. Should I be modifying something here?

    • Official Post

    where does the email address To field get populated from? I see the option to "Send Mail" when the script runs, but who does it actually send to?

    It uses the recipients in the notifications tab.


  • Hello,


    thanks for the guide!


    I'd have two suggestions for that:
    1) add a note about adding a cron job that does the sync and scrub.
    (I'd even prefer it if the plugin did this. One feels protected, but only with this cron job is one really protected.)
    2) add a note explaining that we are not using SnapRAID's own pooling (and why), but mergerfs instead. Maybe also suggest settings for mergerfs.


    Regards,
    Hendrik

    • Official Post

    1) add a note about adding a cron job that does the sync and scrub.
    (I'd even prefer it if the plugin did this. One feels protected, but only with this cron job is one really protected.)

    I added a button to create the scheduled job. All you have to select is if you want emails or not.


    2) add a note explaining that we are not using SnapRAID's own pooling (and why), but mergerfs instead. Maybe also suggest settings for mergerfs.

    I removed the snapraid pooling function. I didn't add a note about mergerfs/unionfilesystem because not everyone wants/needs to pool their drives, and I didn't know what to write. Open to suggestions for the info tab.
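
    For what it's worth, a typical mergerfs mount for this kind of setup looks something like the fstab line below; the options are just one common starting point, not something the plugin sets:

    # pool three data disks, create new files on the drive with the most free space
    /media/disk1:/media/disk2:/media/disk3  /media/storage1  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs,minfreespace=20G  0 0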


    version 3.5 of the plugin in the repo now.


  • 3 questions in one here:


    #1
    I recall an earlier thread that I read through in here that outlined an issue whereby the report from the daily job was showing an increment of 2 in the days since last scrub calculation. I'm seeing this same behavior again and was wondering if this is a new bug, regression, or something else. Emailed log output to show:
    SnapRAID SYNC Job finished - Sun Jan 8 00:00:36 EST 2017
    ----------------------------------------
    SnapRAID SCRUB-Cycle count (7) not met (2). No scrub was run. - Sun Jan 8 00:00:36 EST 2017


    SnapRAID SYNC Job finished - Mon Jan 9 00:00:04 EST 2017
    ----------------------------------------
    SnapRAID SCRUB-Cycle count (7) not met (4). No scrub was run. - Mon Jan 9 00:00:04 EST 2017


    SnapRAID SYNC Job finished - Tue Jan 10 00:00:06 EST 2017
    ----------------------------------------
    SnapRAID SCRUB-Cycle count (7) not met (6). No scrub was run. - Tue Jan 10 00:00:06 EST 2017
    ----------------------------------------------------------------------------------------------------------
    #2
    I'm seeing at the end of my output the below. What exactly is the wait time relevant to? Disk wait time to read blocks or fix blocks? And is the file errors output telling me how many file errors were repaired, identified, missed?


    100% completed, 457228 MB accessed in 0:40
    disk1 0% |
    disk2 0% |
    disk3 89% | ******************************************************
    parity 0% |
    raid 5% | ***
    hash 4% | **
    sched 0% |
    misc 0% |
    |______________________________________________________________
    wait time (total, less is better)


    14677 file errors
    0 io errors
    0 data errors
    ----------------------------------------------------------------------------------------------------------
    #3
    I'm seeing some EXTREMELY fast Sync job times. As in 1 second fast. Is there something improperly set, or am I missing that running a DIFF first realistically will reduce the time to complete the sync to 1 second?


    Changes detected [A-1071,D-70,M-0,C-0,U-6938] -> there are deleted files (70) but delete threshold (0) is disabled. Running SYNC Command
    SnapRAID SYNC Job started - Wed Jan 11 00:00:06 EST 2017
    ----------------------------------------
    Self test...
    Loading state from /media/97974d61-46e4-43fa-a535-54a31b4faec2/snapraid.content...
    Scanning disk disk1...
    Scanning disk disk2...
    Scanning disk disk3...
    SnapRAID SYNC Job finished - Wed Jan 11 00:00:07 EST 2017

    • Official Post

    I'm seeing this same behavior again and was wondering if this is a new bug, regression, or something else.

    Didn't remember or didn't know about that. Looking at the script, there are two lines that increment the counter. I don't understand why line 416 is in there since it is in an IF statement. I would assume the counter should increment every day without an IF statement. Try removing line 416 and see if it fixes it.
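
    Roughly speaking, a double count happens when a script does something like this (the variable and file names below are made up for illustration, not copied from the actual script):

    # once-per-run increment of the scrub counter
    SCRUB_COUNT=$(cat "$SCRUB_COUNT_FILE")
    SCRUB_COUNT=$((SCRUB_COUNT + 1))
    if [ "$SYNC_WAS_RUN" = "true" ]; then
        # a second increment inside an IF like this adds 2 on the days it fires
        SCRUB_COUNT=$((SCRUB_COUNT + 1))
    fi
    echo "$SCRUB_COUNT" > "$SCRUB_COUNT_FILE"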


    2 - No idea about this


    3 - The logs say sync is starting. The settings must not be right for that.


  • Following up here in case it can help others, and to ask if anyone else might be able to help on the others:


    • @ryecoaaron Changing the line you mentioned (416) fixed the script. It seemed to increment by only 1 after that change. Is this something that needs to be pushed to the release? Happy to try and test/validate again if there is value in it.
    • Anyone able to help explain what that output is illustrating? I now see this at the end of the Sync output as well, and would like to understand what this wait time readout is relevant to. Will be searching the SnapRAID site as well.
    • Sync seemed to have a problem when I tried to run it manually. It listed a bunch of different files that were "missing" and should be fixed first; alternatively, if the changes were expected, the sync could be force-run. After I manually forced a run, it completed. Once that completion cycle worked, the following evening the script ran fine without an issue and produced a much better output report. So if anyone is seeing a similar issue, try running the sync manually and adhere to the warning messages. ;) The run wasn't extremely long, only about 2 minutes, but it actually showed output of percentage processed etc. as well.
    • Delete Threshold - seems to be a safety mechanism. After doing some searching, I've found others outlining what the purpose of the delete threshold in this script is. I saw a post from the initial author (of the script I believe this one is based on) explaining that this function helps detect large delete problems and/or, more drastically, a missing disk. So it seems I may need to set the threshold a bit higher than anticipated to start with, or adjust it after seeing a few more days of the diff report and getting a feel for my "average" acceptable range. And keep in mind that if a nightly report shows a diff above the threshold, I will need to manually run a Sync the next day. Maybe a quick script locally that I can kick off via Automator to start that Sync instead of logging in to the console, finding it and starting it, etc. - a rough sketch of what I mean follows below the quoted output. If anyone has already scripted this, please do let me know! (Still learning my scripting)
      Number of deleted files (71) exceeded threshold (25). NOT proceeding with sync job.
      Please run sync manually if this is not an error condition
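
      Something along these lines is what I have in mind (the hostname is a placeholder):

      #!/bin/sh
      # kick off a manual sync on the NAS from my desktop instead of hunting it down in the console
      ssh root@omv-nas "snapraid sync"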
    • Official Post

    Is this something that needs to be pushed to the release?

    I would think so. If we can get one more person to test it, I will remove that line.

