[HowTo] SnapRAID in OMV

  • Here's the script I was using.

    Thanks buddy, will test it tonight.


    If it works well, I guess we (actually you, if it's your work) could make a merge request to have it updated for everybody.


    EDIT: Looks like it's just an older version of the official one, no edits at all. It exactly matches this version of the script. Are you sure about where you got yours?


    EDIT2:
    I am currently testing this script, which seems to be less spammy and a bit more structured, but it does not terminate correctly and keeps running even after it's done. I'm no good at bash, so I don't know how to fix it.
    The script comes from here; I only removed the 'wait' instructions, since otherwise it would not run on Debian 10/OMV5 (as advised by the author).
    If you want to test it, also install python-markdown.

    The script comes from here; I only removed the 'wait' instructions, since otherwise it would not run on Debian 10/OMV5 (as advised by the author).
    If you want to test it, also install python-markdown.


    If you read the posts at the bottom of the page where the script is, you'll see there are two reports (one of them mine) of it hanging at the end of the run on: 'python -m markdown /tmp/snapRAID.out'


    Today it ran fine, but yesterday it hung.
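
    If the hang really is in that markdown step, a blunt workaround might be to wrap just that one call in coreutils 'timeout'. An untested sketch (the 60-second limit is an arbitrary choice, and the .html destination is made up; adapt it to wherever the script sends the converted output):

    # Fall back to the plain text report if the markdown conversion stalls
    if ! timeout 60 python -m markdown /tmp/snapRAID.out > /tmp/snapRAID.html; then
        cp /tmp/snapRAID.out /tmp/snapRAID.html
    fi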

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

    If you read the posts at the bottom of the page where the script is, you'll see there are two reports (one of them mine) of it hanging at the end of the run on: 'python -m markdown /tmp/snapRAID.out'


    Today it ran fine, but yesterday it hung.

    lol, I did, and I am the other user, the one with the other dog in the picture.


    I have found a good script which is not spammy at all. It comes from here, but these are the changes I made:


    - Adapted it for standard parity (the original script is made for split parity)
    - Integrated user sburke's changes to make it work on Debian 10 (the original script does not work on Debian 10)


    I've tested it in my OMV5 VM and it works fine.


    NOTES

    - You can configure sync rules, but by default it always forces a sync.
    - It can pause containers so they don't interfere with the run, and restart them when finished. I disabled this, but if you want to use the feature, change MANAGE_SERVICES=0 to 1 and list all your containers in SERVICES= (see the sketch below).
    - In OMV5 the script itself sends emails (you can specify your address in EMAIL_ADDRESS=). You'll probably want to disable either this feature or the email from the scheduled job in OMV, or you'll get duplicate reports.
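
    For reference, the relevant bits of the script's configuration section would look something like this. The variable names come from the script; the values, the container names, and the space-separated format are my own examples:

    EMAIL_ADDRESS="you@example.com"   # where the script sends its report
    MANAGE_SERVICES=1                 # 1 = pause the listed services during the run
    SERVICES="plex transmission"      # container names to pause (examples)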




    The output itself is quite nice. I've kept the whole output; it's not syncing any personal data.

    • Official Post

    The only reason I could come up with for using a two-disk SNAPRAID would be bit rot protection. Also, SNAPRAID with a simple filesystem like EXT4 will work reasonably well with USB-connected drives.


    (Setting bit rot protection aside, which is a big deal in itself, creating a simple mirror with Rsync provides roughly the same benefits.)
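
    (For context, a mirror like that is essentially a one-liner. Something like the following, with placeholder paths; --delete makes the destination an exact mirror, so deletions propagate too.)

    rsync -a --delete /srv/data/ /srv/mirror/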

  • I have set up just this kind of backup strategy:

    • one mirrored backup using Rsync.
    • two disks set up in SnapRAID: one of them data, the other parity, and both of them content (a config sketch follows below).
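
    In raw snapraid.conf terms (the OMV plugin generates this file from its GUI settings), that layout comes out roughly as follows; the paths are placeholders:

    # One data disk, one parity disk, a content file on both
    parity  /srv/dev-disk-by-label-PARITY/snapraid.parity
    content /srv/dev-disk-by-label-PARITY/snapraid.content
    content /srv/dev-disk-by-label-DATA/snapraid.content
    data d1 /srv/dev-disk-by-label-DATA/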

    It seemed logical to me, as I can easily fit what I am currently doing on one 8TB disk.
    Everything worked well. I have an automatic Rsync scheduled twice a week for the mirrored backup, and I have run a SnapRAID sync once. Everything went well, except for two details (maybe three), so I am looking for some advice/clarification before I proceed into the unknown.

    • My SMART notified me that my Rsync mirror had bad sectors. In the process of unreferencing/unmounting the disk for replacement I discovered that...
    • I had inadvertently created an exclude rule (AppData) referencing the Rsync mirror disk instead of the SnapRAID Data disk.
    • I have been adding a great number of photos to my data disk, as well as deleting duplicates. Since the bad sectors showed up I stopped the Rsync (waiting on a replacement disk), and I have stopped anything SnapRAID. Even though I have a full backup on a second machine, I want to wait to do anything until I get the new mirror backup disk installed and backed up.

    My main question is how do I reset everything to that initial SnapRAID sync? Is that necessary? Is it possible? Without shedding blood? I would like a little nudge as to the right way to proceed. Thanks.

  • I checked my scheduled tasks and the report I attached to this thread was the report I was using. I think I was using the script unmodified but didn't remember because it has been so long since I used it.


    Looking at the script, all the output from the sync and scrub commands is directed to the $TMP_OUTPUT variable, which is included in the email. It might be possible to include only partial command output in the email, but that's beyond my bash capabilities.
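
    If anyone wants to try, it's probably a one-line change at the point where the script sends the mail. A sketch, assuming $TMP_OUTPUT holds the path of the output file and the script mails it with mail(1):

    # Mail only the last 100 lines of the run instead of the whole output
    tail -n 100 "$TMP_OUTPUT" | mail -s "SnapRAID report" "$EMAIL_ADDRESS"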

    • Official Post

    I had inadvertently created an exclude rule (AppData) referencing the Rsync mirror disk instead of the SnapRAID Data disk.

    You have me here - I'm not sure I understand what happened. Are files missing from your Rsync destination? (I suppose you want those files?)

    I have been adding a great number of photos to my data disk, as well as deleting duplicates. Since the bad sectors showed up I stopped the Rsync (waiting on a replacement disk), and I have stopped anything SnapRAID. Even though I have a full backup on a second machine, I want to wait to do anything until I get the new mirror backup disk installed and backed up.

    On your Rsync disks, I'm not sure if you're talking about the source disk or the destination. If your source disk is still good, you're good to go. So, to speculate: was it your source disk that started to fail?


    It seems as if you're asking about restoring the source disk with SNAPRAID. I haven't done a full disk restore before; I haven't had to. With multiple backups, after I do significant work, I manually run a backup to ensure that new data is in at least two places. (If I forget, the automated processes take care of it.)

    My main question is how do I reset everything to that initial SnapRAID sync? Is that necessary? Is it possible? Without shedding blood? I would like a little nudge as to the right way to proceed.

    There's a recovery process in the SNAPRAID MANUAL, in section 4.4. The questions above would only make me ask you more questions.
    - If you have at least one good backup, why would you want to do a SNAPRAID restore?
    - Restored data would only be as current as the last SYNC operation. (You would know, better than I, if there's a compelling reason to go back to that state of your data.) It would seem as if you'd be losing work in any case.


    If I were in your place:
    If I had at least one good, clean backup on another platform, and if there were a compelling reason, I might try a SNAPRAID restore. In any case, my inclination would be to take the lowest-risk, safest path possible. That means I wouldn't do anything with, or try to reuse or fix, the disk with bad sectors before the replacement disk arrives.


    When you have your replacement disk and it's restored from your backup, even if the backup is a bit out of date, you could mount the failing disk and see what missing files (new work) can be rescued from it using something like Midnight Commander.
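
    For what it's worth, the disk-recovery procedure in that section of the manual boils down to swapping in the replacement disk, pointing the config at it, and running something like this (the disk name d1 is a placeholder; follow the manual for the exact steps):

    snapraid -d d1 -l fix.log fix    # rebuild the named disk from parity
    snapraid -d d1 -a check          # verify the rebuilt files (data hashes only)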

    • Official Post

    I’m sorry. I guess I tried to cram too much into one post. Edit: Looking at what’s below, I’m afraid I’m about to do it again.


    I’m using three SATA disks but only two are in SnapRAID: one Data and the other Parity, both Content. Both of those are physically okay and SnapRAID appeared to run fine the two times I ran sync and scrub about a week ago. The third disk is a mirrored backup of the Data drive just mentioned, via Rsync, per your guide on p. 64. It is not part of the SnapRAID array.


    The third disk showed up with bad sectors about a week ago, so today I am going to swap it out for the new disk that just arrived and Rsync it from the first drive. All is well up to this point. No data is really in jeopardy. I even have a remote backup on another machine.


    When I tried to unmount the bad mirrored disk, it wouldn’t unmount. I discovered that I had inadvertently created a SnapRAID rule pointing to that disk.


    What I am wondering about is how to start sync, scrub, etc., what to expect in the way of error output, and what to do with it. I have corrected the exclusion rule (AppData) to point to the Data disk.
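
    (From what I gather, in the underlying snapraid.conf, which the OMV plugin writes from the rule settings, the corrected rule amounts to a pattern along these lines; the pattern here is just my understanding of it:)

    # Exclude the AppData tree at the root of the data disk from parity
    exclude /AppData/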


    Combine that with the fact that I have been adding, deleting, renaming, and moving tons of photo files on my Data disk since I last ran a sync.


    Despite tons of reading, both at SnapRAID and on this forum, I’m not sure of the steps to begin sync, scrub, fix, etc. again.

    • Official Post

    When I tried to unmount the bad mirrored disk, it wouldn’t unmount. I discovered that I had inadvertently created a SnapRAID rule pointing to that disk.

    The SNAPRAID rule is probably why the RSYNC disk wouldn't unmount. Clear the rule and it should unmount. If not, look in Filesystems to see if the disk is "referenced".
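
    If it still won't unmount after that, you can also check from the command line what's holding it busy, with something like this (the mount path is an example):

    # List any processes keeping the mount busy
    fuser -vm /srv/dev-disk-by-label-MIRROR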

    Combine that with the fact that I have been adding, deleting, renaming, and moving tons of photo files on my Data disk since I last ran a sync.

    You're still good, if your SNAPRAID data disk is OK. The SYNC updates only the parity drive and the content file(s), and it becomes (in a sense) your new backup at the completion of the SYNC operation.


    If you're not doing a SYNC manually on a regular basis, you might think about automating a SYNC command to run once or twice a month. It can be done in Scheduled Tasks with something like the following command: snapraid touch; snapraid sync -l snapsync.log
    But note there are other housekeeping commands you should consider running before the next SYNC command.


    The order I run is:
    snapraid touch; snapraid sync -l snapsync.log


    snapraid -p 100 -o 13 scrub
    snapraid -e fix


    The first command does a touch, which fixes the annoying zero sub-second timestamp warning. It then runs a sync and directs output to a log named snapsync.log.


    A few days before the next SYNC operation, I run the scrub.
    Then the fix command is run one day after the scrub, to fix issues (bit rot, etc.) found by the scrub.
    In Scheduled Tasks, all of them are set up to send an e-mail of their output.
    (*And note that, even with the above, you can manually run a new SYNC operation after doing a lot of work that you want to ensure is backed up.*)
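
    Put into Scheduled Tasks / cron terms, that rhythm might look something like the following. The dates and times are only an illustration of the ordering (sync early in the month, scrub a few days before the next sync, fix the day after the scrub):

    # m h dom mon dow  command (illustrative monthly schedule)
    0 3 1  * *  snapraid touch; snapraid sync -l snapsync.log
    0 3 26 * *  snapraid -p 100 -o 13 scrub
    0 3 27 * *  snapraid -e fix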


    That's how I do it. Others may have other ideas.

  • I think if you update that exclusion rule, you can remove the bad disk and replace it. SnapRAID should still function normally with the other two drives.


    I would say you have a rather odd setup. Why bother using parity when you only have 1 data disk? Do you plan to add more disks in the future?

    • Official Post

    Why bother using parity when you only have 1 data disk? Do you plan to add more disks in the future?

    The SnapRAID is to protect against data corruption and the Rsync is for a true backup. I have 8TB disks and they are running at about 23% capacity. I will probably add disks some day.

  • I checked my scheduled tasks and the report I attached to this thread was the report I was using. I think I was using the script unmodified but didn't remember because it has been so long since I used it.


    Looking at the script, all the output from the sync and scrub commands is directed to the $TMP_OUTPUT variable, which is included in the email. It might be possible to include only partial command output in the email, but that's beyond my bash capabilities.

    Thanks, but don't worry; I found a better script in the meantime. You can get it in my previous posts. It works quite well.

    OMV BUILD - MY NAS KILLER - OMV 6.x + omvextrasorg (updated automatically every week)

    NAS Specs: Core i3-8300 - ASRock H370M-ITX/ac - 16GB RAM - Sandisk Ultra Flair 32GB (OMV), 256GB NVME SSD (Docker Apps), Several HDDs (Data) w/ SnapRAID - Fractal Design Node 304 - Be quiet! Pure Power 11 350W


    My all-in-one SnapRAID script!

    • Official Post

    @crashtest Thanks for the information, especially the order and explanation of the commands.


    Regarding not being able to unmount the failed destination drive, I knew it was a reference issue, but I “knew” nothing was referenced to it. Finally I picked up a stray comment somewhere about SnapRAID rules. Sure enough, I had set a rule to exclude the AppData folder on the SnapRAID Data disk, but had mistakenly written the Rsync destination disk into the rule. The first sync I did showed a bunch of AppData lines in the output. I should have suspected something from the get-go, but I didn’t have the experience to know what it meant.


    @jollyrogr At post 552 of this thread @crashtest stated a two-disk SnapRAID was possible, so I tried it. Here was my thinking:

    • I want bit rot protection.
    • I only have three 8TB disks.
    • I’m not currently running short on space, plus I didn’t want (or need) UnionFS to complicate the process.
    • I don’t want to forfeit my full-disk mirror via Rsync.
  • It seems that since one of the latest OMV updates (I am on 4.1.35 at the moment) I am not getting email notifications for the scheduled SnapRAID sync/diff. Email notifications are enabled in the plugin settings, and I am getting emails about available updates, so the notification function as such is working normally. Has anyone else noticed this behaviour?

    SuperMicro CSE-825, X11SSH-F, Xeon E3-1240v6, 32 GB ECC RAM, LSI 9211-8i HBA controller, 2x 8 TB, 1x 4 TB, 1x3TB, MergerFS+SnapRAID

    Powered by Proxmox VE

  • I have installed SnapRAID on OMV5 with UnionFS, where I put 3 disks in a pool and 1 disk as parity.

    I see a lot to configure in the GUI and I see that I can schedule a diff.


    But there is no real info on what to do first. I was/am expecting that there would also be a scheduled sync and scrub, or am I missing something?


    It seems that I first have to move my 17TB of data before I can do a sync.


    Is there some 'what to do next' after it has been configured?

    5x HP Microserver Gen8, 4x with OMV. (3x OMV4 and 1x OMV5)

    (Busy with migrating to 1 NAS) Puffer: 4x3TB RAID5; Nemo:4x3TB RAID5; Shark: 4x2TB RAID5 and Whale: 4x10TB UNIONFS with SNAPRAID

  • You should probably read the SnapRAID manual and FAQ before trying to use the program.


    https://www.snapraid.it/

    Been there, done that.


    I know what I can do, but OMV5 has a plugin, and I just need to know the scope of what the plugin can do. It is not stated (or I could not find it) what I need to do after I manually sync/scrub and set up a diff.


    That is within the scope of OMV5 SnapRAID usage, not the scope of the SnapRAID FAQ.


    So in short:

    Do I need to do things manually, or does the OMV5 SnapRAID plugin have everything in the GUI to keep it running unattended, or does it need weekly manual maintenance? (The diff, at least, can be scheduled.)
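
    For example, would I need to set up my own scheduled script along these lines (just a sketch; as I understand it, recent SnapRAID versions exit 'diff' with code 2 when a sync is required), or does the plugin already cover this?

    # Unattended maintenance sketch: sync only when diff reports changes
    snapraid diff
    if [ $? -eq 2 ]; then
        snapraid sync
        snapraid -p 12 -o 10 scrub   # then scrub a slice of the older blocks
    fi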

    5x HP Microserver Gen8, 4x with OMV. (3x OMV4 and 1x OMV5)

    (Busy with migrating to 1 NAS) Puffer: 4x3TB RAID5; Nemo:4x3TB RAID5; Shark: 4x2TB RAID5 and Whale: 4x10TB UNIONFS with SNAPRAID
