[HowTo] SnapRAID in OMV
-
-
Here's the script I was using.
Thanks buddy, will test it tonight.
If it works well, I guess we (actually you, if it's your work) could make a merge request to have it updated for everybody.
EDIT: Looks like it's just an older version of the official one, no edits at all. It exactly matches this version of the script. Are you sure you grabbed the right one?
EDIT2:
I am currently testing this script, which seems to be less spammy and a bit more structured, but it does not terminate correctly and keeps running even after it's done. I'm no good with bash, so I don't know how to fix it.
Script comes from here; I only removed the 'wait' instructions, otherwise it would not run on Debian 10/OMV5 (as advised by the author).
If you want to test it, also install python-markdown.
-
OK. It's possible I grabbed the wrong file. I'll try to look again today.
-
-
Script comes from here; I only removed the 'wait' instructions, otherwise it would not run on Debian 10/OMV5 (as advised by the author).
If you want to test it, also install python-markdown.
If you read the posts at the bottom of the page where the script is, you'll see there are two reports (one mine) of it hanging at the end of the run on: 'python -m markdown /tmp/snapRAID.out'. Today it ran fine, but yesterday it hung.
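For what it's worth, one idea would be to guard that step with a timeout so a hang can't keep the whole job running; this is just a sketch, and the 60-second limit and the output file name are assumptions, not part of the actual script:
Code
# wrap the markdown conversion in a timeout so the job can't hang here
# (the 60s limit and the .html output name are assumptions)
timeout 60 python -m markdown /tmp/snapRAID.out > /tmp/snapRAID.html \
    || echo "markdown conversion failed or timed out" >&2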
-
If you read the posts at the bottom of the page where the script is you'll see there are two reports (one mine) of it hanging at the end of the run on: 'python -m markdown /tmp/snapRAID.out'
Today it ran fine, but yesterday it hung.
lol, I did, and I am the other user, the one with another dog in the picture.
I have found a good script which is not spammy at all. It comes from here, but these are the changes that I made:
- Adapted for standard parity (original script is made for split parity)
- Integrated the changes of user sburke to make it work with Debian 10 (the original script does not work on Debian 10)
I've tested it in my OMV5 VM and it works fine.
NOTES
- You can configure sync rules, but by default it always forces a sync
- It can pause containers so they don't mess things up, and restart them when finished. I disabled it, but if you want to use this feature, switch MANAGE_SERVICES=0 to 1 and list all your containers in SERVICES=.
- In OMV5 the script itself sends emails (you can specify your address in EMAIL_ADDRESS=). You might want to disable this feature or the one from the scheduled job on OMV (a rough sketch of these settings follows the output below).
The output itself is quite nice. I've kept the whole output; it's not syncing any personal data.
Code
##[COMPLETED] DIFF + SYNC + SCRUB Jobs (SnapRAID on **REDACTED**)
SnapRAID Script Job started [Thu Jan 9 20:22:00 CET 2020]
----------------------------------------
##Preprocessing
Testing that all parity files are present.
All parity files found. Continuing...
----------------------------------------
##Processing
###SnapRAID TOUCH [Thu Jan 9 20:22:00 CET 2020]
Checking for zero sub-second files.
No zero sub-second timestamp files found.
###SnapRAID DIFF [Thu Jan 9 20:22:00 CET 2020]
Loading state from /srv/dev-disk-by-label-DATI/snapraid.content...
Comparing...
update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/config.v2.json
update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/hosts
update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/hostname
update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/resolv.conf
update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83-json.log
update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/resolv.conf.hash
update docker-install/containers/bbed6403e1bc693f1ddbf39e30ec23f6d7cd667c16c5e8300721c6978906bc83/hostconfig.json
update docker-install/volumes/metadata.db
update docker-install/network/files/local-kv.db
update docker-install/buildkit/cache.db
update docker/portainer/data/portainer.db
update docker/portainer/data/config.json
24046 equal
0 added
0 removed
12 updated
0 moved
0 copied
0 restored
There are differences!
DIFF finished [Thu Jan 9 20:22:01 CET 2020]
**SUMMARY of changes - Added [0] - Deleted [0] - Moved [0] - Copied [0] - Updated [12]**
There are deleted files. The number of deleted files, (0), is below the threshold of (50). SYNC Authorized.
There are updated files. The number of updated files, (12), is below the threshold of (500). SYNC Authorized.
###SnapRAID SYNC [Thu Jan 9 20:22:01 CET 2020]
Self test...
Loading state from /srv/dev-disk-by-label-DATI/snapraid.content...
Scanning disk disco-a...
Using 14 MiB of memory for the file-system.
Initializing...
Resizing...
Saving state to /srv/dev-disk-by-label-DATI/snapraid.content...
Saving state to /srv/dev-disk-by-label-PARITY/snapraid.content...
Verifying /srv/dev-disk-by-label-DATI/snapraid.content...
Verifying /srv/dev-disk-by-label-PARITY/snapraid.content...
Syncing...
Using 16 MiB of memory for 32 cached blocks.
   disco-a 56% | **********************************
    parity  0% |
      raid  3% | *
      hash  0% |
     sched 39% | ***********************
      misc  0% |
               |_____________________________________________________________
                            wait time (total, less is better)
SYNC_JOB--Everything OK
Saving state to /srv/dev-disk-by-label-DATI/snapraid.content...
Saving state to /srv/dev-disk-by-label-PARITY/snapraid.content...
Verifying /srv/dev-disk-by-label-DATI/snapraid.content...
Verifying /srv/dev-disk-by-label-PARITY/snapraid.content...
SYNC finished [Thu Jan 9 20:22:04 CET 2020]
###SnapRAID SCRUB [Thu Jan 9 20:22:04 CET 2020]
Self test...
Loading state from /srv/dev-disk-by-label-DATI/snapraid.content...
Using 13 MiB of memory for the file-system.
Initializing...
Scrubbing...
Using 24 MiB of memory for 32 cached blocks.
SCRUB_JOB--Nothing to do
SCRUB finished [Thu Jan 9 20:22:04 CET 2020]
----------------------------------------
##Postprocessing
SnapRAID SMART report:
   Temp  Power   Error   FP Size
      C OnDays   Count        TB  Serial  Device    Disk
 -----------------------------------------------------------------------
      0      -       -  SSD  0.0       -  /dev/sda  disco-a
      0      -       -  SSD  0.0       -  /dev/sdb  parity
      0      -       -  SSD  0.0       -  /dev/sdc  -
      -      -       -    -  n/a       -  -  /dev/sr0  -
The FP column is the estimated probability (in percentage) that the disk is going to fail in the next year.
Probability that at least one disk is going to fail in the next year is 0%.
Spinning down disks...
Spindown...
Spundown device '/dev/sdb' for disk 'parity' in 36 ms.
Spundown device '/dev/sda' for disk 'disco-a' in 39 ms.
All jobs ended. [Thu Jan 9 20:22:05 CET 2020]
Email address is set. Sending email report to **REDACTED** [Thu Jan 9 20:22:05 CET 2020]
----------------------------------------
##Total time elapsed for SnapRAID: 0hrs 0min 5sec
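For reference, the settings mentioned in the notes above sit near the top of the script and look roughly like this; the values here are just examples, not my real ones:
Code
MANAGE_SERVICES=0                # set to 1 to pause/resume containers around the run
SERVICES="portainer nextcloud"   # example container names; list your own here
EMAIL_ADDRESS="you@example.com"  # address the script sends its report to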
-
The only reason I could come up with for using a 2-disk SNAPRAID would be bit rot protection. Also, SNAPRAID and a simple filesystem like EXT4 will work reasonably well with USB-connected drives.
(Setting bit rot protection aside, which is a big deal in itself, creating a simple mirror with Rsync provides roughly the same benefits.)
I have set up just this kind of backup strategy:
- one mirrored backup using Rsync.
- two disks set up in SnapRAID, one of them data and the other parity, both of them content (roughly as in the sketch below).
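In snapraid.conf terms, that layout comes down to something like this (the labels are placeholders, not my actual mount points):
Code
parity  /srv/dev-disk-by-label-PARITY/snapraid.parity
content /srv/dev-disk-by-label-DATA/snapraid.content
content /srv/dev-disk-by-label-PARITY/snapraid.content
data d1 /srv/dev-disk-by-label-DATA/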
It seemed logical to me, as I can easily fit what I am currently doing on one 8TB disk.
Everything worked well. I have an automatic scheduled Rsync twice a week to the mirrored backup, and I have run a SnapRAID sync once. Everything went well, except for two details (maybe three), so I am looking for some advice/clarification before I proceed into the unknown.
- SMART notified me that my Rsync mirror had bad sectors. In the process of unreferencing/unmounting the disk for replacement, I discovered that...
- I had inadvertently created an exclude rule (AppData) referencing the Rsync mirror disk instead of the SnapRAID Data disk.
- I have been adding a great number of photos to my data disk, as well as deleting duplicates. Since the bad sectors showed up I stopped the Rsync (waiting on a replacement disk), and I have stopped anything SnapRAID. Even though I have a full backup on a second machine, I want to wait to do anything until I get the new mirror backup disk installed and backed up.
My main question is how do I reset everything to that initial SnapRAID sync? Is that necessary? Is it possible? Without shedding blood? I would like a little nudge as to the right way to proceed. Thanks.
-
-
All of these scripts seem to be based on this one which I have been using for the last five years:
-
I checked my scheduled tasks and the report I attached to this thread was the report I was using. I think I was using the script unmodified but didn't remember because it has been so long since I used it.
Looking at the script, all the output from the sync and scrub commands is directed to the $TMP_OUTPUT variable, which is included in the email. It might be possible to include only partial command output in the email, but that's probably beyond my bash capabilities.
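Something along these lines might work, though it's untested and the grep pattern is only a guess:
Code
# untested idea: filter the noisy per-file lines out of $TMP_OUTPUT before it
# goes into the email, keeping only the summary (the pattern is a guess)
grep -vE '^(update|add|remove) ' "$TMP_OUTPUT" > "${TMP_OUTPUT}.short"
# ...then point the mail step at ${TMP_OUTPUT}.short instead of $TMP_OUTPUT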
-
I had inadvertently created an exclude rule (AppData) referencing the Rsync mirror disk instead of the SnapRAID Data disk.
You have me here - I'm not sure I understand what happened. Are files missing from your Rsync destination? (I suppose you want those files?)
I have been adding a great number of photos to my data disk, as well as deleting duplicates. Since the bad sectors showed up I stopped the Rsync (waiting on a replacement disk), and I have stopped anything SnapRAID. Even though I have a full backup on a second machine, I want to wait to do anything until I get the new mirror backup disk installed and backed up.
Regarding your Rsync disks, I'm not sure if you're talking about the source disk or the destination. If your source disk is still good, you're good to go. So, just speculating: was it your source disk that started to fail?
It seems as if you're asking about restoring the source disk with "SNAPRAID". I haven't done a full disk restore before. I haven't had to. With multiple backups, after I do significant work, I manually run a backup to ensure that new data is in at least two places. (If I forget, the automated processes take care of it.)
My main question is how do I reset everything to that initial SnapRAID sync? Is that necessary? Is it possible? Without shedding blood? I would like a little nudge as to the right way to proceed.
There's a recovery process in the SNAPRAID MANUAL, in section 4.4. The questions above would only make me ask you more questions:
- If you have at least one good backup, why would you want to do a SNAPRAID restore?
- Restored data would only be as current as the last SYNC operation. (You would know, better than I, if there's a compelling reason to go back to that state of your data.) It would seem as if you'd be losing work in any case.
If I were in your place:
If I had at least one good, clean backup on another platform, and if there's a compelling reason, I might try a SNAPRAID restore. In any case, my inclination would be to take the lowest-risk, safest path possible. That means I wouldn't do anything with, or try to reuse or fix, the disk with bad sectors before the replacement disk arrives.
When you have your replacement disk and it's restored from your backup, even if the backup is a bit out of date, you could mount the failing disk and see what missing files (new work) can be rescued from it using something like Midnight Commander.
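(For reference, the full-disk recovery in section 4.4 of the manual boils down to a command along these lines, run after the replacement disk has been mounted and the config points at it; "d1" stands for whatever name the data disk has in snapraid.conf.)
Code
snapraid -d d1 -l fix.log fix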
-
-
I’m sorry. I guess I tried to cram too much in one post. Edit: Looking at what’s below, I’m afraid I’m about to do it again.
I’m using three SATA disks but only two are in SnapRAID: one Data and the other Parity, both Content. Both of those are physically okay and SnapRAID appeared to run fine the two times I ran sync and scrub about a week ago. The third disk is a mirrored backup of the Data drive just mentioned, via Rsync, per your guide on p. 64. It is not part of the SnapRAID array.
The third disk showed up with bad sectors about a week ago, so today I am going to swap it out for a new disk that just arrived and rsync it from the first drive. All is well up to this point. No data is really in jeopardy. I even have a remote backup on another machine.
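The repopulation itself will be a plain mirror along these lines (the mount points are placeholders for my data disk and the new mirror disk):
Code
rsync -avh --delete /srv/dev-disk-by-label-DATA/ /srv/dev-disk-by-label-MIRROR/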
When I tried to unmount the bad mirrored disk, it wouldn't unmount. I discovered that I had inadvertently created a SnapRAID rule pointing to that disk.
What I am wondering about is how to start sync, scrub, etc., what to expect from error output, and what to do with it. I have corrected the exclusion rule (AppData) to point to the Data disk.
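(In plain snapraid.conf terms, outside the plugin GUI, the corrected rule amounts to something like the line below, where the leading slash anchors the pattern to the root of the data disk; the plugin may write it differently.)
Code
exclude /AppData/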
Combine that with the fact that I have been adding, deleting, renaming, moving tons of photo files in my Data disk since I last ran a sync.
Despite tons of reading up, both at SnapRAID and on this forum, I'm not sure of the steps to start sync, scrub, fix, etc. again.
-
When I tried to unmount the bad mirrored disk, it wouldn't unmount. I discovered that I had inadvertently created a SnapRAID rule pointing to that disk.
The SNAPRAID rule is probably why the RSYNC disk wouldn't unmount. Clear the rule and it should unmount. If not, look in Filesystems to see if the disk is "referenced".
Combine that with the fact that I have been adding, deleting, renaming, moving tons of photo files in my Data disk since I last ran a sync.
You're still good, if your SNAPRAID data disk is OK. The SYNC updates the parity drive and the content file(s) only, and it becomes (in a sense) your new backup at the completion of the SYNC operation.
If you're not doing a SYNC manually on a regular basis, you might think about automating a SYNC command to run once or twice a month. It can be done in Scheduled Tasks with something like the following command: snapraid touch; snapraid sync -l snapsync.log
But note there are other housekeeping commands you should consider running before the next SYNC command. The order I run is:
snapraid touch; snapraid sync -l snapsync.log
snapraid -p 100 -o 13 scrub
snapraid -e fix
The first command does a touch that fixes the annoying "0" sub-second timestamp issue. It then runs a sync and directs the output to a log named snapsync.log.
A few days before the next SYNC operation, I run the scrub.
Then the fix command is run one day after the scrub, to fix issues (bit-rot, etc.) found in the scrub.
In Scheduled Tasks, all are set up to send an e-mail of their output.
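(If you prefer plain cron over the Scheduled Tasks GUI, the equivalent would look something like the lines below; the days and times are only examples.)
Code
# /etc/crontab style entries; days and times are examples only
0 3 1,15 * *   root  snapraid touch; snapraid sync -l snapsync.log
0 3 12,26 * *  root  snapraid -p 100 -o 13 scrub
0 3 13,27 * *  root  snapraid -e fix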
(*And note that, even with the above, you can manually run a new SYNC operation after doing a lot of work that you want to ensure is backed up.*)
That's how I do it. Others may have other ideas.
-
I think if you updated that exclusion rule, you can remove the bad disk and replace it. Snapraid should still function normally with the other 2 drives.
I would say you have a rather odd setup. Why bother using parity when you only have 1 data disk? Do you plan to add more disks in the future?
-
-
Why bother using parity when you only have 1 data disk? Do you plan to add more disks in the future?
The SnapRAID is to protect against data corruption and the Rsync is for a true backup. I have 8TB disks and they are running at about 23% capacity. I will probably add disks some day.
-
I checked my scheduled tasks and the report I attached to this thread was the report I was using. I think I was using the script unmodified but didn't remember because it has been so long since I used it.
Looking at the script, all the output from the sync and scrub commands is directed to the $TMP_OUTPUT variable, which is included in the email. It might be possible to include only partial command output in the email, but that's probably beyond my bash capabilities.
Thanks but don't worry, I found a better script in the meantime, you can get it in my previous posts. Works quite well.
-
@crashtest Thanks for the information, especially the order and explanation of the commands.
Regarding not being able to unmount the failed destination drive, I knew it was a reference issue, but I "knew" nothing was referenced to it. Finally I picked up a stray comment somewhere about SnapRAID rules. Sure enough, I had meant to exclude the AppData folder on the SnapRAID Data disk but had mistakenly written the Rsync destination disk into the rule. The first sync I did showed a bunch of AppData lines in the output. I should have suspected something from the get-go, but I didn't have the experience to know what it meant.
@jollyrogr At post 552 of this thread @crashtest stated a two-disk SnapRAID was possible, so I tried it. Here was my thinking:
- I want bit rot protection.
- I only have three 8TB disks.
- I'm not currently running low on space, plus I didn't want (or need) UnionFS complicating the process.
- I don’t want to forfeit my full-disk mirror via Rsync.
-
-
-
It seems that since one of the latest OMV updates (I am on 4.1.35 at the moment) I am not getting email notifications for the scheduled SnapRAID sync/diff. Email notifications are enabled in the plugin settings, and I am getting emails about available updates, so the notification function as such is working normally. Has anyone else noticed this behaviour?
-
I have installed SnapRaid on OMV5 with unionFS where I put 3 disks in a pool and 1 disk as parity.
I see a lot to configure in the GUI and I see that I can schedule a diff.
But there is no real info on what to do first. I was/am expecting that there would also be some scheduled sync and scrub, or am I missing this?
It seems that I first have to move my 17TB of data before I can do a sync.
Is there some 'what to do next' after it has been configured?
-
-
You should probably read the snapRAID manual and FAQ before trying to use the program.
-
You should probably read the snapRAID manual and FAQ before trying to use the program.
Been there, done that.
I know what I can do, but OMV5 has a plugin, and I just need to know the scope of what the plugin can do. It is not stated (I could not find it) what I need to do after I manually sync/scrub and set up a diff.
That is within the scope of OMV5 SnapRAID plugin usage, not the scope of the SnapRAID FAQ.
So in short:
Do I need to do things manually, or does the OMV5 plugin for SnapRAID have all the features in the GUI to run unattended, or does it need weekly manual maintenance? (Because only the diff can be scheduled.)