snapraid isn't in the top list because it completed. But there is something wrong because the script doesn't end properly.
Do you see any reference to markdown in the top list?
Is there anything related in /tmp?
Put a valid email address in for EMAIL_ADDRESS=
You can use a fully qualified email address or a bare username on the system that gets mail and that you read that user's mail.
Make sure you have installed the required python-markdown package.
I suggest also installing the discount package and making the below change in the script:
Change this line from:
$MAIL_BIN -a 'Content-Type: text/html' -s "$SUBJECT" "$EMAIL_ADDRESS" < <(python -m markdown $TMP_OUTPUT)
Change to:
$MAIL_BIN -a 'Content-Type: text/html' -s "$SUBJECT" "$EMAIL_ADDRESS" < <(markdown $TMP_OUTPUT)
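For anyone unfamiliar with the `< <(...)` construct above, here is a self-contained illustration of the pattern (not the actual script's code: `render` stands in for discount's markdown binary and `consume` for $MAIL_BIN, so the sketch runs without either installed):

```shell
#!/usr/bin/env bash
# Toy stand-ins so the sketch is runnable without discount or mail:
render()  { sed 's|^# \(.*\)|<h1>\1</h1>|' "$1"; }  # toy markdown -> HTML
consume() { cat; }                                  # stand-in for $MAIL_BIN

TMP_OUTPUT=$(mktemp)
printf '# SnapRAID report\n' > "$TMP_OUTPUT"

# Same shape as: $MAIL_BIN ... < <(markdown $TMP_OUTPUT)
# The process substitution runs the converter and feeds its
# output to the consumer's stdin.
html=$(consume < <(render "$TMP_OUTPUT"))
echo "$html"    # -> <h1>SnapRAID report</h1>

rm -f "$TMP_OUTPUT"
```

Note this requires bash (process substitution is not POSIX sh), which is one reason the script must not be run with `sh script.sh`.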
Previously, I had not yet installed the discount package, nor made the modification you suggested inside the send_mail() function.
So I just installed discount, modified the script, re-ran, and no difference. No reference to markdown in top. I don't see anything relevant in /tmp except snapRAID.out
Here are the contents of /tmp
root@OpenMediaVault:/tmp# ls -la
total 24
drwxrwxrwt 9 root root 280 Mar 6 12:15 .
drwxrwxr-x 20 root root 4096 Jan 30 12:30 ..
-rw------- 1 root root 204 Mar 5 14:50 bgoutput03P0tr
-rw------- 1 root root 204 Mar 5 14:57 bgoutputXu3KFo
-rw------- 1 root root 2029 Mar 5 14:50 bgstatusrm7mHj
-rw------- 1 root root 2029 Mar 5 14:57 bgstatusXn0Tve
drwxrwxrwt 2 root root 40 Feb 28 12:46 .font-unix
drwxrwxrwt 2 root root 40 Feb 28 12:46 .ICE-unix
-rw-r--r-- 1 root root 940 Mar 6 12:24 snapRAID.out
drwx------ 3 root root 60 Feb 28 12:46 systemd-private-dce2cf87c3be49f6be589cf4c7cb8b97-chrony.service-LaHgeT
drwx------ 3 root root 60 Feb 28 12:46 systemd-private-dce2cf87c3be49f6be589cf4c7cb8b97-systemd-resolved.service-0uUfGR
drwxrwxrwt 2 root root 40 Feb 28 12:46 .Test-unix
drwxrwxrwt 2 root root 40 Feb 28 12:46 .X11-unix
drwxrwxrwt 2 root root 40 Feb 28 12:46 .XIM-unix
Here is the output of top once the script freezes. The only difference is that snapraid shows in the list until the script freezes.
top - 12:24:28 up 5 days, 23:38, 2 users, load average: 0.00, 0.08, 0.10
Tasks: 361 total, 1 running, 344 sleeping, 1 stopped, 15 zombie
%Cpu(s): 0.0 us, 0.1 sy, 0.0 ni, 99.9 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 15961.0 total, 4524.2 free, 1108.8 used, 10328.0 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 14079.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
10 root 20 0 0 0 0 I 0.3 0.0 7:01.96 rcu_sched
1438 root 20 0 1762216 60352 5636 S 0.3 0.4 4:13.78 dockerd
4843 root 20 0 0 0 0 I 0.3 0.0 0:15.99 kworker/0:2-events
20772 root 20 0 11376 3848 3092 R 0.3 0.0 0:01.92 top
1 root 20 0 170724 9028 5600 S 0.0 0.1 0:42.68 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.11 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
8 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
9 root 20 0 0 0 0 S 0.0 0.0 1:05.70 ksoftirqd/0
11 root rt 0 0 0 0 S 0.0 0.0 0:01.72 migration/0
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0
14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/1
15 root rt 0 0 0 0 S 0.0 0.0 0:01.80 migration/1
16 root 20 0 0 0 0 S 0.0 0.0 0:22.26 ksoftirqd/1
18 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/1:0H-kblockd
19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/2
20 root rt 0 0 0 0 S 0.0 0.0 0:01.87 migration/2
21 root 20 0 0 0 0 S 0.0 0.0 0:25.05 ksoftirqd/2
23 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/2:0H-kblockd
24 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/3
25 root rt 0 0 0 0 S 0.0 0.0 0:01.85 migration/3
26 root 20 0 0 0 0 S 0.0 0.0 0:14.29 ksoftirqd/3
28 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/3:0H-kblockd
29 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/4
30 root rt 0 0 0 0 S 0.0 0.0 0:01.69 migration/4
31 root 20 0 0 0 0 S 0.0 0.0 0:36.10 ksoftirqd/4
33 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/4:0H-kblockd
34 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/5
35 root rt 0 0 0 0 S 0.0 0.0 0:01.79 migration/5
36 root 20 0 0 0 0 S 0.0 0.0 0:23.94 ksoftirqd/5
38 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/5:0H-kblockd
39 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/6
40 root rt 0 0 0 0 S 0.0 0.0 0:01.80 migration/6
41 root 20 0 0 0 0 S 0.0 0.0 1:26.54 ksoftirqd/6
43 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/6:0H-kblockd
44 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/7
45 root rt 0 0 0 0 S 0.0 0.0 0:01.79 migration/7
46 root 20 0 0 0 0 S 0.0 0.0 2:04.05 ksoftirqd/7
48 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/7:0H-kblockd
49 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/8
50 root rt 0 0 0 0 S 0.0 0.0 0:01.86 migration/8
51 root 20 0 0 0 0 S 0.0 0.0 3:12.11 ksoftirqd/8
53 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/8:0H-kblockd
54 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/9
55 root rt 0 0 0 0 S 0.0 0.0 0:01.84 migration/9
56 root 20 0 0 0 0 S 0.0 0.0 0:12.75 ksoftirqd/9
58 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/9:0H-kblockd
59 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/10
60 root rt 0 0 0 0 S 0.0 0.0 0:01.79 migration/10
61 root 20 0 0 0 0 S 0.0 0.0 0:12.54 ksoftirqd/10
63 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/10:0H-kblockd
64 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/11
65 root rt 0 0 0 0 S 0.0 0.0 0:01.81 migration/11
Look at the end of the /tmp/snapRAID.out file. There may be clues in it.
I did, the output is identical to the console output from the script.
Is the script getting far enough to actually send email?
No, it is not. And actually it seems snapraid may not be ending correctly. Here is my PuTTY log from the first run of the script, when there actually were differences that snapraid needed to sync.
You'll see the script halts while snapraid is running. snapraid dropped out of the process list during this first run and the script froze.
Nothing else ever starts in the process list, only snapraid, which then terminates and the script freezes.
Snapraid always runs fine manually or via the builtin omv-snapraid-diff script
I've attached snapScript2.sh. I've appended .txt to the name so it would allow me to upload.
root@OpenMediaVault:/home/scripts# ./snapScript2.sh
SnapRAID Script Job started [Sat 06 Mar 2021 02:54:29 AM EST]
----------------------------------------
##Preprocessing
###Stop Services [Sat 06 Mar 2021 02:54:29 AM EST]
./snapScript2.sh: line 379: /opt/sophos-av/bin/savdstatus: No such file or directory
###Remove Zero Byte NFOs [Sat 06 Mar 2021 02:54:29 AM EST]
Removing any 0 byte .nfo's before SnapRAID execution.
find: /mnt/volume/media: No such file or directory
----------------------------------------
##Processing
###SnapRAID TOUCH [Sat 06 Mar 2021 02:54:29 AM EST]
Checking for zero sub-second files.
Found 8645 files with zero sub-second timestamp.
Running TOUCH job to timestamp. [Sat 06 Mar 2021 02:54:59 AM EST]
Loading state from /srv/dev-disk-by-label-1TBblack01/snapraid.content...
Setting sub-second timestamps...
----expected errors on a bunch of files I clipped out, as these were moved after the last snapraid diff------
Error opening file '/srv/dev-disk-by-label-WD2TBblackFAEX01/Storage/_Downloads/Overclocking Tools/MSIAfterburnerSetup/4.6.2/MSIAfterburnerSetup462.exe'. No such file or directory.
-------------
Saving state to /srv/dev-disk-by-label-1TBblack01/snapraid.content...
Saving state to /srv/dev-disk-by-label-2TBblack01/snapraid.content...
Saving state to /srv/dev-disk-by-label-2TBgreen01/snapraid.content...
Saving state to /srv/dev-disk-by-label-3TBred01/snapraid.content...
Saving state to /srv/dev-disk-by-label-4TBPurple/snapraid.content...
Saving state to /srv/dev-disk-by-label-WD2TBblackFAEX01/snapraid.content...
Saving state to /srv/dev-disk-by-label-6TBHitachi/snapraid.content...
Saving state to /srv/dev-disk-by-label-2TBblack03/snapraid.content...
Verifying /srv/dev-disk-by-label-1TBblack01/snapraid.content...
Verifying /srv/dev-disk-by-label-2TBblack01/snapraid.content...
Verifying /srv/dev-disk-by-label-2TBgreen01/snapraid.content...
Verifying /srv/dev-disk-by-label-3TBred01/snapraid.content...
Verifying /srv/dev-disk-by-label-4TBPurple/snapraid.content...
Verifying /srv/dev-disk-by-label-WD2TBblackFAEX01/snapraid.content...
Verifying /srv/dev-disk-by-label-6TBHitachi/snapraid.content...
Verifying /srv/dev-disk-by-label-2TBblack03/snapraid.content...
Verified /srv/dev-disk-by-label-2TBblack01/snapraid.content in 6 seconds
Verified /srv/dev-disk-by-label-2TBblack03/snapraid.content in 6 seconds
Verified /srv/dev-disk-by-label-6TBHitachi/snapraid.content in 7 seconds
Verified /srv/dev-disk-by-label-1TBblack01/snapraid.content in 8 seconds
Verified /srv/dev-disk-by-label-2TBgreen01/snapraid.content in 10 seconds
Verified /srv/dev-disk-by-label-4TBPurple/snapraid.content in 11 seconds
Verified /srv/dev-disk-by-label-3TBred01/snapraid.content in 11 seconds
Verified /srv/dev-disk-by-label-WD2TBblackFAEX01/snapraid.content in 12 seconds
Using 1375 MiB of memory for the file-system.
SnapRAID will not sync via the script because you have SYNC_WARN_THRESHOLD=-1.
Read the explanation above that setting and set the value accordingly.
Not sure changing that will solve all the problems, but it should get further.
# Set number of warnings before we force a sync job.
# This option comes in handy when you cannot be bothered to manually
# start a sync job when DEL_THRESHOLD is breached due to false alarm.
# Set to 0 to ALWAYS force a sync (i.e. ignore the delete threshold above)
# Set to -1 to NEVER force a sync (i.e. need to manual sync if delete threshold is breached)
#SYNC_WARN_THRESHOLD=3
SYNC_WARN_THRESHOLD=-1
Let me know if I am interpreting this wrong. The way I understand it, the script will skip the sync only if the specified delete threshold is exceeded.
On the first run, I made sure the delete threshold was set above the number of files deleted.
I do not want a sync forced when the delete threshold is exceeded; that is the whole reason I am using this custom script, because that is exactly what is broken in the built-in omv-snapraid-diff script.
The built-in script will sync regardless of whether the delete threshold is exceeded, which can cause data loss if the deletions were not intended.
A sync should run since the delete threshold is not breached / exceeded.
If the delete threshold is exceeded, a sync will never run, until I investigate and do a manual sync. This is the exact behavior I want.
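To make sure we're talking about the same logic, here is a sketch of how I understand the threshold interaction (illustrative bash only; the variable names mirror the config above, but this is not the actual script's code, and the warn-count file is my own invention for the sketch):

```shell
#!/usr/bin/env bash
# Hedged sketch of typical DEL_THRESHOLD / SYNC_WARN_THRESHOLD behavior
# in snapraid diff scripts. Names are illustrative, not the real script.
DEL_THRESHOLD=50
SYNC_WARN_THRESHOLD=-1     # -1: never force a sync past the delete threshold
SYNC_WARN_FILE=$(mktemp)   # persists the warning count between runs

should_sync() {
  local deleted=$1 warn_count
  if (( deleted <= DEL_THRESHOLD )); then
    : > "$SYNC_WARN_FILE"            # under threshold: reset counter, sync
    echo yes; return
  fi
  if (( SYNC_WARN_THRESHOLD < 0 )); then
    echo no; return                  # -1: always defer to a manual sync
  fi
  warn_count=$(cat "$SYNC_WARN_FILE" 2>/dev/null)
  warn_count=$(( ${warn_count:-0} + 1 ))
  echo "$warn_count" > "$SYNC_WARN_FILE"
  if (( SYNC_WARN_THRESHOLD == 0 || warn_count >= SYNC_WARN_THRESHOLD )); then
    echo yes                         # 0: always force; else force after N warnings
  else
    echo no
  fi
}

echo "10 deleted -> sync? $(should_sync 10)"   # -> yes (under threshold)
echo "99 deleted -> sync? $(should_sync 99)"   # -> no  (threshold -1 never forces)
rm -f "$SYNC_WARN_FILE"
```

Under this reading, -1 only blocks the sync in the over-threshold case, which is exactly the behavior I described.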
That being said, it would be nice if the built-in script was fixed to work correctly.
I suggest changing that setting to a positive value and running the script again. Use a large value so that, if the delete threshold is reached, a sync will not be forced until the script has run that many times.
You can also try another script. I have used this one in the past.
If you are convinced the built in script is broken, then report a bug against it.
Ok, I determined the script was hanging at the end of snapraid diff. Then I started looking at the comments for the script on its GitHub page (something I should have done in the first place): https://gist.github.com/mtompk…be36064c237da3f39ff5cc49d - I now see that you commented there over a year ago, and based on those comments you were having the exact same issue I'm having.
You, along with a few others, worked through getting a final working version by doing the following.
Put a valid email address in for EMAIL_ADDRESS=
You can use a fully qualified email address or a bare username on the system that gets mail and that you read that user's mail.
Make sure you have installed the required python-markdown package.
I suggest also installing the discount package and making the below change in the script:
Change this line from:
$MAIL_BIN -a 'Content-Type: text/html' -s "$SUBJECT" "$EMAIL_ADDRESS" < <(python -m markdown $TMP_OUTPUT)
Change to:
$MAIL_BIN -a 'Content-Type: text/html' -s "$SUBJECT" "$EMAIL_ADDRESS" < <(markdown $TMP_OUTPUT)
After doing all this, the script is currently running without hanging so far. It made it through snapraid diff and snapraid sync, and is currently scrubbing the array according to my parameters. I will report back if it has any issues once it reaches the markdown section and sends out the email, but I remain optimistic. Thank you for helping me get this far!
I've attached the working version for anyone who runs into this.
I forgot about the wait commands. But the rest was all good.
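For anyone following along, a minimal illustration of what the `wait` commands are for when a script backgrounds a long job (illustrative only, not the actual script's code): without `wait`, the parent script races ahead of the backgrounded snapraid process and loses its exit status.

```shell
#!/usr/bin/env bash
# Minimal illustration of waiting on a backgrounded job and capturing
# its exit status - the pattern the "wait commands" refer to.
long_job() { sleep 0.1; return 3; }   # stand-in for a backgrounded snapraid run

long_job &
job_pid=$!

# wait returns the exit status of the waited-for job
wait "$job_pid"
status=$?
echo "job finished with status $status"   # -> job finished with status 3
```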
I occasionally get a failure to send the mail. Running markdown on the output file in /tmp shows it's caused by something the converter doesn't like in the file, though I don't recall ever finding out exactly what it was.
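One way to narrow that down is to feed the report to the converter chunk by chunk and see which chunk fails. A hedged sketch (the `find_bad_chunk` helper and the stub `converter` are my own inventions for illustration; in practice you would replace `converter` with discount's actual `markdown` binary):

```shell
#!/usr/bin/env bash
# Stub converter so the sketch runs without discount installed:
# it rejects any chunk containing the token "BAD".
converter() { grep -q 'BAD' "$1" && return 1; cat "$1"; }

# Report the first chunk of the file that fails to convert.
find_bad_chunk() {
  local file=$1 chunk_size=${2:-10} start=1 total
  total=$(wc -l < "$file")
  while (( start <= total )); do
    sed -n "${start},$(( start + chunk_size - 1 ))p" "$file" > /tmp/chunk.$$
    if ! converter /tmp/chunk.$$ > /dev/null; then
      echo "conversion fails in lines ${start}-$(( start + chunk_size - 1 ))"
      rm -f /tmp/chunk.$$; return
    fi
    (( start += chunk_size ))
  done
  rm -f /tmp/chunk.$$
  echo "all chunks convert cleanly"
}

# Demo: 15 good lines, then one the converter rejects on line 16.
printf 'ok line\n%.0s' {1..15} > /tmp/report.$$
echo 'BAD token here' >> /tmp/report.$$
find_bad_chunk /tmp/report.$$ 10    # -> conversion fails in lines 11-20
rm -f /tmp/report.$$
```

Running it on the real /tmp output file with discount's markdown as the converter should at least tell you which lines of the report trip it up.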
You may want to upload your version of the script as a fork.