Thank you, @Adoby. I was able to get the script working using your trick.
At 3 AM, the script is run as root using this command:
cd /usr/sbin && /bin/bash ratsscript.sh
And then ratsscript.sh looks like this (may be overkill, but it works for me):
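In outline (just a sketch, not the verbatim file — the real scripts are in the attachments/pastebin links below), it simply runs the three sub-scripts in order:

Code:
#!/bin/bash
# Sketch only: stop Docker, run SnapRAID, then start Docker again.
cd /usr/sbin || exit 1
/bin/bash ratsscript1.sh   # shut down all Docker containers to free memory
/bin/bash snapScript.sh    # run the SnapRAID script
/bin/bash ratsscript2.sh   # start the Docker containers again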
@Adoby, thanks for taking the time to respond. All of the scripts are located in /usr/sbin. I just looked again and didn't notice any reference to /usr/bin; if you could point that out, I will correct it. To be honest, I don't know anything about the "proper" location for scripts, so I just put them in the same location as the SnapRAID plugin's script (which I am not currently using). The Scheduled Job is set up as:
It appears that OMV takes this information and automatically adds an entry in the cron.d file mentioned in the error message. The contents of that file are:
I have no problem using the command line via SSH, but I prefer to use OMV built-in features whenever possible since I know the OS can overwrite certain config files. Also, since everything is located on the OS drive, there should be no issues with running from a non-executable location. As I mentioned, if I manually run the script from the Scheduled Job tab, everything works perfectly fine. I just can't figure out why cron can't locate the file when it is clearly located in the correct place and is executable.
Recently, I began using a third-party SnapRAID script in place of the one packaged with the SnapRAID plugin. I used this on its own, called at 2AM through a Scheduled Job, for a week or so, until it began failing due to not having enough available memory. I realized that shutting down all my Docker containers freed up sufficient memory for the script to run, so I am now attempting to automate the whole process.
I have created a "master" script that executes three individual scripts to shut down all Docker containers, then run the SnapRAID script, and finally, to start the Docker containers again. I added a Scheduled Job to run this master script at 3AM every day. When I test it by running manually from the Scheduled Job tab, all three scripts are called in order and I receive output indicating successful completion of the process. However, at 3AM, I receive an email with the error message:
/var/lib/openmediavault/cron.d/userdefined-b5b64375-44ac-49f9-8c7f-b498c412336d: 4: /var/lib/openmediavault/cron.d/userdefined-b5b64375-44ac-49f9-8c7f-b498c412336d: /usr/sbin/ratsscript.sh: not found
Attached are the scripts: master (ratsscript.sh), shut down Docker (ratsscript1.sh), run SnapRAID (snapScript.sh, not my script), and start Docker (ratsscript2.sh). I am absolutely a Linux beginner and probably overlooked something simple. Any guidance would be appreciated. (Added .txt extensions to allow upload here).
Edit: Pasting my scripts here -
https://pastebin.com/8mW02Uhy (script 1)
https://pastebin.com/G3rjgWpc (script 2)
After rebooting the server, the same behavior returned, and disabling quotas again fixed the issue. Again, can anybody advise how to disable quotas permanently so that the change persists through a reboot?
Well, I spoke too soon, because NZBGet has begun reporting the "out of space" errors again. I will have to do some more digging.
Edit: Upon more research, disk quotas seem to be the issue. I have never set up any quotas since installing OMV, but I noticed NZBGet throwing an error about exceeding disk quotas. So, I disabled them with sudo quotaoff -a and re-ran a few downloads that had just failed to unpack. This time, all three downloads unpacked successfully. Is there a more permanent way to ensure that quotas do not get re-enabled automatically on a reboot or other system event? Can I simply remove the quota arguments in the mntent sections of the config.xml file?
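For reference, the quota commands I've been using (standard quota tools; whether something in OMV turns quotas back on after a reboot is exactly what I'm trying to pin down):

Code:
sudo quotaoff -a    # turn off user/group quotas on all filesystems (what I ran)
sudo quotaon -pa    # print the current quota state for all filesystems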
@trapexit I played around in the terminal trying to figure out a command that would cause an error and had no luck. Moves, copies, unpacking, etc. all worked fine, and the issues continued to be experienced only within Docker containers.
For the containers that were affected, I removed the mapping of a storage container and mapped the path instead, and it seems the problem may be resolved (at least for now, I have been downloading new media for a couple of days without issue). So either the Docker plugin isn't playing nice with MergerFS, or vice versa.
Your post of the fstab entry is truncated. Don't use nano to view it; try cat instead.
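For example, either of these will print the full line without an editor wrapping or cutting it off:

Code:
cat /etc/fstab
# or just the mergerfs line:
grep mergerfs /etc/fstab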
Whoops, edited. And this wouldn't fit in my previous post.

Code:
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/ade951dd0e8f5b7119cccddaac734b2a08db609474ad77b99e8d05f3a0b36edb/merged
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/487d99959ec09c63add4ec7c7639defaff8beb6668e7ef57db1c5bc22db4215f/merged
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/8ea79bc4014b12acdacb74db01a06130e9712c3076fcaabb69e48fecbfe0f3a1/merged
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/510e6be772064fd62df2a5620ee3ddf7a415da2587fc2f125276a715459a41c7/merged
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/cfb106e20749a59868102ff3e5b305282ae7d3c4290fea6b9b8e70bc36ebf9b3/merged
shm       64M     0   64M   0% /var/lib/docker/containers/50c005696bc78bc6acfd3bd29985d5e50fc0a581ad1af461ec7fdc2efe7c3720/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/375fcb7c1f6cafd39dc23655ad557b6adb64d1f9605adc517be8ba0a796b9324/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/a2d86ff87e3cac7ab7472990dd15ce8666e657160d92d2c786fc6e30bdb8101b/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/e31f357fa3738607b350a43cf9863d0e91afcbbe1b6092dce0f7e898d4709491/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/57129d214670c370355f2f82531b745105463deee86933c16c53bb0e9b94faa4/mounts/shm
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/014f7b14e32e7c552a2e435388ad3d4a8989eb9fb759abe9c4688d2e886aad2b/merged
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/b56f93e33fd23ece3c5f8d89962b787c1c0ea137e58b48968778ddc62c9962b6/merged
shm       64M     0   64M   0% /var/lib/docker/containers/4d716ff44ec7dc01f7c066b1e006fc55f206c6715ed02b7ac1d33263aa0c70fc/mounts/shm
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/526e0cd6461eed89758eea69b3770ea2b2eaa554093109e89e3e7715cf4ee901/merged
shm       64M  4.0K   64M   1% /var/lib/docker/containers/5718d146e23e0f4c2e33bbb2b1c521c5f83ffc511566ed8480b5770e635662d3/mounts/shm
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/68ee4c513ddfe951547b6e1c47c0b91892ae75e0f8f4965aa472edb8b0c4d1f3/merged
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/c3fff96d65bdd15a24ebea982f39103f0d9284258aad5adc142582eee7099322/merged
shm       64M     0   64M   0% /var/lib/docker/containers/901d6d631ea9e696d4240c569e04e969921f0829e3bb817b2fdc886d7d645918/mounts/shm
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/40d21743aca65595f31093efa78dd0a070a37cf4347dd5fdadb3828078c0549f/merged
shm       64M     0   64M   0% /var/lib/docker/containers/0e4eb40f7911cb0a88a849ff09091f80b49948f3af2454474bbcc15bfda105dd/mounts/shm
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/3db9692b7bda1e69ed9f515929ffeb841d82c6d40a2524552d3fc4fefffa5995/merged
shm       64M  4.0K   64M   1% /var/lib/docker/containers/0b3d306eac3dc115dc2d6174e49b5b8eaa324e97cf59f785d3b99002d19eaaf4/mounts/shm
overlay  108G  8.2G   94G   9% /var/lib/docker/overlay2/10e5a635c8cec040fa2ffb921708587688ac1a78425a9128e7b3699cdac5c3bb/merged
shm       64M     0   64M   0% /var/lib/docker/containers/cc8911ac1405d6c77522e5c5149f6115d3152b3e02b858776baf35f5e0ab31e1/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/526d4a276ac6128a206da8f87b6a5c16caec4729e7c0d2d0228dc6dd746a2672/mounts/shm
Thank you @trapexit. I didn't want to waste your time poring over my data if I found another culprit. However, so far I've been unable to make any progress, so here we are:
I am not sure how to run strace on a Docker container that's running and producing these errors. Is there a simple command I could run in the terminal that could be traced instead?

Code:
/srv/dev-disk-by-label-b1:/srv/dev-disk-by-label-b2:/srv/dev-disk-by-label-b3:/srv/dev-disk-by-label-b4:/srv/dev-disk-by-label-a1:/srv/dev-disk-by-label-a2:/srv/dev-disk-by-label-a3:/srv/dev-disk-by-label-a4 /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf fuse.mergerfs defaults,allow_other,use_ino,dropcacheonclose=true,category.create=mfs,minfreespace=20G 0 0
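In the meantime, the sort of simple test I had in mind was something like this (just a sketch; the test file path is made up, and the pool mount point is taken from my fstab above):

Code:
# write a file straight into the pool and capture the syscalls (including any ENOSPC) to a log
strace -f -o /tmp/mergerfs-trace.log cp /tmp/testfile /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf/Share/testfile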
> I can't seem to find instructions for capturing info in the mergerfs docs
> it appears as though data has begun accumulating on disk a4
The drives seem to have similar % usage. Regardless... if it's going to one drive then it's probably due to your config (path preservation or a policy which is targeting that branch).
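For example, you can confirm which create policy mergerfs is actually running with by reading the runtime config through the control file at the root of the pool (assuming getfattr from the attr package is installed; mount point taken from your fstab):

Code:
getfattr -n user.mergerfs.category.create /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf/.mergerfs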
I just deleted and re-created the mergerfs pool again, and immediately after mapping everything to the new pool, data can be downloaded/unpacked via nzbget and hash checks no longer fail in rtorrent/rutorrent. The strange part is that the data itself seemed to download fine; it was just the unpacking/verification stages that were failing.
I have no doubt that there is some issue external to mergerfs that is causing this behavior, I just don't know where to begin.
Unfortunately, there is nothing I can do without additional information. If it's saying you're out of space then something is returning that. The only time mergerfs explicitly returns ENOSPC is when all drives become filtered and at least one reason was minfreespace.
Next time an error occurs please gather the information as mentioned in the docs or at least `df -h`.
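For example, captured while the failure is actually happening (pool path taken from your fstab):

Code:
df -h                                              # free space on every branch as the OS sees it
df -h /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf    # the pool itself
grep mergerfs /etc/fstab                           # the mount options in use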
I must be overlooking something because I can't seem to find instructions for capturing info in the mergerfs docs.
Edit: For some reason, it appears as though data has begun accumulating on disk a4 exclusively instead of spreading across the disks as intended.
Here is the output from df -h:

Code:
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/68099a0c5263c839e83ab0c11e41f16d17b9d201d6cf11701f2c23ab6e6227db/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/3002c53e239177ad4763ef146daf2d210fb33653d0c234aeb7562178ddd37f22/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/cfb106e20749a59868102ff3e5b305282ae7d3c4290fea6b9b8e70bc36ebf9b3/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/e3b3ab963f27de18bad0de89add2a24d98734fb34b3306a744562920b56dbbbb/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/8ea79bc4014b12acdacb74db01a06130e9712c3076fcaabb69e48fecbfe0f3a1/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/4af58910ec5218bb0f673cec41b6dce829faa12750d2a5faccd93bb0cf8f8c0d/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/5d91da61359ef9e3a4a2e6c97e164870893d7760fe4d19aaf20043af376f3b61/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/487d99959ec09c63add4ec7c7639defaff8beb6668e7ef57db1c5bc22db4215f/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/94ab641e2a014280547be32e57e1052d82da51b3c0ef567fdf68db51acfbeb8b/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/ade951dd0e8f5b7119cccddaac734b2a08db609474ad77b99e8d05f3a0b36edb/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/510e6be772064fd62df2a5620ee3ddf7a415da2587fc2f125276a715459a41c7/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/6a2740c45f0b77df46d3ff666a1311ae1e53bae714c6c9d9b43e5d5b3ee8ea36/merged
overlay  108G  8.0G   94G   8% /var/lib/docker/overlay2/971f4fc964949623ec54ab137d84dd61882f866a3cac29614813aba8da59da3b/merged
shm       64M     0   64M   0% /var/lib/docker/containers/50c005696bc78bc6acfd3bd29985d5e50fc0a581ad1af461ec7fdc2efe7c3720/mounts/shm
shm       64M  8.0K   64M   1% /var/lib/docker/containers/a93171a356e68b6b1243ac7eb2c71e7ad5a6dd6b32bae16699cd9d96662f1ab5/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/23654ac3d6abab9b53003b0aedd995d1c4f5bf2d952196dbc1a79645fc2a7631/mounts/shm
shm       64M  4.0K   64M   1% /var/lib/docker/containers/3a947eb809954e56fd7389060b06a53661872b237b0688d50effb9c310b01aa4/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/375fcb7c1f6cafd39dc23655ad557b6adb64d1f9605adc517be8ba0a796b9324/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/9ab985236f39303f95e64a8cfa3c15e5b804b236bc5bb227f43a96b3406f1f39/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/57129d214670c370355f2f82531b745105463deee86933c16c53bb0e9b94faa4/mounts/shm
shm       64M  4.0K   64M   1% /var/lib/docker/containers/cdab31646db3d467e1ca5506eda210109bbb50e9b8fee08e7b7d40a5eac77877/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/e31f357fa3738607b350a43cf9863d0e91afcbbe1b6092dce0f7e898d4709491/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/a2d86ff87e3cac7ab7472990dd15ce8666e657160d92d2c786fc6e30bdb8101b/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/1011c988d4f0f7369214f3d8072a95940ee3e1cd07010db33bc6d84bac043ada/mounts/shm
shm       64M     0   64M   0% /var/lib/docker/containers/cc7f72e186bc456ce026124f34d44f4b87a4a995cca2909b6ef18ef57de189da/mounts/shm
The recreation "fixing" the issue doesn't make sense. mergerfs doesn't interact with your data. It's a proxy / overlay.
Could you provide the settings you're using? It's really not possible to comment further without such details.
Are you using the drives out of band of mergerfs? Are drives filling?
Thanks for chiming in. It doesn't make sense to me, either, but I'm not sure what else could be the cause.
The drives are used ONLY in context of the pool, nothing writes data to any of the individual drives. At this point, I am unable to download any new data to ensure that the pool is still filling per policy, but up to this point, yes, all data was written to the drives as I would expect based on the policy I had selected:
I've made a MergerFS pool that spans the entire size of 8 drives. In this pool is a single parent folder called "Share," under which all my storage and media files reside in sub-folders. This "Share" folder is mapped as an SMB share, and it is also mapped to a Docker storage container to which my other Docker containers have access.
I did a fresh install of OMV 4.x about a month ago and recreated this setup. After about a week, I noticed that my NZBGet container was giving Unrar errors indicating that there was not enough free space to unpack downloads (Unrar error 5). Then ruTorrent began giving hash errors: when a new file finished downloading, the subsequent hash check reported missing pieces (Hash check on download completion found bad chunks, consider using \"safe_sync\"). Eventually, I found that deleting the MergerFS pool and creating a new one resolved both problems. So I did that, updated the mappings for Docker, and all was right in the world again. That lasted about two weeks, and once again, overnight, my NZBGet and ruTorrent downloads are failing with the same issues.
I don't want to continue this cycle of having to delete and re-create a new pool every few weeks. I don't know if there's something that triggers this corruption. Any ideas on how to identify the problem? I have run fsck on all drives with no filesystem errors to be found thus far.
You have to either exclude those particular files from being backed up or stop the files from changing during the SnapRAID sync process (shut down those processes until the sync is done, then start them again).
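For example, exclude rules in snapraid.conf look like this (the paths are placeholders; point them at wherever your constantly-changing files actually live):

Code:
# in /etc/snapraid.conf
exclude /appdata/
exclude *.tmp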
I edited that "noexec" part of the code before I even posted here, with no change; after a restart the behavior was the same (and yes, I did check that the changes were actually saved).
PS: While editing 'noexec' I found that the option was present on all of my HDDs, so even after removing it from all of them, I had the same problem.
Can you show us screenshots of your Docker settings? I can't watch @TechnoDadLife's video right now, but it is not as simple as editing your fstab file because OMV will revert these changes based on certain system events. See this thread for the exact steps (again, I apologize if this is what was already suggested).
Is Plex installed with the plugin or in Docker? If Docker, is it installed on your system drive?
Have you tried viewing and, if necessary, changing the permissions on your music folder? The resetperms add-on makes this very simple to do from the OMV web UI.
Better to NOT use filesystem labels.
Is there a different system you'd recommend? All my data disks contain one partition of max size, and I use the filesystem label to indicate where the drive is physically in my case (A1 is top row, first column, C3 is third row, third column, etc.).