Posts by flvinny521

    Thank you, @Adoby. I was able to get the script working using your trick.


    At 3 AM, the master script is run as root using this command:


    cd /usr/sbin && /bin/bash ratsscript.sh


    And then ratsscript.sh looks like this (may be overkill, but it works for me):


    Code
    cd /usr/sbin && /bin/bash ratsscript1.sh    # stop all Docker containers
    cd /usr/sbin && /bin/bash snapscript.sh     # run the SnapRAID script
    cd /usr/sbin && /bin/bash ratsscript2.sh    # start the Docker containers again

    @Adoby, thanks for taking the time to respond. All of the scripts are located in /usr/sbin. I just looked again and didn't notice any reference to /usr/bin; if you could point that out, I will correct it. To be honest, I don't know anything about the "proper" location for scripts, so I just put them in the same location as the SnapRAID plugin's script (which I am not currently using). The Scheduled Job is set up as:


    Screenshot 2019-08-22 at 8.52.00 AM.png


    It appears that OMV takes this information and automatically adds an entry in the cron.d file mentioned in the error message. The contents of that file are:


    Bash
    #!/bin/sh -l
    # This configuration file is auto-generated.
    # WARNING: Do not edit this file, your changes will be lost.
    /usr/sbin/ratsscript.sh


    I have no problem using the command line via SSH, but I prefer to use OMV's built-in features whenever possible, since I know the OS can overwrite certain config files. Also, since everything is located on the OS drive, there should be no issues with running from a non-executable location. As I mentioned, if I manually run the script from the Scheduled Jobs tab, everything works perfectly. I just can't figure out why cron can't locate the file when it is clearly in the correct place and is executable.



    Code
    rats@ratsvault:/$ find /usr/sbin -name "rats*"
    /usr/sbin/ratsscript1.sh
    /usr/sbin/ratsscript.sh
    /usr/sbin/ratsscript2.sh
    rats@ratsvault:/$
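    Edit: a few more checks I can run on the master script itself, since /bin/sh reporting a script as "not found" can also mean the interpreter on its shebang line wasn't found (e.g. because of CRLF line endings from editing on Windows). A sketch of what I have in mind, using my paths from above:


    Code
    ls -l /usr/sbin/ratsscript.sh    # confirm the owner and that the execute bit is set
    file /usr/sbin/ratsscript.sh     # "with CRLF line terminators" here would explain "not found"
    head -1 /usr/sbin/ratsscript.sh | od -c | head -n 2    # inspect the shebang byte by byte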

    Recently, I began using a third-party SnapRAID script in place of the one packaged with the SnapRAID plugin. I ran it on its own, called at 2 AM through a Scheduled Job, for a week or so, until it began failing because it did not have enough available memory. I realized that shutting down all my Docker containers freed up sufficient memory for the script to run, so I am now attempting to automate the whole process.


    I have created a "master" script that executes three individual scripts: one to shut down all Docker containers, one to run the SnapRAID script, and one to start the Docker containers again. I added a Scheduled Job to run this master script at 3 AM every day. When I test it by running it manually from the Scheduled Jobs tab, all three scripts are called in order and I receive output indicating successful completion. However, at 3 AM, I receive an email with the error message:


    /var/lib/openmediavault/cron.d/userdefined-b5b64375-44ac-49f9-8c7f-b498c412336d: 4: /var/lib/openmediavault/cron.d/userdefined-b5b64375-44ac-49f9-8c7f-b498c412336d: /usr/sbin/ratsscript.sh: not found


    Attached are the scripts: master (ratsscript.sh), shut down Docker (ratsscript1.sh), run SnapRAID (snapscript.sh, not my script), and start Docker (ratsscript2.sh). I am absolutely a Linux beginner and have probably overlooked something simple. Any guidance would be appreciated. (I added .txt extensions to allow uploading here.)


    Edit: Pasting my scripts here -


    https://pastebin.com/kgs7kd9V (master)


    https://pastebin.com/8mW02Uhy (script 1)


    https://pastebin.com/G3rjgWpc (script 2)
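    For anyone who doesn't want to click through, the two Docker wrappers boil down to something like this (a simplified sketch; the pastebin links above have the exact contents):


    Code
    # ratsscript1.sh (sketch): stop every running container before SnapRAID runs
    docker stop $(docker ps -q)

    # ratsscript2.sh (sketch): start all containers again afterwards
    # (note this also starts any container that was already stopped beforehand)
    docker start $(docker ps -aq)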

    Well, I spoke too soon, because NZBGet has begun reporting the "out of space" errors again. I will have to do some more digging.


    Edit: After more research, disk quotas seem to be the issue. I have never set up any quotas since installing OMV, but I noticed NZBGet throwing an error about exceeding disk quotas. So I disabled them using sudo quotaoff -a and re-ran a few downloads that had just failed to unpack; this time, all three downloads unpacked successfully. Is there a more permanent way to ensure that quotas do not get re-enabled by a reboot or other system event? Can I simply remove the quota arguments in the mntent sections of the config.xml file?
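    (For reference, quotaon with -p only prints state without changing anything, so it's a safe way to verify the change took:)


    Code
    sudo quotaoff -a     # disable quotas on all mounted filesystems (what I ran)
    sudo quotaon -pa     # print the current on/off quota state for each filesystem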

    @trapexit I played around in the terminal trying to find a command that would reproduce the error and had no luck. Moves, copies, unpacking, etc. all worked fine; the issues continued to appear only within Docker containers.


    For the affected containers, I removed the mapping of a storage container and mapped the path directly instead, and it seems the problem may be resolved (at least for now; I have been downloading new media for a couple of days without issue). So either the Docker plugin isn't playing nice with MergerFS, or vice versa.
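    To illustrate the change (the container and image names here are made up for the example; the real ones live in my OMV Docker plugin settings):


    Code
    # Before (failing): apps inherited the pool from a data-volume container
    docker run -d --name storage -v /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf/Share:/share my-storage-image
    docker run -d --volumes-from storage my-nzbget-image

    # After (working): bind-mount the pool path into each app container directly
    docker run -d -v /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf/Share:/share my-nzbget-image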


    This works:



    This doesn't:


    Your post of the fstab entry is truncated. Don't use nano for this; try cat instead.


    Whoops, edited. And this wouldn't fit in my previous post.



    Thank you, @trapexit. I didn't want to waste your time poring over my data if I found another culprit. However, so far I've been unable to make any progress, so here we are:


    I am not sure how to run strace on a Docker container that's running and producing these errors. Is there a simple command I could run in the terminal that could be traced instead?
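    I'm assuming a plain cp onto the pool is close enough to what the containers do, in which case something like this should capture the failing syscall (pool path taken from my fstab below):


    Code
    # trace a copy onto the pool, logging file-related syscalls to /tmp/cp.trace
    strace -f -e trace=file -o /tmp/cp.trace cp testfile /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf/Share/
    grep -E 'ENOSPC|EDQUOT' /tmp/cp.trace    # look for the failing syscall and its errno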


    Code
    uname -a
    Linux ratsvault 4.19.0-0.bpo.1-amd64 #1 SMP Debian 4.19.12-1~bpo9+1 (2018-12-30) x86_64 GNU/Linux


    And the entry in fstab:


    Code
    /srv/dev-disk-by-label-b1:/srv/dev-disk-by-label-b2:/srv/dev-disk-by-label-b3:/srv/dev-disk-by-label-b4:/srv/dev-disk-by-label-a1:/srv/dev-disk-by-label-a2:/srv/dev-disk-by-label-a3:/srv/dev-disk-by-label-a4 /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf fuse.mergerfs defaults,allow_other,use_ino,dropcacheonclose=true,category.create=mfs,minfreespace=20G 0 0
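    If I'm reading the mergerfs docs correctly, the runtime settings can also be read back through its control file inside the mount, which would confirm the active create policy without trusting fstab (this needs getfattr from the attr package):


    Code
    getfattr -n user.mergerfs.category.create /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf/.mergerfs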


    I just deleted and re-created the mergerfs pool again, and immediately after mapping everything to the new pool, data can be downloaded/unpacked via nzbget and hash checks no longer fail in rtorrent/rutorrent. The strange part is that the data itself seemed to download fine; it was just the unpacking/verification stages that were failing.


    I have no doubt that there is some issue external to mergerfs that is causing this behavior, I just don't know where to begin.

    Unfortunately, there is nothing I can do without additional information. If it's saying you're out of space, then something is returning that. The only time mergerfs explicitly returns ENOSPC is when all drives have been filtered out and at least one of the reasons was minfreespace.


    Next time an error occurs, please gather the information as mentioned in the docs, or at least `df -h`.


    I must be overlooking something because I can't seem to find instructions for capturing info in the mergerfs docs.


    Edit: For some reason, it appears as though data has begun accumulating on disk a4 exclusively instead of spreading across the disks as intended.
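    Though now I wonder whether this is just category.create=mfs doing its job: mfs sends each new file to whichever branch currently has the most free space, so a4 taking every write would actually be expected for as long as it has the most room. A quick per-branch comparison (label paths from my fstab):


    Code
    df -h /srv/dev-disk-by-label-*    # compare free space across the individual branches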


    Here is the output from df -h


    The recreation "fixing" the issue doesn't make sense. mergerfs doesn't interact with your data. It's a proxy / overlay.


    Could you provide the settings you're using? It's really not possible to comment further without such details.


    Are you using the drives out of band of mergerfs? Are drives filling?


    Thanks for chiming in. It doesn't make sense to me, either, but I'm not sure what else could be the cause.


    The drives are used ONLY in the context of the pool; nothing writes data to any of the individual drives directly. At this point I am unable to download any new data to confirm that the pool is still filling per policy, but up to now, yes, all data was written to the drives as I would expect based on the policy I selected:


    Clipboard-1.jpg

    I've made a MergerFS pool that spans all eight drives. In this pool is a single parent folder called "Share," under which all my storage and media files reside in sub-folders. This "Share" folder is exported as an SMB share, and it is also mapped into a Docker storage container to which my other Docker containers have access.


    I did a fresh install of OMV 4.x about a month ago and recreated this setup. After about a week, I noticed that my NZBGet container was giving Unrar errors indicating there was not enough free space to unpack downloads (Unrar error 5). Then ruTorrent started giving hash errors: when a new file was downloaded, the subsequent hash check reported missing pieces ("Hash check on download completion found bad chunks, consider using \"safe_sync\""). Eventually, I found that deleting the MergerFS pool and creating a new one resolved both problems. So I did that, updated the mappings for Docker, and all was right in the world again. That lasted about two weeks, and once again, overnight, my NZBGet and ruTorrent downloads are failing with the same issues.


    I don't want to continue this cycle of deleting and re-creating the pool every few weeks, and I don't know what triggers this corruption. Any ideas on how to identify the problem? I have run fsck on all drives, with no filesystem errors found thus far.
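    (For completeness, this is the kind of check I ran per data disk, assuming ext4 filesystems; the filesystem has to be unmounted first, and -f forces a full pass even when the disk is flagged clean:)


    Code
    sudo umount /srv/dev-disk-by-label-a1    # fsck must not run on a mounted filesystem
    sudo fsck -f /dev/disk/by-label/a1       # force a full check even if marked clean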

    I edited that "noexec" part of the code before I even posted here, with no change; after a restart it was the same (and yes, I did check that the changes were made).
    PS: While editing "noexec" I found that the option was present on all my HDDs, so even after deleting it from all of them, I had the same problem.


    Can you show us screenshots of your Docker settings? I can't watch @TechnoDadLife's video right now, but it is not as simple as editing your fstab file, because OMV will revert those changes after certain system events. See this thread for the exact steps (again, I apologize if this is what was already suggested).

    Better to NOT USE filesystem labels.


    Is there a different scheme you'd recommend? All my data disks contain a single full-size partition, and I use the filesystem label to indicate where the drive physically sits in my case (A1 is top row, first column; C3 is third row, third column; etc.).