Posts by flvinny521

    Zoki, I just meant that after mounting the FS in the GUI and clicking to accept the changes, there is no update to fstab. Also, the new FS does not appear in /srv/.


    I have not tried to manually mount anything, I'll look that up and give it a try.


    Edit - could this have anything to do with the filesystem paths changing from when the OS was installed? In other words, OMV is installed on /dev/sdb*, and that probably keeps changing as I connect new drives to the motherboard.
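

    For example, a quick way to see which /dev/sdX name each disk currently has versus its stable label/UUID:


    Code
    # Device names like /dev/sdb can shift between boots as drives are added or removed;
    # labels and UUIDs stay constant and are what fstab entries should reference
    lsblk -o NAME,LABEL,UUID,MOUNTPOINT
    ls -l /dev/disk/by-label/ /dev/disk/by-uuid/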


    Final edit - Mounting the drive manually actually does work (the drive is immediately accessible through the /srv/sda1 mount point I selected), but the FS is not displayed anywhere in the OMV GUI. Also, after rebooting, the /srv/sda1 mount point is empty when viewed in the terminal, and mount shows that the mount point is no longer in use:


    Zoki - Here is fstab prior to mounting a "troublesome" filesystem:



    After mounting one, fstab looks identical, so it appears that the new filesystem is never committed. Since I started running into trouble, I have only been connecting a single drive at a time. Some more information that may or may not mean anything:


    I'm using the Proxmox kernel. I have most of the drives (but not all) connected through a SAS expander, but all the drives have this issue, even the ones connected directly to a mobo SATA port. greno, using an Incognito window or a new browser does not resolve the issue.

    Thanks greno, I'll give that a shot in a couple of hours, but the fact that SSH connections are rejected would lead me to believe there's more going on. Also, some additional info:


    If I have already established an SSH connection before I mount the filesystem and accept the changes, it stays alive, but trying to use sudo or switch to root gives me an error message that the "effective UID is not 0." Any new SSH connection initiated once the 502 errors start is always closed unexpectedly.

    Good evening, I recently upgraded my server hardware and decided I would start with a fresh installation of OMV 6. I've overcome a few minor issues along the way, but I seem to be stuck at mounting the existing filesystems from my 10 data drives (all of which were created in previous versions of OMV).


    I installed OMV6 with only the system drive plugged in, and since then I have been able to mount the filesystem on my secondary SSD. All my other drives are HDDs, and immediately upon mounting any of their filesystems, the GUI becomes unresponsive with a "502 - Bad Gateway" error, and I can no longer SSH into the machine. If I manually shut the server down, log back in to the GUI and revert the changes, then everything is fine.


    I'd appreciate any tips you could provide to get those filesystems working again!


    Edit: If I have already established an SSH connection before I mount the filesystem and accept the changes, it stays alive, but trying to use sudo or switch to root gives me an error message that the "effective UID is not 0." Any new SSH connection initiated once the 502 errors start is always closed unexpectedly.

    I would be glad if it were possible to select percentage-based policies (as of now, I see eppfrd, msppfrd and pfrd available).


    Thank you for the fantastic work on OMV and kind regards :)


    Have you tested any of these percentage-based policies yet? I generally have used MFS, but had been hoping for percentage-based writing for a while. I guess this is not exactly the same as writing to whichever disk has the lowest percentage used, since it still seems to choose the disk(s) to which to write based on gross free space.
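

    For reference, in a plain mergerfs fstab entry the switch is just a change to the create policy option; a minimal sketch with made-up branch paths:


    Code
    # Hypothetical example: two branches pooled with a percentage-based create policy
    /mnt/disk1:/mnt/disk2 /mnt/pool fuse.mergerfs defaults,allow_other,use_ino,category.create=pfrd,minfreespace=20G 0 0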

    Thank you, @Adoby. I was able to get the script working using your trick.


    At 3 AM, the master script is run as root using this command:


    cd /usr/sbin && /bin/bash ratsscript.sh


    And then ratsscript.sh looks like this (may be overkill, but it works for me):


    Code
    # stop all Docker containers
    cd /usr/sbin && /bin/bash ratsscript1.sh
    
    # run the SnapRAID maintenance script
    cd /usr/sbin && /bin/bash snapscript.sh
    
    # start the Docker containers again
    cd /usr/sbin && /bin/bash ratsscript2.sh
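

    For completeness, the two Docker scripts are basically one-liners; roughly like this (a sketch of the idea rather than the exact contents of the pastebinned scripts):


    Code
    #!/bin/bash
    # ratsscript1.sh (sketch): record which containers are running, then stop them
    docker ps -q > /tmp/running_containers
    docker stop $(cat /tmp/running_containers)


    Code
    #!/bin/bash
    # ratsscript2.sh (sketch): start the containers that were stopped earlier
    docker start $(cat /tmp/running_containers)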

    @Adoby, thanks for taking the time to respond. All of the scripts are located in /usr/sbin. I just looked again and didn't notice any reference to /usr/bin; if you could point that out, I will correct it. To be honest, I don't know anything about the "proper" location for scripts, so I just put them in the same location as the SnapRaid plugin's script (which I am not currently using). The Scheduled Job is set up as:



    It appears that OMV takes this information and automatically adds an entry in the cron.d file mentioned in the error message. The contents of that file are:


    Bash
    #!/bin/sh -l
    # This configuration file is auto-generated.
    # WARNING: Do not edit this file, your changes will be lost.
    /usr/sbin/ratsscript.sh


    I have no problem using the command line via SSH, but I prefer to use OMV built-in features whenever possible since I know the OS can overwrite certain config files. Also, since everything is located on the OS drive, there should be no issues with running from a non-executable location. As I mentioned, if I manually run the script from the Scheduled Job tab, everything works perfectly fine. I just can't figure out why cron can't locate the file when it is clearly located in the correct place and is executable.
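

    In case it helps anyone hitting a similar "not found" from cron, a few checks along these lines can narrow it down (paths match my setup; adjust as needed):


    Code
    # confirm the script exists and is executable
    ls -l /usr/sbin/ratsscript.sh

    # inspect the shebang; hidden carriage returns (CRLF line endings) here can
    # also produce a misleading "not found" from /bin/sh
    head -n 1 /usr/sbin/ratsscript.sh | cat -A

    # run it the same way the generated cron file does, to reproduce interactively
    /bin/sh -l -c /usr/sbin/ratsscript.sh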



    Code
    rats@ratsvault:/$ find /usr/sbin -name "rats*"
    /usr/sbin/ratsscript1.sh
    /usr/sbin/ratsscript.sh
    /usr/sbin/ratsscript2.sh
    rats@ratsvault:/$

    Recently, I began using a third-party SnapRAID script in place of the one packaged with the SnapRAID plugin. I used this on its own, called at 2AM through a Scheduled Job, for a week or so, until it began failing due to not having enough available memory. I realized that shutting down all my Docker containers freed up sufficient memory for the script to run, so I am now attempting to automate the whole process.


    I have created a "master" script that executes three individual scripts to shut down all Docker containers, then run the SnapRAID script, and finally, to start the Docker containers again. I added a Scheduled Job to run this master script at 3AM every day. When I test it by running manually from the Scheduled Job tab, all three scripts are called in order and I receive output indicating successful completion of the process. However, at 3AM, I receive an email with the error message:


    /var/lib/openmediavault/cron.d/userdefined-b5b64375-44ac-49f9-8c7f-b498c412336d: 4: /var/lib/openmediavault/cron.d/userdefined-b5b64375-44ac-49f9-8c7f-b498c412336d: /usr/sbin/ratsscript.sh: not found


    Attached are the scripts: master (ratsscript.sh), shut down Docker (ratsscript1.sh), run SnapRAID (snapScript.sh, not my script), and start Docker (ratsscript2.sh). I am absolutely a Linux beginner and probably overlooked something simple. Any guidance would be appreciated. (Added .txt extensions to allow upload here).


    Edit: Pasting my scripts here -


    https://pastebin.com/kgs7kd9V (master)


    https://pastebin.com/8mW02Uhy (script 1)


    https://pastebin.com/G3rjgWpc (script 2)

    Well, I spoke too soon, because NZBGet has begun reporting the "out of space" errors again. I'll have to do some more digging.


    Edit: Upon more research, disk quotas seem to be the issue. I have never set up any quotas since installing OMV, but I noticed NZBGet throwing an error about exceeding disk quotas. So, I disabled them using sudo quotaoff -a and re-ran a few downloads that had just failed to unpack; this time, all three downloads unpacked successfully. Is there a more permanent way to ensure that quotas do not get enabled automatically on a reboot or other system event? Can I simply remove the quota arguments in the mntent sections of the config.xml file?
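

    For reference, these are the kinds of commands involved (from quota-tools; the last one is the one I actually ran):


    Code
    # show whether quotas are currently enabled on each filesystem
    sudo quotaon -pa

    # report current usage against any configured limits
    sudo repquota -a

    # turn quotas off everywhere for the current boot
    sudo quotaoff -a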

    @trapexit I played around in the terminal trying to figure out a command that would cause an error and had no luck. Moves, copies, unpacking, etc. all worked fine, and the issues continued to be experienced only within Docker containers.


    For the containers that were affected, I removed the mapping of a storage container and mapped the path instead, and it seems the problem may be resolved (at least for now, I have been downloading new media for a couple of days without issue). So either the Docker plugin isn't playing nice with MergerFS, or vice versa.
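

    In other words, instead of passing the data through a dedicated storage container, I now bind-mount the pool path directly into each container; conceptually something like this (image name and internal path are only examples):


    Code
    # before (roughly): volumes inherited from a dedicated storage container
    # docker run -d --name nzbget --volumes-from storage linuxserver/nzbget

    # after: the mergerfs pool path mapped straight into the container
    docker run -d --name nzbget \
      -v /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf/Share:/downloads \
      linuxserver/nzbget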


    This works:



    This doesn't:


    Your post of the fstab entry is truncated. Don't use nano for this; try cat instead.


    Whoops, edited. And this wouldn't fit in my previous post.



    Thank you @trapexit. I didn't want to waste your time poring through my data if I found another culprit. However, so far, I've been unable to make any progress, so here we are:


    I am not sure how to run strace on a Docker container that's running and resulting in these errors. Any simple command I could run in terminal that could be traced instead?
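

    I'm thinking of something along these lines, if that would be useful - paths and container name are just examples:


    Code
    # trace a simple copy onto the pool, following child processes, and log to a file
    strace -f -o /tmp/mergerfs-copy.trace cp /tmp/testfile /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf/Share/

    # or find the host PID of a containerized process and attach to it
    docker top nzbget
    sudo strace -f -p <PID> -o /tmp/nzbget.trace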


    Code
    uname -a
    Linux ratsvault 4.19.0-0.bpo.1-amd64 #1 SMP Debian 4.19.12-1~bpo9+1 (2018-12-30) x86_64 GNU/Linux


    And the entry in fstab:


    Code
    /srv/dev-disk-by-label-b1:/srv/dev-disk-by-label-b2:/srv/dev-disk-by-label-b3:/srv/dev-disk-by-label-b4:/srv/dev-disk-by-label-a1:/srv/dev-disk-by-label-a2:/srv/dev-disk-by-label-a3:/srv/dev-disk-by-label-a4 /srv/ceb94f6f-2407-4c37-9eb3-a737c3af08cf fuse.mergerfs defaults,allow_other,use_ino,dropcacheonclose=true,category.create=mfs,minfreespace=20G 0 0


    I just deleted and re-created a new mergerfs pool again, and immediately after mapping everything to the new pool, data can be downloaded/unpacked via nzbget and hash checks no longer fail in rtorrent/rutorrent. The strange part is that the data itself seemed to download fine; it was just the unpacking/verification stages that were failing.


    I have no doubt that there is some issue external to mergerfs that is causing this behavior, I just don't know where to begin.

    Unfortunately, there is nothing I can do without additional information. If it's saying you're out of space then something is returning that. The only time mergerfs explicitly returns ENOSPC is when all drives become filtered and at least one reason was minfreespace.


    Next time an error occurs please gather the information as mentioned in the docs or at least `df -h`.
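

    For example, something like this, using the branch paths from your fstab entry:


    Code
    # free space per branch; with category.create=mfs and minfreespace=20G,
    # any branch under 20G free is filtered out for new files, and ENOSPC is
    # only returned if every branch ends up filtered
    df -h /srv/dev-disk-by-label-a1 /srv/dev-disk-by-label-a2 \
          /srv/dev-disk-by-label-a3 /srv/dev-disk-by-label-a4 \
          /srv/dev-disk-by-label-b1 /srv/dev-disk-by-label-b2 \
          /srv/dev-disk-by-label-b3 /srv/dev-disk-by-label-b4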


    I must be overlooking something because I can't seem to find instructions for capturing info in the mergerfs docs.


    Edit: For some reason, it appears as though data has begun accumulating on disk a4 exclusively instead of spreading across the disks as intended.


    Here is the output from df -h


    The recreation "fixing" the issue doesn't make sense. mergerfs doesn't interact with your data. It's a proxy / overlay.


    Could you provide the settings you're using? It's really not possible to comment further without such details.


    Are you using the drives out of band of mergerfs? Are drives filling?


    Thanks for chiming in. It doesn't make sense to me, either, but I'm not sure what else could be the cause.


    The drives are used ONLY in the context of the pool; nothing writes data to any of the individual drives. At this point, I am unable to download any new data to confirm that the pool is still filling per policy, but up to this point, yes, all data was written to the drives as I would expect based on the policy I had selected:


    I've made a MergerFS pool that spans the full capacity of 8 drives. In this pool is a single parent folder called "Share," under which all my storage and media files reside in sub-folders. This "Share" folder is mapped as an SMB share, and it is also mapped to a Docker storage container to which my other Docker containers have access.


    I did a fresh install of OMV 4.x about a month ago and recreated this setup. After about a week, I noticed that my NZBGet container was giving Unrar errors indicating that there was not enough free space to unpack downloads (Unrar error 5). Then ruTorrent started giving hash errors: when a new file was downloaded, the subsequent hash check reported missing pieces (Hash check on download completion found bad chunks, consider using \"safe_sync\"). Eventually, I found that deleting the MergerFS pool and creating a new one resolved both problems. So I did that, updated the mappings for Docker, and all was right in the world again. This lasted about two weeks, and once again, overnight, my NZBGet and ruTorrent downloads are failing with the same issues.


    I don't want to continue this cycle of having to delete and re-create the pool every few weeks, and I don't know what triggers this corruption. Any ideas on how to identify the problem? I have run fsck on all drives, with no filesystem errors found thus far.