Posts by flvinny521


    Do you have any updates on this issue? I am experiencing the same problem.

    On OMV6 I have the same problem. The /run/php folder is not present after rebooting the system.

    The only plugin I'm using is the remotemount plugin. When I remove it, everything works and /run/php is still present after rebooting.

    For now I'm using a cronjob to reinstall php7.4-fpm @reboot. Not nice, but working for now.
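
    In case it helps anyone, the workaround is just a root cron entry along these lines (sketch only; adjust the package name to match your PHP version):


    Code
    # Temporary workaround (sketch): reinstall php-fpm at every boot so /run/php is recreated.
    # Added to root's crontab via "crontab -e".
    @reboot apt-get install --reinstall -y php7.4-fpm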

    Does anyone have an idea why the folder is being deleted?



    Any update on this? I'm having the same problem now, and I am also not using the flashmemory plugin (OMV6 installed on an NVME drive).

    It definitely didn't. flashmemory only copies files between a tmpfs bind mount and a mounted filesystem. This happens *after* the filesystems are mounted. Now, if your system is low on memory and the copy from the mounted filesystem to the bind mount fills the tmpfs mount, it could cause problems. The sync folder2ram does between tmpfs and the mounted filesystem at shutdown is very important, so bad things can happen if that sync never happens. But it is hard to say while your system is mounted read-only. You are using mergerfs as well. If a filesystem wasn't ready when the pool was to be assembled, the system could possibly be mounted read-only. kern.log or messages in /var/log/ might be good to look at.
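
    Something along these lines would show whether the kernel forced the root filesystem read-only after an error (log file names can vary by setup):


    Code
    # Look for filesystem errors or a forced read-only remount around the time of the problem.
    grep -iE 'remount.*read-only|ext4.*(error|abort)' /var/log/kern.log /var/log/messages
    # The kernel ring buffer usually shows the same thing for the current boot.
    dmesg | grep -iE 'remount|read-only|ext4'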


    Thanks for chiming in. While I had the mergerfs plugin installed, I hadn't actually created a pool with it yet, since the filesystems that were going to be part of the pool couldn't be mounted because of the issues discussed in this thread.


    Ultimately, I was able to get my setup to work fine just by avoiding the flashmemory plugin (after 3 fresh installs using it that all failed), so I have to imagine it's somehow involved. As long as nobody else is having issues, maybe it was a fluke or an issue with my system drive, who knows...

    Well my disk was giving some errors regarding the superblock having an invalid journal and a corrupt partition table, so I used GParted to wipe the OS drive and install OMV6 once again. This time I did everything EXCEPT install the flashmemory plugin and have had no issues whatsoever. I think this is the likely culprit by process of elimination. Thanks for spending so much time working through this with me.


    ryecoaaron, any idea how flashmemory would render my root drive read-only?

    (Edit - See below, not fixed as I had hoped) Since I had some time to kill and nothing to lose, I did a fresh installation of OMV 6. I followed almost the exact same process, but this time, I was able to mount all my filesystems without issue. Either the whole thing was a fluke, or one of the following things is what caused the error (I didn't do any of these before mounting the filesystems, unlike the first time when I experienced all the problems):


    1. Changing my kernel to Proxmox and removing the non-Proxmox kernel
    2. Installing omv-extras and the following plugins: flashmemory, mergerfs, resetperms, snapraid, and symlinks


    Edit - Well, now I am unable to gain access to the GUI (Error 500 - Internal Server Error, Failed to Connect to Socket). This time I installed omv-extras and all the plugins listed above AFTER everything was mounted. I have no evidence to support this, but I feel like it may be flashmemory. I noticed that it was not running (red status on the dashboard), realized I never rebooted after installing, so I rebooted to see if the service would run. Immediately I was faced with this new issue.


    I found this thread which sounded similar, and tried the command that was suggested there:


    Code
    dpkg --configure -a
    dpkg: error: unable to access the dpkg database directory /var/lib/dpkg: Read-only file system


    And then, to test this, did the following:

    Code
    mkdir test
    mkdir: cannot create directory ‘test’: Read-only file system


    So, somehow my root filesystem has been turned read-only. Thoughts?
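
    For reference, these are the kind of commands that show the current state and temporarily get write access back; remounting is only a band-aid until the underlying cause is found:


    Code
    # Show how the root filesystem is currently mounted (look for "ro" in the options).
    findmnt -o TARGET,SOURCE,FSTYPE,OPTIONS /
    # Temporarily remount read-write; if the kernel flips it back to read-only,
    # the filesystem likely has errors and needs an offline fsck.
    mount -o remount,rw /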

    Thanks votdev, I checked the log and these are all the entries from the time I rebooted until mounting the filesystem in the GUI and before I accepted the changes. Nothing here stands out to my eyes: https://pastebin.com/6cyfDV4k.


    After clicking the check box and confirming the changes, resulting in the errors described earlier, a great deal of the end of the log is actually gone completely. The timestamp for the latest entries is a full 5 hours earlier than the previous log: https://pastebin.com/1KwKLXrq.

    See the output below (shortened for sanity). Afterwards, I rebooted and tried to mount again, same issue.

    Code
    omv-salt stage run prepare
    <snip>
    
    Summary for debian
    ------------
    Succeeded: 6 (changed=5)
    Failed:    0
    ------------
    Total states run:     6
    Total run time:  16.266 s


    Code
    omv-salt stage run deploy
    <snip>
    
    Summary for debian
    ------------
    Succeeded: 2 (changed=2)
    Failed:    0
    ------------
    Total states run:     2
    Total run time:  31.393 s

    Thanks for the heads up that mount is only temporary and does not persist through a reboot. I am open to suggestions on where to go from here.


    One of the drives is brand new with no data on it, so I tried a few things since there was no risk of data loss. First, the drive itself was visible in the drives section of the GUI, so I tried to create a new filesystem on it, but the drive didn't show up in the drop-down menu. I assume this is because the existing FS was being detected. I then wiped the drive and created a new filesystem directly in OMV6 (this was much faster than OMV5 on a 14TB drive, by the way). This newly created filesystem could also NOT be mounted; I experienced the exact same issues as all the others.

    Zoki, I just meant that after mounting the FS in the GUI and clicking to accept the changes, there is no update to fstab. Also, the new FS does not appear in /srv/.


    I have not tried to manually mount anything, I'll look that up and give it a try.


    Edit - could this have anything to do with the filesystem paths changing from when the OS was installed? In other words, OMV is installed on /dev/sdb*, and that probably keeps changing as I connect new drives to the motherboard.
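
    If that is a possible cause, comparing the stable identifiers against what fstab references should show it (illustrative commands, not my exact output):


    Code
    # List block devices with their stable identifiers; /dev/sdX names can change
    # between boots, but UUIDs and labels do not.
    lsblk -o NAME,SIZE,FSTYPE,UUID,LABEL,MOUNTPOINT
    # OMV normally references filesystems by UUID rather than by /dev/sdX.
    grep -i uuid /etc/fstab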


    Final edit - Mounting the drive manually actually does work (the drive is immediately accessible through the /srv/sda1 mount point I selected), but the FS is not displayed anywhere in the OMV GUI. Also, after rebooting, the /srv/sda1 mount point is empty when viewing it in the terminal. Mount shows that the mount point is no longer in use:


    Zoki - Here is fstab prior to mounting a "troublesome" filesystem:



    After mounting one, fstab looks identical, so it appears that the new filesystem is never committed. Since I started running into trouble, I have only been connecting a single drive at a time. Some more information that may or may not mean anything:


    I'm using the Proxmox kernel. I have most of the drives (but not all) connected through a SAS expander, but all the drives have this issue, even the ones connected directly to a motherboard SATA port.

    greno, using an Incognito window or a new browser does not resolve the issue.

    Thanks greno, I'll give that a shot in a couple of hours, but the fact that SSH connections are rejected would lead me to believe there's more going on. Also, some additional info:


    If I have already established an SSH connection before I mount the filesystem and accept the changes, it will stay alive, but trying to use sudo or switch to root gives me an error message that the "effective UID is not 0." New SSH connections initiated once the 502 errors start are always closed unexpectedly.
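
    As far as I can tell, the "effective UID is not 0" message usually means sudo has lost its setuid bit or sits on a filesystem mounted with nosuid; something like this would show which (diagnostic sketch):


    Code
    # sudo should show the setuid bit: -rwsr-xr-x ... root root
    ls -l /usr/bin/sudo
    # Check the mount options of the filesystem sudo lives on (a "nosuid" flag would explain it).
    findmnt -T /usr/bin/sudo -o TARGET,SOURCE,OPTIONS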

    Good evening, I recently upgraded my server hardware and decided I would start with a fresh installation of OMV 6. I've overcome a few minor issues along the way, but I seem to be stuck at mounting the existing filesystems from my 10 data drives (all of which were created in previous versions of OMV).


    I installed OMV6 with only the system drive plugged in, and since then, I have been able to mount the filesystem on my secondary SSD. All my other drives are HDDs, and immediately upon mounting any of their filesystems, the GUI turns unresponsive with a "502 - Bad Gateway" error, and I can no longer SSH into the machine. If I manually shut the server down, log back in to the GUI and revert the changes, then everything is fine.


    I'd appreciate any tips you could provide to get those filesystems working again!


    Edit: If I have already established an SSH connection before I mount the filesystem and accept the changes, it will stay alive, but trying to use sudo or switch to root gives me an error message that the "effective UID is not 0." New SSH connections initiated once the 502 errors start are always closed unexpectedly.

    I would be glad if it were possible to select percentage-based policies (at the moment I see that eppfrd, msppfrd and pfrd are available).


    Thank you for the fantastic work on OMV and kind regards :)


    Have you tested any of these percentage-based policies yet? I generally have used MFS, but had been hoping for percentage-based writing for a while. I guess this is not exactly the same as writing to whichever disk has the lowest percentage used, since it still seems to choose the disk(s) to which to write based on gross free space.
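
    For anyone else who wants to experiment before the plugin exposes them, the create policy is just a mergerfs mount option, so something along these lines should work (placeholder branch and pool paths):


    Code
    # Illustrative fstab entry selecting a percentage-based create policy.
    /mnt/disk1:/mnt/disk2  /srv/mergerfs/pool  fuse.mergerfs  defaults,allow_other,minfreespace=20G,category.create=pfrd  0 0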

    Thank you, @Adoby. I was able to get the script working using your trick.


    At 3 AM, the master script is run as root using this command:


    cd /usr/sbin && /bin/bash ratsscript.sh


    And then ratsscript.sh looks like this (may be overkill, but it works for me):


    Code
    # Stop all Docker containers first.
    cd /usr/sbin && /bin/bash ratsscript1.sh

    # Run the SnapRAID script.
    cd /usr/sbin && /bin/bash snapscript.sh

    # Start the Docker containers again.
    cd /usr/sbin && /bin/bash ratsscript2.sh

    @Adoby, thanks for taking the time to respond. All of the scripts are located in /usr/sbin. I just looked again and didn't notice any reference to /usr/bin; if you could point that out, I will correct it. To be honest, I don't know anything about the "proper" location for scripts, so I just put them in the location of the SnapRaid plugin's script (which I am not currently using). The Scheduled Job is set up as:



    It appears that OMV takes this information and automatically adds an entry in the cron.d file mentioned in the error message. The contents of that file are:


    Bash
    #!/bin/sh -l
    # This configuration file is auto-generated.
    # WARNING: Do not edit this file, your changes will be lost.
    /usr/sbin/ratsscript.sh


    I have no problem using the command line via SSH, but I prefer to use OMV built-in features whenever possible since I know the OS can overwrite certain config files. Also, since everything is located on the OS drive, there should be no issues with running from a non-executable location. As I mentioned, if I manually run the script from the Scheduled Job tab, everything works perfectly fine. I just can't figure out why cron can't locate the file when it is clearly located in the correct place and is executable.



    Code
    rats@ratsvault:/$ find /usr/sbin -name "rats*"
    /usr/sbin/ratsscript1.sh
    /usr/sbin/ratsscript.sh
    /usr/sbin/ratsscript2.sh
    rats@ratsvault:/$
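
    One thing worth ruling out: a "not found" error for a script that clearly exists can be caused by Windows-style CRLF line endings (or a bad interpreter path) on the shebang line, which makes sh fail to find the interpreter. A quick way to check and fix (sketch):


    Code
    # Show the first line with hidden characters; "^M" at the end would indicate CRLF endings.
    head -n1 /usr/sbin/ratsscript.sh | cat -A
    file /usr/sbin/ratsscript.sh
    # Strip CRLF endings in place if they are present.
    sed -i 's/\r$//' /usr/sbin/ratsscript.sh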

    Recently, I began using a third-party SnapRAID script in place of the one packaged with the SnapRAID plugin. I used this on its own, called at 2AM through a Scheduled Job, for a week or so, until it began failing due to not having enough available memory. I realized that shutting down all my Docker containers freed up sufficient memory for the script to run, so I am now attempting to automate the whole process.


    I have created a "master" script that executes three individual scripts: one to shut down all Docker containers, one to run the SnapRAID script, and one to start the Docker containers again. I added a Scheduled Job to run this master script at 3AM every day. When I test it by running it manually from the Scheduled Job tab, all three scripts are called in order and I receive output indicating successful completion of the process. However, at 3AM, I receive an email with the error message:


    Code
    /var/lib/openmediavault/cron.d/userdefined-b5b64375-44ac-49f9-8c7f-b498c412336d: 4: /var/lib/openmediavault/cron.d/userdefined-b5b64375-44ac-49f9-8c7f-b498c412336d: /usr/sbin/ratsscript.sh: not found


    Attached are the scripts: master (ratsscript.sh), shut down Docker (ratsscript1.sh), run SnapRAID (snapScript.sh, not my script), and start Docker (ratsscript2.sh). I am absolutely a Linux beginner and probably overlooked something simple. Any guidance would be appreciated. (Added .txt extensions to allow upload here).


    Edit: Pasting my scripts here -


    https://pastebin.com/kgs7kd9V (master)


    https://pastebin.com/8mW02Uhy (script 1)


    https://pastebin.com/G3rjgWpc (script 2)
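
    In rough terms, the flow the master script implements looks like this (simplified sketch, not the exact contents of the pastebins):


    Code
    #!/bin/bash
    # Remember which containers are currently running, then stop them to free memory.
    running=$(docker ps -q)
    [ -n "$running" ] && docker stop $running

    # Run the third-party SnapRAID script.
    /bin/bash /usr/sbin/snapscript.sh

    # Start the previously running containers again.
    [ -n "$running" ] && docker start $running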

    Well, I spoke too soon, because NZBGet has begun reporting the "out of space" errors again. I will have to do some more digging.


    Edit: Upon more research, disk quotas seem to be the issue. I have never set up any quotas since installing OMV, but I noticed NZBGet throwing an error about exceeding disk quotas. So, I disabled them using sudo quotaoff -a and re-ran a few downloads that had just failed to unpack. This time, all three separate downloads unpacked successfully. Is there a more permanent way to ensure that quotas do not get enabled automatically on a reboot or other system event? Can I simply remove the quota arguments in the mntent sections of the config.xml file?
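
    For reference, this is roughly how the quota state can be checked after a reboot, to see whether something re-enabled them (sketch):


    Code
    # Print whether user/group quotas are currently enabled on each filesystem.
    quotaon -pa
    # Summarize current quota usage and limits, if any are defined.
    repquota -a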

    @trapexit I played around in the terminal trying to figure out a command that would cause an error and had no luck. Moves, copies, unpacking, etc. all worked fine, and the issues continued to be experienced only within Docker containers.


    For the containers that were affected, I removed the mapping of a storage container and mapped the path instead, and it seems the problem may be resolved (at least for now, I have been downloading new media for a couple of days without issue). So either the Docker plugin isn't playing nice with MergerFS, or vice versa.
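
    To illustrate the difference (container and image names are placeholders, not my exact setup), the change was essentially from reusing the volumes of a separate storage container to bind-mounting the pool path directly:


    Code
    # What was failing: reusing the volumes of a separate "storage" container.
    docker run -d --name nzbget --volumes-from storage-container example/nzbget

    # What works now: bind-mounting the mergerfs pool path directly into the container.
    docker run -d --name nzbget -v /srv/mergerfs/pool/downloads:/downloads example/nzbget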


    This works:



    This doesn't: