Mounting existing EXT4 filesystem = unresponsive GUI, SSH fails, root errors

  • Good evening, I recently upgraded my server hardware and decided I would start with a fresh installation of OMV 6. I've overcome a few minor issues along the way, but I seem to be stuck at mounting the existing filesystems from my 10 data drives (all of which were created in previous versions of OMV).


    I installed OMV6 with only the system drive plugged in, and since then, have been able to mount the filesystem on my secondary SSD. All my other drives are HDDs, and immediately upon mounting any of their filesystems, the GUI turns unresponsive with a "502 - Bad Gateway" error, and I can no longer SSH into the machine. If I manually shut the server down, log back in to the GUI and revert the changes, then everything is fine.


    I'd appreciate any tips you could provide to get those filesystems working again!


    Edit: If I have already established an SSH connection before I mount the filesystem and accept the changes, it will stay alive, but trying to use sudo or switch to root gives me an error message that the "effective UID is not 0." New SSH connections initiated once the 502 errors start are always closed unexpectedly.

  • With nginx, I've seen this be nothing more than a cache problem with Chrome. Clear your browser cache or try a different browser. Or try incognito mode.


  • Thanks greno, I'll give that a shot in a couple of hours, but the fact that SSH connections are rejected would lead me to believe there's more going on. Also, some additional info:


    If I have already established an SSH connection before I mount the filesystem and accept the changes, it will stay alive, but trying to use sudo or switch to root gives me an error message that the "effective UID is not 0." New SSH connections initiated once the 502 errors start are always closed unexpectedly.

  • flvinny521

    Changed the thread title from "502 Gateway error upon mounting existing EXT4 filesystem" to "Mounting existing EXT4 filesystem = unresponsive GUI, SSH fails, root errors".
  • Can you post the /etc/fstab before mounting the drives and after?

    And can you try mounting the drives one by one, to see if a particular one causes the error?
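
    For example, something like this would capture the two snapshots for comparison (the file names are only placeholders):

    Code
    cp /etc/fstab /root/fstab.before
    # ... mount the filesystem in the GUI and apply the change ...
    cp /etc/fstab /root/fstab.after
    diff /root/fstab.before /root/fstab.after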


  • Zoki - Here is fstab prior to mounting a "troublesome" filesystem:



    After mounting one, fstab looks identical, so it appears that the new filesystem is never committed. Since I started running into trouble, I have only been connecting a single drive at a time. Some more information that may or may not mean anything:


    I'm using the Proxmox kernel. I have most of the drives (but not all) connected through a SAS expander, but all the drives have this issue, even the ones connected directly to a motherboard SATA port. greno, using an Incognito window or a different browser does not resolve the issue.

  • flvinny521 I did not understand what you said. Do you mean the drives have not been mounted, or that the fstab has not been written when the trouble occurred?


    Have you tried to mount one of the disks manually using the mount command? This may give more detailed error messages.
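
    Something along these lines, for example (the device name and mount point are only examples, adjust to your system):

    Code
    mkdir -p /srv/test
    mount /dev/sdX1 /srv/test
    dmesg | tail    # check for kernel messages right after the mount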


  • Zoki I just meant that after mounting the FS in the GUI and clicking to accept the changes, there is no update to fstab. Also, the new FS does not appear in /srv/.


    I have not tried to manually mount anything, I'll look that up and give it a try.


    Edit - could this have anything to do with the filesystem paths changing from when the OS was installed? In other words, OMV is installed on /dev/sdb*, and that probably keeps changing as I connect new drives to the motherboard.


    Final edit - Mounting the drive manually actually does work (the drive is immediately accessible through the /srv/sda1 mount point I selected), but the FS is not displayed anywhere in the OMV GUI. Also, after rebooting, the /srv/sda1 mount point is empty when viewed in the terminal. Mount shows that the mount point is no longer in use:


  • At least we know that the disks can be mounted on your system without any problem.

    Mounts are done by UUID, so nothing changes when you connect other disks, and a manual mount only lasts until reboot, so that is expected.
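
    For reference, blkid shows the UUIDs that end up in fstab (the device name below is a placeholder; the /srv path in the comment is how OMV typically names its mount points):

    Code
    blkid /dev/sdX1
    # /dev/sdX1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"
    grep -i uuid /etc/fstab    # OMV-created entries mount under /srv/dev-disk-by-uuid-<UUID>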


    Now we need to find out what is going on when you select a disk and hit Apply. I have to think a bit about how to do this.

    Maybe someone else has an idea.


  • Thanks for the heads up that mount is only temporary and does not persist through a reboot. I am open to suggestions on where to go from here.


    One of the drives is brand new with no data on it, so I tried a few things since there was no risk of data loss. First, the drive itself was visible in the drives section of the GUI, so I tried to create a new filesystem on it, but the drive didn't show up in the drop-down menu. I assume this is because the existing FS was being detected. I then wiped the drive and created a new filesystem directly in OMV6 (this was much faster than in OMV5 on a 14TB drive, by the way). This newly created filesystem could also NOT be mounted; I experienced the exact same issues as with all the others.

  • Can you try to run these commands to rule out some old config change:


    Code
    omv-salt stage run prepare
    omv-salt stage run deploy


  • See the output below (shortened for sanity). Afterwards, I rebooted and tried to mount again, same issue.

    Code
    omv-salt stage run prepare
    <snip>
    
    Summary for debian
    ------------
    Succeeded: 6 (changed=5)
    Failed:    0
    ------------
    Total states run:     6
    Total run time:  16.266 s


    Code
    omv-salt stage run deploy
    <snip>
    
    Summary for debian
    ------------
    Succeeded: 2 (changed=2)
    Failed:    0
    ------------
    Total states run:     2
    Total run time:  31.393 s
  • Sorry, no idea. Maybe votdev has an idea how to find out what is going on during apply when mounting existing filesystems.


  • Thanks votdev, I checked the log and these are all the entries from the time I rebooted until mounting the filesystem in the GUI and before I accepted the changes. Nothing here stands out to my eyes: https://pastebin.com/6cyfDV4k.


    After clicking the check box and confirming the changes (resulting in the errors described earlier), a great deal of the end of the log is gone completely. The timestamp of the latest entries is a full 5 hours earlier than in the previous log: https://pastebin.com/1KwKLXrq.

  • (Edit - See below, not fixed as I had hoped) Since I had some time to kill and nothing to lose, I did a fresh installation of OMV 6. I followed almost the exact same process, but this time, I was able to mount all my filesystems without issue. Either the whole thing was a fluke, or one of the following things caused the error (I didn't do any of these before mounting the filesystems, unlike the first time, when I experienced all the problems):


    1. Changing my kernel to Proxmox and removing the non-Proxmox kernel
    2. Installing omv-extras and the following plugins: flashmemory, mergerfs, resetperms, snapraid, and symlinks


    Edit - Well, now I am unable to access the GUI (Error 500 - Internal Server Error, Failed to Connect to Socket). This time I installed omv-extras and all the plugins listed above AFTER everything was mounted. I have no evidence to support this, but I feel like it may be flashmemory. I noticed that it was not running (red status on the dashboard), realized I had never rebooted after installing it, and rebooted to see if the service would run. Immediately, I was faced with this new issue.


    I found this thread which sounded similar, and tried the command that was suggested there:


    Code
    dpkg --configure -a
    dpkg: error: unable to access the dpkg database directory /var/lib/dpkg: Read-only file system


    And then, to test this, I did the following:

    Code
    mkdir test
    mkdir: cannot create directory ‘test’: Read-only file system


    So, somehow my root filesystem has been turned read-only. Thoughts?
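
    For reference, the current mount flags on the root filesystem can be checked directly (the exact option string will vary):

    Code
    findmnt -no OPTIONS /
    # ro,relatime,...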

  • Remount your OS disk rw and, if that works, apt-get purge openmediavault-flashmemory, then reboot.
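
    Roughly like this (a read-write remount of / is the key part; exact steps may differ on your system):

    Code
    mount -o remount,rw /
    apt-get purge openmediavault-flashmemory
    reboot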

    If this works, you found the culprit.


  • Well my disk was giving some errors regarding the superblock having an invalid journal and a corrupt partition table, so I used GParted to wipe the OS drive and install OMV6 once again. This time I did everything EXCEPT install the flashmemory plugin and have had no issues whatsoever. I think this is the likely culprit by process of elimination. Thanks for spending so much time working through this with me.


    ryecoaaron, any idea how flashmemory would render my root drive read-only?

  • flvinny521

    Added the "solved" label.
    • Official post

    any idea how flashmemory would render my root drive read-only?

    It definitely didn't. flashmemory only copies files between a tmpfs bind mount and a mounted filesystem. This happens *after* the filesystems are mounted. Now, if your system is low on memory and the copy from the mounted filesystem to the bind mount fills the tmpfs mount, it could cause problems. The sync folder2ram does between tmpfs and the mounted filesystem at shutdown is very important. So, bad things can happen if this never happens. But it is hard to say while your system is mounted read only. You are using mergerfs as well. If a filesystem wasn't ready when the pool was to be assembled, the system could possibly be mounted read only. kern.log or messages in /var/log/ might be good to look at.
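
    If it happens again, something like this should show whether the kernel remounted the root filesystem read-only and whether the tmpfs mounts were filling up (log paths are the usual defaults, not verified on this system):

    Code
    grep -iE 'remount.*read-only|ext4.*error' /var/log/kern.log
    df -h | grep -iE 'tmpfs|folder2ram'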


  • It definitely didn't. flashmemory only copies files between a tmpfs bind mount and a mounted filesystem. This happens *after* the filesystems are mounted. Now, if your system is low on memory and the copy from the mounted filesystem to the bind mount fills the tmpfs mount, it could cause problems. The sync folder2ram does between tmpfs and the mounted filesystem at shutdown is very important. So, bad things can happen if this never happens. But it is hard to say while your system is mounted read only. You are using mergerfs as well. If a filesystem wasn't ready when the pool was to be assembled, the system could possibly be mounted read only. kern.log or messages in /var/log/ might be good to look at.


    Thanks for chiming in. While I had the mergerfs plugin installed, I hadn't actually created a pool with it yet, since the filesystems that were going to be used in the pool couldn't be mounted without running into the issues discussed earlier in the thread.


    Ultimately, I was able to get my setup to work fine just by avoiding the flashmemory plugin (after 3 fresh installs using it that all failed), so I have to imagine it's somehow involved. As long as nobody else is having issues, maybe it was a fluke or an issue with my system drive, who knows...

    • Official post

    Ultimately, I was able to get my setup to work fine just by avoiding the flashmemory plugin (after 3 fresh installs using it that all failed), so I have to imagine it's somehow involved. As long as nobody else is having issues, maybe it was a fluke or an issue with my system drive, who knows...

    The flashmemory plugin is installed on every system that is installed with the install script, and I have it on every system of mine. The plugin uses folder2ram, which has changed very little since the plugin started using it (v3, I think). So, I have to believe it is something unique to your system. Did you ever look at:


    sudo journalctl -u folder2ram_startup.service

