BAD GATEWAY in WebUI and ZFS pool is blank

  • I recently added another object to my ZFS pool. Rather than the mountpoint being /TPool/newObject, I accidentally set the mountpoint as /. Once this was applied, I lost access to the WebUI, and I can't navigate the ZFS pool via SSH. How would I reverse this? Thanks.


    Edit: I actually can't SSH into the machine any more, either.

    • Official post

    I think you need to boot a live distro (SystemRescueCd) and chroot into the rootfs partition. From there, disable the zfs-import and zfs-mount systemd units, then reboot and see if the system comes back. The ZFS pools won't be mounted anymore. At the moment I don't know how to change the mountpoint of a ZFS pool; I'm pretty sure you can find info on Google, or another user can help you. Also, if the new pool doesn't contain any data, you can wipe the disks and recreate them with a correct mountpoint.
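
    A rough sketch of those steps from a SystemRescueCd shell, assuming the OMV root is on a partition like /dev/sdXn (placeholder) and that the systemd unit names match your ZFS version:

        mount /dev/sdXn /mnt                                    # the OMV root partition (placeholder name)
        for d in dev proc sys; do mount --bind /$d /mnt/$d; done
        chroot /mnt /bin/bash
        systemctl disable zfs-import-cache zfs-import-scan zfs-mount   # unit names vary between ZFS releases
        exit
        reboot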

  • Thanks @subzero79, I'd rather not erase the disks as they have a lot of data and my latest backup isn't so current. I'm assuming that when I have physical access to the terminal, I'll be able to change the mountpoint. I just don't quite know the command for that, yet. I was hoping for some feedback from this community in that regard.


    I wouldn't have any issue wiping out OMV if that would help me keep my ZFS pool's data intact, but I'd be worried that the same issue would reappear once I tried to re-import the pool after a fresh install. I'm almost positive there is a simple fix for the mountpoint, but I'm a bit wary of trying something that could leave the system or my data irrecoverable. Is there a ZFS specialist I need to get in contact with on here?


    Edit: For clarity, the main pool is mounted properly, as are several of its filesystems; the most recent one is the one I messed up.
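
    For reference, the usual way to move a ZFS dataset off / once you have a root shell is zfs set mountpoint. A minimal sketch, assuming the offending dataset is TPool/virus as described later in the thread:

        zfs get -r mountpoint TPool                   # find the dataset whose mountpoint is /
        zfs set mountpoint=/TPool/virus TPool/virus   # point it back under the pool
        zfs mount -a                                  # remount, then re-check with zfs get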

  • Thank you for your detailed response. I have an 8-disk pool that is a stripe of mirrors, the ZFS equivalent of RAID10.
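
    For readers unfamiliar with that layout: a stripe of mirrors is built by giving zpool create several mirror vdevs. A hypothetical example with made-up device names (by-id paths are preferable in practice):

        zpool create TPool mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh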


    Yes, you have assumed correctly. The main pool was called TPool, and I have several filesystem objects inside it named "nas", "movies", "tv", etc. I added another filesystem object aptly named "virus" for the purpose of storing quarantined objects via ClamAV.


    So the fields entered in the GUI were:
    Object type: Filesystem
    Prefix: I don't recall this one
    Name: virus
    Mountpoint: /
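
    Assuming the plugin translates those fields into a plain zfs create (a reconstruction, not taken from the plugin's code), the equivalent command would have been roughly:

        zfs create -o mountpoint=/ TPool/virus    # tries to mount the new dataset over /, shadowing the root filesystem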


    Please let me know what other information I can provide. I have accessed the physical machine after a reboot via the recovery option in grub and have root access.


    Edit: I've got an external hard drive and am performing an rsync backup at the moment in case I need to obliterate everything. I was able to start the ZFS service and mount the external hard drive.
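
    A minimal sketch of that kind of backup, assuming the pool's data is visible under /TPool and the external drive is mounted at /mnt/usbbackup (both paths are placeholders):

        zfs mount -a                                    # make sure the pool's datasets are mounted
        rsync -aHAX --progress /TPool/ /mnt/usbbackup/  # archive mode, preserving hard links, ACLs and xattrs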

  • So, I'm not entirely sure that the problem was with the mountpoint in the ZFS plugin. Once I had all my data backed up as mentioned in the previous post, I ran zfs destroy TPool/virus and rebooted. Upon reboot, I was asked to apply the configuration changes in the WebUI. I did so and was greeted with the Bad Gateway error again. I rebooted once more (this time via the hardware reset button) and then chose to "Revert" the changes rather than apply them. Now all seems to be working fine. I'm going to mark this as resolved.
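
    For completeness, the command from that post plus an assumed sanity check afterwards:

        zfs destroy TPool/virus    # removes the misconfigured dataset (and its data)
        zfs list -r TPool          # confirm it is gone and the remaining mountpoints look right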

  • It looks like you were able to replicate my experience exactly. Something that I forgot to mention is that I initially received that error when I left the mountpoint blank. I assumed the / mountpoint was relative to the pool and would place the virus filesystem at the root of the pool rather than in the system's root folder. It had been a while since I had created a ZFS object. Thank you for being so diligent in this.
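
    For anyone creating a dataset like this in future, leaving the mountpoint unset so it inherits from the parent gives the behaviour described above; a short sketch using this thread's names:

        zfs create TPool/virus            # inherits the parent's mountpoint, ending up at /TPool/virus
        zfs get mountpoint TPool/virus    # should report /TPool/virus, not /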
