Posts by Gwindalmir

    Using the proxmox kernel has nothing to do with running VMs. The original reason for adding it was that it was a stable version (backports is usually a moving target) and the ZFS module was built in, eliminating the problematic compiling. As for being less battle tested, I disagree. The Proxmox kernel is a minimally modified Ubuntu kernel (very stable on the hundreds of Ubuntu systems I maintain in the enterprise). And since Proxmox uses a Debian userland, running the Proxmox kernel on OMV is a perfect fit. I have been doing it for years on many systems, including production systems. The Proxmox kernel also offers better hardware support than the standard Debian kernel. You can use whatever kernel you like, but that is your preference. I don't think I have ever seen the Proxmox kernel cause a problem.

    I'm glad to hear you haven't had any issues, I'm just quoting what proxmox staff themselves have stated.

    As for running VMs, the research I've read indicates the Proxmox kernel is optimized for VMs. In fact, that seems to be Proxmox's "thing".

    Their own github even indicates various settings configured to help with VMs:

    I ran Gentoo for over a decade, and I've been jaded by system breakages, so I want to diverge as little from the baseline system as possible. That means the default Debian kernel, minimal community-maintained packages, that sort of thing.

    I don't care about "bleeding" edge, as far as it goes here. I just want it to work.

    Anyway, I appreciate your feedback on it. :)

    I just upgraded to OMV6, and everything is working, except I can't access the Web-UI under /omv/.

    I get a blank blue screen, and multiple 404 errors in the developer console as it tries to load the JS scripts.

    I've seen many threads on this topic, but none of them are relevant.

    It's running on port 81, and I can access everything else (Portainer and other Docker services) just fine under their respective subfolders.

    I use the built-in nginx, not a docker container, and have a custom proxy conf under sites-enabled.

    After searching for a bit, the issue turns out to be that the index.html page has <base href="/"> in its header section. This is incompatible with subfolder proxying.

    You can see that line here:…rkbench/src/index.html#L6

    We need a configuration option to set the subfolder path, like other applications have.


    I was able to work around this by using the following in my proxy config:

        location / {
            root   /var/www/html;
            index  index.html;
            # Lines below were added
            if ($http_referer ~* (/omv/) ) {
                rewrite ^/(.*)$ /omv/$1 redirect;
            }
            rewrite ^/assets/(.*)$ /omv/assets/$1 redirect;
        }

    The downside is that this can conflict with the root path (particularly the last line, with assets). It doesn't for me, but it's a very hacky solution.
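    An alternative worth trying (untested on my end) is rewriting the offending <base href> tag itself with nginx's ngx_http_sub_module, instead of redirecting based on the referer. This only works if your nginx build includes the sub module, and the upstream address/port here is an assumption based on my setup (OMV on port 81 of the same host):

```nginx
location /omv/ {
    # Strip the /omv/ prefix when passing to the OMV web UI
    proxy_pass http://127.0.0.1:81/;
    # sub_filter cannot rewrite compressed responses, so ask the
    # upstream not to gzip
    proxy_set_header Accept-Encoding "";
    # Rewrite the base tag so assets resolve under the subfolder
    sub_filter '<base href="/">' '<base href="/omv/">';
    sub_filter_once on;
}
```

    This avoids the referer-based rewrites entirely, so nothing leaks into the root path, but it is still a workaround for the missing base-path configuration option.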

    So, I fixed this by doing the following (amd64):

    1. Remove everything zfs related, and install the stable version of the kernel

      apt-get remove libzfs4linux libzpool4linux openmediavault-zfs zfs-auto-snapshot zfs-dkms zfs-zed zfsutils-linux
      apt-get install linux-image-amd64/stable linux-headers-amd64/stable
      apt autoremove

    2. Remove all kernels that aren't in the stable package (5.10)
      dpkg -l | grep linux-image
      apt remove --purge linux-image-5.18.0-0.bpo.1-amd64 # repeat for each image listed above
    3. With the OMV Kernel plugin, make sure the 5.10 kernel is selected as the default ("Debian GNU/Linux, with Linux 5.10.0-16-amd64" as of this writing).
    4. Reinstall everything (plugin pulls in all deps):
      apt-get install openmediavault-zfs
    5. Once the system came back online, I did a ZFS Import in the web-UI (Import all).
    6. Then rebooted again to make sure everything started correctly.

    No problems after that.

    Why did I do this?

    I didn't use the proxmox kernel before, and don't want to use it now. I don't have VMs on my NAS, and don't want a kernel that's "less battle tested".

    Forgive me if this is common knowledge, but I couldn't find any forum references to rsync parameters, and was going to report a bug, but I found a workaround for my issue.

    You can specify environment variables (such as %RSYNC_USER_NAME%) by adding the appropriate option under extra options, even if the same setting is offered in the web UI.

    The UI doesn't let you type %RSYNC_USER_NAME% in the User field, nor * in the Group field.

    However, I was able to work around that by adding uid and gid directly to extra options, which places them at the end of the module config.

    This works as expected, and files are stored on the server as the authenticating user.

    Perfect for home directories!
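    For anyone replicating this, a rough sketch of what the resulting module config might look like. The module name, path, and user names are made up, but the uid/gid lines mirror what I put in extra options (the %RSYNC_USER_NAME% substitution is documented in rsyncd.conf(5)):

```
[home]
    # Hypothetical module - adjust name, path, and users to your setup
    path = /srv/homes/%RSYNC_USER_NAME%
    auth users = user1, user2
    secrets file = /var/lib/openmediavault/rsyncd-home.secrets
    read only = false
    # Added via the OMV "extra options" field, since the UI
    # rejects these values in the User/Group fields:
    uid = %RSYNC_USER_NAME%
    gid = *
```

    With this in place, each authenticated user's transfers run as their own uid, so files land on the server owned by the right account.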

    You can find the available variables on the rsync documentation (which happens to be linked to from the OMV UI):

    I know this is a bug, but when I went to open an issue on GitHub, it said to "wait for moderators on the forum", so here we are.

    If you add multiple users to an rsync module in the OMV UI, the rsync secrets file it generates is invalid.

    This causes a "password mismatch" when you try to log in from a client.

    I added one account and it worked fine. Then I went to add a second account and kept getting "password mismatch", even though I knew the password was correct, and I changed it half a dozen times to be sure.

    It places all secrets on one line, like this (/var/lib/openmediavault/rsyncd-home.secrets):


    (Bold for emphasis)

    Each user should be on their own line.
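    To illustrate with made-up credentials (the real file obviously contains different names and passwords), the generated file looks like the first example, when it should look like the second:

```
# Broken: OMV writes both entries on a single line
user1:secret1user2:secret2

# Correct: one user:password pair per line
user1:secret1
user2:secret2
```

    If you edit the file by hand, keep the permissions OMV set on it: rsync's "strict modes" setting (on by default) rejects a secrets file that is readable by other users.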

    Once I manually edited the secrets file (despite the warning), it worked.

    openmediavault 5.6.22-1 (Usul)

    EDIT: I just realized the login information in the file is sorted alphabetically by user name.

    The first user I added was "user2", then later I added "user1", and OMV sorted it first.

    It's possible that sorting is the source of the corruption.

    Nevermind, not that.