Posts by Antioch

    Hi, all. I'm a new OMV user running OMV4, which I installed using the official instructions on top of a clean Debian 9 netinstall. OMV appears to be working fine, but I'm having issues getting any Docker containers to run.


    I've installed the OMV-Extras repository and from there installed and enabled the Docker plugin. Just to test, I installed the linuxserver/nzbget container, but when I try to run it with default container settings (and host networking) I see the following output in the logs:
    [ERROR] Binding socket failed for 0.0.0.0: ErrNo 13, Permission denied
    Note: when I run this same container via the Docker plugin on a standard OMV4 install I've set up in a VM, it works fine with the default settings.


    On my server, if I change the container to "run in privileged mode" the error message goes away and I can connect to nzbget. I'd rather not have to run every container in privileged mode, so I'd like to know which settings I need to change to be able to run containers normally.
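
    In case it points anyone in the right direction: if the bind failure turns out to be capability-related, a less drastic middle ground than privileged mode would be granting just the one capability needed to bind low ports. A rough sketch from the CLI, assuming the plugin's defaults map to a docker run roughly like this (the capability only matters for ports below 1024; if the error persists, the cause is likely elsewhere, e.g. kernel hardening or an LSM on the host):

    # reproduce the plugin's defaults, adding only the bind capability
    # instead of full --privileged (a sketch, not a confirmed fix)
    docker run --rm --net=host --cap-add=NET_BIND_SERVICE linuxserver/nzbget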


    Thank you for your help!

    I don't consider this a fair comparison. I think OMV is more stable on hardware than in a Hyper-V VM, especially if that hardware is stable running FreeNAS. It could be that 4.19 isn't quite as stable on Hyper-V yet, since it was only recently released.

    You couldn't do anything that would realistically be considered "stability" testing in a Windows client VM. For a fair comparison, OMV would need to be installed on actual hardware or run on a purpose-built hypervisor like Proxmox or ESXi.

    Oh, I completely agree. I wasn't attempting to do a stability test/comparison at all; I was only testing it from a feature perspective. I simply had never encountered an error like that before and was surprised. Thank you both for your feedback.



    Which brings me to why I run the Proxmox kernel on some systems. First, Proxmox uses the Debian 9 userland with the Ubuntu 18.04 LTS kernel, and this combination is well tested by them. Second, this is exactly what OMV becomes when I run the Proxmox kernel on an OMV system. Third, I find this kernel ultra stable on Ubuntu 18.04 and Debian. Finally, it includes the ZFS modules (no compiling), which makes kernel upgrades much faster if you use ZFS.
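
    For anyone who would rather do it from the command line than use the OMV-Extras button, a rough sketch of pulling in the Proxmox kernel on a Debian 9 / OMV4 box (the repo line, key file, and metapackage name are from memory and worth double-checking against the Proxmox wiki):

    # add the Proxmox no-subscription repo and its signing key (names assumed)
    echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve.list
    wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
    apt-get update
    # metapackage tracking the current 4.15 (Ubuntu 18.04 based) PVE kernel
    apt-get install pve-kernel-4.15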

    This may be something I ought to consider as well, specifically because I do plan on using ZFS. Since I have your attention, I'd like to ask two related questions:

    • How does the ZFS plugin get along with the Proxmox kernel? Is the plugin aware of the Proxmox kernel, so that it doesn't force unnecessary module recompiles for the stock kernel when upgrading?
    • Out of curiosity, do you know of any (simple) way to get OMV installed onto a ZFS-formatted partition rather than ext4?

    I've installed all updates for OMV as far as I know (apt-get says there is nothing new). This is my kernel; I'm not sure if it's the latest.


    Linux openmediavault 4.19.0-0.bpo.1-amd64 #1 SMP Debian 4.19.12-1~bpo9+1 (2018-12-30) x86_64 GNU/Linux
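
    In case it helps anyone checking the same thing, this is roughly how I'd confirm whether a newer kernel is available from stretch-backports (the metapackage name is an assumption on my part):

    apt-get update
    # show which versions of the kernel metapackage apt can see
    apt-cache policy linux-image-amd64
    # pull in the newest backports kernel if one is listed
    apt-get -t stretch-backports install linux-image-amd64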


    No, FreeNAS is running on actual hardware. If OMV tests out, I will move it over to that hardware, but given that this issue occurred within an hour of testing in a VM, I'm a bit hesitant to move to bare metal without understanding the issue.

    I'm considering migrating to OMV from FreeNAS (long-time user) and decided to give OMV a test to see how it works and what it can do, so I installed it in a Hyper-V VM on Windows. I have installed a few plugins (docker, sftp, zfs, letsencrypt) and it seems to work well enough, except for one critical error, shown below.


    https://i.postimg.cc/dVgR7WFP/OMV-Page-Bug.png


    The foreground window is the output shown on the VM's TTY at the login screen; the SSH window in the background is logged in as root.


    I have no idea what is causing this, but it spits this information out to the console in the middle of whatever I'm doing. I have seen it occur multiple times.
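
    If a screenshot isn't ideal, I can pull the trace out of the kernel ring buffer or the journal after it hits the console; something like this should capture it:

    # most recent kernel messages, including the trace if it just fired
    dmesg | tail -n 60
    # or, with systemd, kernel messages from the current boot
    journalctl -k -b --no-pager | tail -n 60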


    I would like to figure out what the heck is going on, because this doesn't look stable enough for me to replace FreeNAS, which has been rock solid.


    Any ideas?


    Thanks,