Posts by jkaberg

    There is no such support in the plugin today. Personally I've made a similar setup to what you want to achieve, manually, using the macvlan network driver. For those containers I'm not using the plugin... Additionally there's a Docker OpenVPN image I'm using that relies on iptables to make sure all traffic from specific containers is forced through the VPN tunnel. Not sure if my setup really matches your requirements, but let me know if you want more details.

    By the way, adding support for user-specific networking is probably quite complex and not something I will be able to implement, unfortunately. It's at the top of my own wishlist though :-)

    Just to clarify, I'm not using macvlan with vlan trunking, but rather to give the containers their own layer 2 access and their own IP addresses on the same network as my Docker host. I have multiple vlans in my network but haven't needed to isolate the containers at that level yet.

    Yes, I actually got around to doing this manually myself. A short hint for people who need this:

    Create the network (do not create the .30 interface yourself! Docker does this for you):

    docker network create -d macvlan --subnet= --gateway -o parent=eth0.30 vpnvlan

    Then just fire up containers as you need them. Every container needs its own IP, and there's no need to expose ports, since all of the container's ports are reachable directly (publishing ports is also not supported by macvlan).

    docker run --name mydocker --net=vpnvlan --ip= .......

    Obviously this requires vlan 30 to be trunked on eth0...
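    Putting the above together, the full sequence looks roughly like this. The subnet, gateway and container IP are made-up examples here — pick values matching your own vlan 30 network, and the alpine image is just a placeholder:

    # create the macvlan network on the vlan 30 sub-interface
    # (docker creates eth0.30 itself -- do not create it beforehand)
    docker network create -d macvlan \
      --subnet=192.168.30.0/24 \
      --gateway=192.168.30.1 \
      -o parent=eth0.30 vpnvlan

    # run a container with its own static IP on that network;
    # no -p/--publish needed (and not supported on macvlan anyway)
    docker run -d --name mydocker --net=vpnvlan --ip=192.168.30.10 alpine sleep infinity

    One gotcha worth knowing: with macvlan the host itself can't reach the containers' IPs over eth0 by default, only other machines on the network can.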

    Why are you setting this in options? This seems to be the problem.

    Ah yes, that was the problem indeed.

    I used to use this because it helps format the df output:

    without fsname=XXX

    with fsname
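    For reference, this is roughly what the fstab entry looks like with the option set. The branch paths and mountpoint below are made-up examples, and "mergerfs" is just an illustrative fsname:

    ```
    # example fstab line; /mnt/disk* and /media/pool are placeholder paths
    /mnt/disk* /media/pool fuse.mergerfs defaults,allow_other,fsname=mergerfs 0 0
    ```

    With fsname set, df shows that name in the Filesystem column instead of the full colon-separated branch list.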

    Happy Christmas everyone.

    Yesterday I got around to reinstalling OMV 3.0.57 and setting up mergerfs with a bunch of drives, but whenever I navigate to a part of the web UI that maps the mergerfs mountpoint (e.g. the NFS, Samba or Docker plugins) I get the following error

    My fstab looks like this

    Anyone with any ideas as to why this is happening? Or is it a bug?

    Ok, so I think I understood you correctly. Steps to fix my issue:

    • Remove the omv letsencrypt plugin
    • Delete /etc/letsencrypt and /opt/letsencrypt
    • Install the omv letsencrypt plugin again
    • Regenerate certs with the plugin (do not enable the test option!)
    • (Optional) change the certs for the webUI, and for the nginx websites plugin if you have it enabled
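    For anyone on the CLI, steps 1-3 look roughly like this. The package name openmediavault-letsencrypt is an assumption on my part — check with dpkg -l | grep letsencrypt before running anything:

    apt-get remove openmediavault-letsencrypt   # 1. remove the plugin
    rm -rf /etc/letsencrypt /opt/letsencrypt    # 2. delete leftover state
    apt-get install openmediavault-letsencrypt  # 3. reinstall the plugin
    # steps 4 and 5 (regenerating and reassigning certs) are done in the webUI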

    This fixed it for me, at least.

    Hey jkaberg,

    please remove all references to the certificate in your system, then generate a new certificate.

    Please elaborate, what do you mean? Remove the "old" certs?

    Well, I'm getting this error now;

    Running from CLI:

    My settings:

    I'm trying to reverse proxy the web UI of OMV but can't seem to get it to work properly.

    For now I have

    location /openmediavault/ {
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    And this gets me a blue screen with no login. So this tells me I need to rewrite URLs in the pages, since when I look at the HTML source some dependencies are still hardcoded to the root URL / (/extjs, /images and some more).

    Anyone got around this?
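    For what it's worth, the usual way around hardcoded root-relative paths is to skip the sub-path entirely and give the web UI its own server block on a subdomain. A rough sketch — the server_name is a made-up example, and it assumes the OMV web UI is listening locally on port 80:

    ```
    server {
        listen 443 ssl;
        server_name omv.example.com;   # example subdomain, not a real one

        location / {
            proxy_pass http://127.0.0.1:80;   # assumption: webUI on local port 80
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    Since everything is served from /, the hardcoded /extjs and /images references resolve correctly without any URL rewriting.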

    Apparently one could, when doing the acme-challenge, set tls to true, and that would make the Let's Encrypt server perform the challenge over HTTPS


    "type": "simpleHttp",
    "tls": false
    /* Signed as JWS */

    So if one could do that (with the plugin), you could then, via the nginx websites plugin, set up a default landing page with HTTPS (and change the letsencrypt webroot to wherever the landing page's root is).

    How about that? No need for SNI proxy.

    PS: This is a feature request (acme-challenge over HTTPS instead of HTTP)

    Don't know which update gave me this, but

    I'd like to move the webroot (/) to something like /omv or /openmediavault and use / for other stuff. As I'm using the letsencrypt plugin, I'm forced to have /.well-known/acme-challenge/* available for when the SSL cert is renewed.

    What I'm trying to say is: how do I move the webroot without messing up future upgrades? I'm used to nginx and have set it up several times before, but I'm not too familiar with the inner workings of OMV.

    I noticed there are two nginx plugins available; might I use these for this purpose, and if so, how?

    What version of mergerfs?


    joel@dunder:~$ mergerfs -v
    mergerfs version: 2.9.1
    FUSE library version: 2.9.5
    fusermount version: 2.9.5
    using FUSE kernel interface version 7.19

    (fuse and libfuse backported from sid because of a bug in libfuse 2.9.0 which made it segfault)

    Here's one from me (without direct_io)


    joel@dunder:~$ dd if=/dev/zero of=/media/xfiles/lol bs=10G count=5 conv=fdatasync
    dd: warning: partial read (2147479552 bytes); suggest iflag=fullblock
    0+5 records in
    0+5 records out
    10737397760 bytes (11 GB) copied, 80.7537 s, 133 MB/s
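    About that partial-read warning: bs=10G exceeds the kernel's per-read limit (~2 GiB), so each read() comes back short and dd counts it as a whole "block", which is why it reports 0+5 records. Adding iflag=fullblock makes dd accumulate full blocks before counting. A quick illustration with small, made-up sizes and a temp path:

    ```shell
    # fullblock: retry short reads until each block is actually full
    dd if=/dev/zero of=/tmp/dd_fullblock_demo bs=1M count=5 iflag=fullblock conv=fdatasync
    stat -c '%s' /tmp/dd_fullblock_demo   # prints 5242880 (5 x 1 MiB, no short blocks)
    ```

    For a throughput benchmark the short reads don't change the total bytes much, but fullblock makes count behave the way you'd expect.
    
    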

    When enabled I get the following error from daily cronjob,

    /etc/cron.daily/logrotate:
    error: error running non-shared postrotate script for /var/log/fail2ban.log of '/var/log/fail2ban.log '
    run-parts: /etc/cron.daily/logrotate exited with return code 1

    The fail2ban plugin is enabled but the service is obviously not running. Does the plugin actually start the service? (I solved this by starting the service manually.)

    I keep getting this (notified by mail)

    /etc/cron.daily/logrotate:
    error: skipping "/var/log/syncthing/syncthing_***.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.

    Doesn't the plugin set up the correct permissions? I solved this manually, but thought I'd let you know.
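    The manual fix is just tightening the log directory's permissions (the alternative the message mentions is a "su" directive in the logrotate config). Demonstrated below on a scratch directory — on the real system apply the same chmod, plus chown root:root if needed, to /var/log/syncthing:

    ```shell
    LOGDIR=/tmp/syncthing_logdir_demo
    mkdir -p "$LOGDIR"
    chmod 777 "$LOGDIR"       # insecure: world-writable, triggers the logrotate warning
    chmod 755 "$LOGDIR"       # secure: only the owner may write
    stat -c '%a' "$LOGDIR"    # prints 755
    ```
    
    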

    Nope. Only filesystems. Do you have an n/a filesystem?

    Yeah, I did (one drive which was/is damaged was listed as N/A and still in the array)

    I fixed the issue by going to File Systems and removing the N/A device. Then in the terminal I did

    mdadm --stop /dev/md127
    mdadm --remove /dev/md127
    mdadm --zero-superblock /dev/sd*

    Now it works as expected :-)