Posts by justtim

    Sorry. I meant label, not ID

    On OMV (currently) drives are mounted as "dev-disk-by-label-xxx".

    Previously they were mounted as "dev-disk-by-UUID-xxxxx"

    Ok, that is indeed the case: the broken disk is still visible as /dev/disk/by-label/volume2.

    So how would I go about setting the same label for the new disk? That cannot be done while the old disk is still present, and I cannot remove the old disk because the shares (and probably other things) still have references to it.

    I will try adding the old UUID to the new disk first, to see if that does the trick. If it doesn't, it looks like the only way is to go with @Adoby's suggestion: clean up all references, remove the old disk and add the new disk.

    If the disk is mounted "by-id", shouldn't it be enough to just adjust the id? I would try it to see if it works.

    Where would I adjust the ID? Can you elaborate on this?

    It also seems possible to change the UUID if needed: …a-disk-to-whatever-i-want

    This is an interesting option. I guess I could give this a shot as soon as my new disk arrives, since I have nothing to lose (I'll make a backup of my SD boot card first) :)
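    For reference, on an ext filesystem both the label and the UUID can be changed from the command line with e2label and tune2fs. A sketch, with placeholder values: /dev/sda1 stands in for the new disk's partition, and the UUID shown is an example (you would copy the old disk's actual UUID):

```shell
# /dev/sda1 is a placeholder; replace it with the new disk's partition.
sudo umount /dev/sda1   # the filesystem must be unmounted for these changes

# Give the new filesystem the same label the old disk had:
sudo e2label /dev/sda1 volume2

# Optionally clone the old UUID as well (example UUID; use the old disk's value):
sudo tune2fs -U 0a1b2c3d-0000-0000-0000-000000000000 /dev/sda1

# Verify label and UUID:
sudo blkid /dev/sda1
```

    Note that tune2fs -U only applies to ext2/3/4 filesystems; other filesystem types have their own tools.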

    Hi, I have an SBC (Odroid HC2), with just one 3TB data disk and an SD card containing the boot partition.
    I make regular backups of said data disk with Duplicati.
    Last weekend, the data disk crashed, and I am planning to replace it with a new one (probably SSD while I’m at it).
    What would be the best way to go about this? Can I simply replace the now broken disk (shows “missing” in OMV) with the new one, give it the same volume name, and restore my folders? It looks like I cannot delete the “missing” volume from the GUI...

    I'm starting to think the start options and environment variables are passed in the Docker run command when the container is started and are not actually stored anywhere. Could that be true?
    Still doesn't explain how Watchtower knows about all of them, though...
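    They are in fact persisted: Docker writes each container's full configuration (command, environment variables, port bindings, restart policy, and so on) to JSON files under /var/lib/docker/containers/&lt;id&gt;/ and exposes the same data through its API, which is what Watchtower reads before re-creating a container. You can look at it yourself with docker inspect ("myapp" is a placeholder container name):

```shell
# "myapp" is a placeholder; substitute your container's name or ID.
docker inspect --format '{{json .Config.Env}}' myapp            # environment variables
docker inspect --format '{{json .HostConfig.PortBindings}}' myapp  # -p port mappings
docker inspect --format '{{json .HostConfig.RestartPolicy}}' myapp # --restart setting
```

    Plain `docker inspect myapp` dumps the whole JSON document if you want to browse everything at once.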

    Hi, I am kind of new to Docker but have quite a lot of containers running nonetheless and love it!

    There is something I can't get my head around though, and I hope someone can explain this.
    When I run a container from an image, and set a lot of start options/environment variables, these are visible in the details of the running container. My question is, where are these saved? Are they saved to a Dockerfile somewhere?
    When Watchtower starts to update images and re-create containers, it knows what run options and environment variables to set, so it must get them from somewhere, right?

    This worked for me too!

    I noticed /opt/EasyRSA-3.0.3/openssl-1.0.cnf holds the following variables:

    default_days     = $ENV::EASYRSA_CERT_EXPIRE  # how long to certify for
    default_crl_days = $ENV::EASYRSA_CRL_DAYS     # how long before next CRL

    Does anyone know where to access/change these variables?
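    In EasyRSA 3.x these are ordinary environment variables with built-in defaults; they can be overridden in the "vars" file that sits next to the easyrsa script, or exported for a single run. A sketch, assuming the install path from the post (the numeric values are examples, not recommendations):

```shell
# Append overrides to the EasyRSA "vars" file (path from the post above;
# adjust if your install differs). The values are example numbers of days.
cat <<'EOF' | sudo tee -a /opt/EasyRSA-3.0.3/vars
set_var EASYRSA_CERT_EXPIRE 3650
set_var EASYRSA_CRL_DAYS    180
EOF

# Or override just for one invocation:
EASYRSA_CERT_EXPIRE=3650 ./easyrsa build-server-full myserver nopass
```

    The overrides only affect certificates issued after the change; existing certificates keep their original expiry.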


    After having some trouble removing shared folders and filesystems for a crashed external backup disk, I was able to clean everything up (partly manually from config.xml).
    The only issue I have left is monit complaining when booting the system:

    Any idea where to clean up the remaining mountpoint apparently still listed somewhere?
    There are no more references in config.xml or in fstab.
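    One more place worth checking: OMV generates monit's filesystem checks into files under /etc/monit/, derived from config.xml, so a stale entry can survive there until that config is regenerated. A sketch for OMV 4.x (the by-label string is a placeholder; use the mountpoint from your boot message):

```shell
# Search monit's generated configuration for the stale mountpoint:
grep -r 'dev-disk-by-label' /etc/monit/

# On OMV 4.x, regenerate monit's configuration from config.xml and restart it:
sudo omv-mkconf monit
sudo systemctl restart monit
```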

    I am using the OpenVPN plugin with PAM authentication enabled. By default, every user in passwd is able to authenticate using PAM authentication.
    Is it possible to limit this right to certain users only to minimise the attack surface?
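    One common way to do this in PAM itself is pam_listfile, which restricts a service to users named in a file. A sketch, assuming the plugin authenticates through a PAM service file such as /etc/pam.d/openvpn (check which service name the plugin actually references; the path of the allow list is also just an example):

```
# /etc/pam.d/openvpn -- hypothetical service file; use the one the plugin references.
# Deny anyone not listed in /etc/openvpn/allowed_users (one username per line):
auth    required    pam_listfile.so item=user sense=allow file=/etc/openvpn/allowed_users onerr=fail
auth    required    pam_unix.so
account required    pam_unix.so
```

    With sense=allow, only listed users pass; onerr=fail makes a missing or unreadable list file deny everyone rather than fall open.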

    You are correct about 18.06. I forgot I reverted my RPi to try and figure this out, and mixed up the version number even though I had just locked the plugin to use 18.09. My bad. But my point still stands: 18.09 is working on my RPi.

    Alright. I decided to make a full backup and take my chances. Guess what: everything is fine and Docker is running stable on 18.09 :thumbsup:.
    Out of curiosity, what CPU model do you have in your RPi? Mine is an ARMv7 Processor rev 5 (v7l). Looking at the GitHub issue, the problem seems to be limited to the ARMv6 CPU models in older RPis.
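    For anyone else comparing models: the core the kernel reports can be checked from the shell. On an original Pi/Pi Zero `uname -m` prints armv6l, on a Pi 2/3 it prints armv7l:

```shell
# Architecture as the kernel reports it (armv6l vs armv7l on older/newer Pis):
uname -m

# Full CPU model string, as quoted in the post above:
grep -m1 -E 'model name|Processor' /proc/cpuinfo
```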

    Nov 27 03:25:24 OMV-INTEL dockerd[1059]: time="2018-11-27T03:25:24.372150197-05:00" level=warning msg="Your kernel does not support cgroup rt period"
    Nov 27 03:25:24 OMV-INTEL dockerd[1059]: time="2018-11-27T03:25:24.372172412-05:00" level=warning msg="Your kernel does not support cgroup rt runtime"

    I get these two as well. Containers are fine. I don't know if it has always been like that, or only since the latest update.