Posts by moreje

    Hello,

    I need to fix a network issue between my services:

    here is my configuration:

    - Nginx Proxy Manager running in Docker (network: bridge), internal IP: 172.20.0.8

    - KVM virtual machine running a Home Assistant instance, IP: 192.168.1.172

    - OMV host has IP 192.168.1.50


    I can't access my HA instance from outside my LAN.

    After many tests... I noticed that I cannot ping the HA virtual machine from the Nginx container.

    All the other machines on my LAN are seen by the Nginx container, though.

    So I suppose there is a connectivity issue between the container network and the KVM network, but I'm not an expert and can't figure it out...
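
    In case it helps, this is roughly how I tested it (the container name npm and the network name npm_default are just examples; ping may need to be installed in the container image):

    Code
    # from inside the Nginx Proxy Manager container
    docker exec -it npm ping -c 3 192.168.1.172   # the HA VM: fails
    docker exec -it npm ping -c 3 192.168.1.50    # the OMV host: works
    # details of the docker bridge network the container sits on
    docker network inspect npm_default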

    Can anyone help me? Ask me for any info you need...

    Thank you

    JR

    KVM works differently than VirtualBox, so it needs to be set up the way KVM expects. What specifically is confusing?

    I decided to use the KVM plugin to set up my Home Assistant VM.

    The VM is running, but I'm missing some information...

    for example:

    is there a way to retrieve the IP of the VM from the plugin? I can't find it.

    In fact, I can't find any information about the network side of the VM (MAC address, etc.).
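
    For what it's worth, since the plugin is built on libvirt, virsh on the host shell can usually report this (homeassistant below is just an example domain name):

    Code
    # interfaces and MAC addresses of the VM
    virsh domiflist homeassistant
    # IP addresses known to libvirt (from DHCP leases, or the guest agent)
    virsh domifaddr homeassistant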

    Can you help me?

    Hello,

    I'm migrating my RAID6 array (6x4 TB) to RAID5 in order to regain the space of one HDD.

    I used the mdadm --grow command

    but the reshape step is very slow:

    Code
    md126 : active raid6 sdb[7] sdd[4] sdc[6] sdf[9] sdg[8] sde[10]
    15627554816 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
    [===>.................]  reshape = 17.2% (672139880/3906888704) finish=9148.5min speed=5892K/sec
    bitmap: 0/30 pages [0KB], 65536KB chunk
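
    For context, the level change was done with something like this (a sketch, not my exact command; the backup file location is an example):

    Code
    # convert the RAID6 array to RAID5; mdadm restripes all data,
    # which is the long-running reshape step shown above
    mdadm --grow /dev/md126 --level=raid5 --backup-file=/root/md126-reshape.backup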

    I know it could be much faster.

    I've tried many of the tips found on the internet (stripe cache, sync speed min and max), but no change :(
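
    For completeness, the usual forms of those tips (the values are just examples):

    Code
    # per-array stripe cache, in pages; larger values usually help reshape speed
    echo 8192 > /sys/block/md126/md/stripe_cache_size
    # global rebuild/reshape speed limits, in KB/s
    sysctl -w dev.raid.speed_limit_min=100000
    sysctl -w dev.raid.speed_limit_max=500000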

    do you have any idea what's happening and what I could do?

    My setup is an AMD Ryzen 5 5600G with 16 GB RAM, running the latest OMV 6.7.1-2.

    thank you

    Well, I found the solution.

    The nvidia runtime no longer needs to be specified in daemon.json.

    now, it has to be registered here:


    /etc/systemd/system/docker.service.d/override.conf


    with these settings:

    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd --host=fd:// --add-runtime=nvidia=/usr/bin/nvidia-container-runtime


    then:


    systemctl daemon-reload

    systemctl restart docker
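
    To confirm the runtime is picked up after the restart, something like:

    Code
    docker info | grep -i runtime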

    Hi,

    After the last OMV update, my containers lost their nvidia runtime access.

    I've noticed that daemon.json has been reset to its defaults.

    But when I edit daemon.json with the nvidia runtime specs, docker won't start.

    Do you have any suggestions on how to fix that?

    Code
    {
        "runtimes": {
            "nvidia": {
                "path": "/usr/bin/nvidia-container-runtime",
                "runtimeArgs": []
            }
        },
        "default-runtime": "nvidia",
        "data-root": "/srv/dev-disk-by-uuid-e5954363-9d99-4c6f-9dd6-7c2ca9fc4d9e/Docker-Core"
    }
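
    Side note: daemon.json has to be strictly valid JSON (a single missing comma is enough to keep docker from starting), so it is worth validating it before restarting, for example:

    Code
    python3 -m json.tool /etc/docker/daemon.json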

    Hello,

    I'm still having this issue...

    I've just edited my /etc/hosts as suggested, so now I have to wait for the next OMV update...

    but my question is: /etc/hosts begins with:


    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.


    so is this suggestion really useful in our case?

    Hi,

    I think I know where the issue comes from...

    My OS partition was cloned from a backup I had made with Clonezilla.

    In fact, my previous setup had only one NVMe, and I wanted to switch to 2x NVMe in RAID1 to secure my OS...

    After creating the array, I cloned my backup back onto it... and this is when the ghost appeared...

    do you think there is a chance to clean this up?

    Dear all,

    I'm seeing strange behavior from the RAID1 array I built for my OS using two NVMe SSDs.

    OMV reports two versions of this array: one active (/dev/md127) and one in a "False" state (/dev/md127p1).

    mdstat only sees the active array

    fstab shows that root (/) is mounted on the false array (/dev/md127p1).
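
    For reference, roughly the checks behind these observations (device names match the post):

    Code
    cat /proc/mdstat              # only the active md127 appears here
    mdadm --detail /dev/md127     # state and members of the active array
    grep md127 /etc/fstab         # shows root (/) on /dev/md127p1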


    and here is the lsblk output:


    any ideas on how to clean up this setup?

    thank you

    JR

    I've applied the tip...

    I can't test it right now, since the problem only occurs when applying an OMV update... it looks like it is related to Salt changes...

    I'll let you know with the next update...

    What Linux OS are you using?

    What version of OMV are you using?

    Are you using IPv6 on your network?

    It's OMV6, so Debian 11 (the install was done with the OMV6 ISO).

    I don't use/need IPv6.