Posts by chris_kmn

    wtf, are you serious? That's a serious bummer.


    After further testing it looks like only the container that was spun up last has access to the GPU; the others don't. So far I can occasionally get HW acceleration for Plex and Ollama, but it's pretty unpredictable which one can actually use it. And on top of that I wanted to play around with Fooocus image generation...


    So I guess my only option is to move my GPU apps to separate LXC containers, as there seems to be a way to share the GPU between multiple LXCs.
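
    From what I've read, a minimal sketch of the passthrough config on a Proxmox host looks roughly like this (untested on my side; <ID> is the container id, and the device major number should be verified with ls -l /dev/nvidia* on the host):

    Code
    # sketch: append nvidia passthrough entries to the LXC config on the host
    cat >> /etc/pve/lxc/<ID>.conf <<'EOF'
    lxc.cgroup2.devices.allow: c 195:* rwm
    lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
    lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
    EOF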

    I'm not 100% sure, but I can only run one container with one GPU, and I've read about that somewhere, so it seems to be valid.


    Maybe there are ways to overcome that issue, but I don't know them. The strange thing on my machine is that Plex is running together with Tdarr, using the same GPU, but Plex and Handbrake are not able to use the same GPU. So I'm running Handbrake on the integrated GPU (Intel) and Plex on the nvidia card.


    It might be connected to the nvidia container toolkit. It passes the nvidia driver, which lives outside of the container, through to the docker container. There might be ways to install the nvidia driver inside the container instead, but I've never done that.
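
    If it helps, wiring the toolkit into docker is just two commands (nvidia-ctk ships with the nvidia-container-toolkit package):

    Code
    # register the nvidia runtime in /etc/docker/daemon.json, then restart docker
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker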

    OK, I spoke too soon. The drivers are installed, and the GPU should be visible inside docker containers, as confirmed by

    Code
    docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

    I have changed the Plex and Immich settings to use HW accel, but it does not work. Neither Plex nor Immich uses the HW acceleration... what else could I have missed?

    Hard to say without any information about your installation. In general it is only possible to use one GPU per container. You cannot use the same GPU in multiple containers.
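
    A first thing I would check is whether the container itself sees the GPU at all (replace plex with your container's name):

    Code
    docker exec -it plex nvidia-smi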


    HW transcoding will only be used if the video's codec can be processed by your GPU. Which graphics card are you using?

    Cap10Canuck:

    If you really want to use such an old graphics card (in my opinion there won't be a big benefit) you can follow these instructions to install the correct nvidia driver:

    NvidiaGraphicsDrivers - Debian Wiki


    Then continue with the tutorial after the nvidia driver part. But I've never tried using the legacy drivers.


    To me it would make more sense to get a used nvidia Quadro P600 or P1000, which consumes way less power and has way more video performance.

    I haven't had any issues with upgrading to OMV7, but I followed a tutorial for upgrading from 6 -> 7 that was posted somewhere here in the forum.


    And yes, omv7 with debian 12 uses nvidia 5xx drivers.

    Hm, I haven't run into that issue so far.


    Does it work if you leave out the xconfig part?


    Or maybe it helps if you uninstall nvidia-xconfig with the purge parameter and reinstall it:


    Code
    apt-get remove --purge nvidia-xconfig
    apt-get install nvidia-xconfig

    Switching to the PVE kernel allows me to install the nvidia drivers without errors, but I run into issues with sudo nvidia-xconfig: it does not find the GPU,


    although in lspci I can clearly see the device:


    Code
    02:00.0 3D controller: NVIDIA Corporation GP104GL [Tesla P4] (rev a1)

    Which kernel are you using on OMV7? Do you use the proxmox kernel? And did you try to remove old nvidia packages? It seems to me that some older dependencies are still around.
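
    A quick way to see which nvidia packages are still installed:

    Code
    dpkg -l | grep -i nvidia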

    nsas02 you should stick to the versions that are provided by the repositories. OMV6 / Debian 11 comes with nvidia 4xx drivers, OMV7 / Debian 12 comes with nvidia 5xx drivers. My tutorial is the same for the 4xx and 5xx drivers.

    OMV6 wants to install a bunch of nvidia package updates today. I recall this breaks any custom installation of nvidia hardware accelerated drivers. Is there a list of packages we need to keep back?

    I cannot confirm that the update breaks earlier installations, as long as you have installed the drivers from the Debian repository.
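
    If you want to pin them anyway to be safe, apt-mark can hold packages. The exact package names depend on your installation (check with dpkg -l | grep -i nvidia); nvidia-driver and nvidia-kernel-dkms are just the usual Debian ones:

    Code
    sudo apt-mark hold nvidia-driver nvidia-kernel-dkms
    # release them again later with:
    sudo apt-mark unhold nvidia-driver nvidia-kernel-dkms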

    I started from a fresh install of OMV 6, and the GPU did show up in my Emby server under transcoding. Just last night, when I was checking on things with my Emby server, I went to transcoding and didn't see my GPU listed there anymore.

    The question is whether the "fresh install" already has docker compose. That's why I'm asking if you upgraded from the latest OMV6 version.

    nvidia-smi shows that the driver is installed and working.


    Did you have the latest version of OMV 6 before you upgraded?


    And did the video transcode before the upgrade, or is it a different video? Your "old" GTX 1050 doesn't support many codecs...


    And you could check if OMV 7 is offering an update of the nvidia driver, as Debian 12 supports the nvidia 5xx driver versions.
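
    From a shell you can check that quickly:

    Code
    # lists pending nvidia updates, if any:
    apt list --upgradable 2>/dev/null | grep -i nvidia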

    I'm using the linuxserver plex container:


    linuxserver/plex
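
    In case it helps, this is roughly the shape of the run command for that image with the nvidia runtime (a sketch only; PUID/PGID and the paths are placeholders you have to adapt, and the NVIDIA_* variables come from the container toolkit):

    Code
    docker run -d --name=plex \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      -e PUID=1000 -e PGID=1000 \
      -e VERSION=docker \
      --network=host \
      -v /path/to/plex/config:/config \
      -v /path/to/media:/media \
      lscr.io/linuxserver/plex:latest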


    BUT, officially the 5xx nvidia drivers are not supported/released for Debian. I'm sticking with the 4xx drivers.


    chente


    If I got it right, you want me to add text to this passage:


    …………………………………………………………………………………………………………………………………………………

    Docker

    • In this section you can define the docker installation folder. This is useful for getting docker off the OMV system disk. The default path is /var/lib/docker.
    • In the Docker Storage field define the path of the folder you want to use to install Docker.
      • Avoid using symlinks in this field.
    • Click the Reinstall Docker button. Docker is now installed in the new path.

    …………………………………………………………………………………………………………………………………………………



    My proposal would be as follows:

    • If you are using nvidia drivers in your docker containers (e.g. for Plex or Jellyfin hardware transcoding), you have to leave the path blank. Otherwise the nvidia driver settings get corrupted each time you reconfigure the OMV settings.
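
    If I understand the cause correctly, both settings end up in /etc/docker/daemon.json: the storage path as docker's data-root, and the runtime entry that the nvidia container toolkit writes. Rewriting the file for one setting can drop the other. A sketch of what an intact combined file looks like (the storage path is a placeholder):

    Code
    cat /etc/docker/daemon.json
    # {
    #   "data-root": "/path/to/docker-storage",
    #   "runtimes": {
    #     "nvidia": { "path": "nvidia-container-runtime", "runtimeArgs": [] }
    #   }
    # }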


    Do you think that is sufficient? You could of course add a link to my tutorial.


    Best, Chris

    chente:


    I'm going to have a look into it. I think I will be able to provide a text. But I'm not a native English speaker, so I have to make sure I understand everything correctly and get the message right.


    Give me a few days and I'll make a proposal.


    Cheers, Chris