How to set up NVIDIA in a Plex Docker container for hardware transcoding?

  • Code
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 440.64       Driver Version: 440.64       CUDA Version: 10.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 760     Off  | 00000000:01:00.0 N/A |                  N/A |
    |  0%   35C    P0    N/A /  N/A |      0MiB /  1999MiB |     N/A      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0                    Not Supported                                       |
    +-----------------------------------------------------------------------------+



    Not sure why it's saying not supported?

  • I think your OMV + Docker + NVIDIA video card system is ready for hardware transcoding, but....


    Quote

    Not sure why it's saying not supported?

    What video or movie do you want to transcode, H.264 or HEVC? Your video card (GTX 760) only supports transcoding up to H.264.


    Read these articles:

    Using Hardware-Accelerated Streaming

    NVIDIA Hardware Transcoding Calculator for Plex

  • Just tried an H.264 movie and I'm still not getting the (hw) indication.


    Edit


    I forgot to mention there is an image in Docker for NVIDIA, but it won't run, so that might be why it's not working.

  • Please see if the hardware transcoding works with Jellyfin.


    Example:


  • I'm not sure how to set it up, as none of my hard drives are shown in Jellyfin. But I do think the transcoding is working, because my CPU is not touched while transcoding. It still does not show HW transcoding, so I don't know why. Thanks for all the help along the way. I'm just curious why the NVIDIA Docker image keeps replicating; I have 2 unusable Docker images of it in my Docker.

  • Just noticed that whenever Jellyfin is running, Plex will hardware-decode, as the CPU stays under 20% activity. When Jellyfin is closed, it hits the CPU hard.

  • Quote

    I'm just curious why the NVIDIA Docker image keeps replicating; I have 2 unusable Docker images of it in my Docker.

    When you run the docker run --gpus all nvidia/cuda:10.0-base nvidia-smi command, it downloads the "nvidia/cuda:10.0-base" image and uses it to test the nvidia-container-toolkit and nvidia-container-runtime, so you can safely delete the leftover containers and then the "nvidia/cuda:10.0-base" image.
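    The cleanup described above can be sketched with plain docker commands, assuming the image name from the test command (the exited test containers show up in docker ps -a):

```shell
# List the stopped containers left over from the nvidia-smi test runs
docker ps -a --filter ancestor=nvidia/cuda:10.0-base

# Remove every container based on that image, then the image itself
docker rm $(docker ps -aq --filter ancestor=nvidia/cuda:10.0-base)
docker rmi nvidia/cuda:10.0-base
```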


    Quote

    It still does not show HW transcoding so I don't know why?

    In nvidia-smi: I found this and this (possibly with the solution; I can't try it because nvidia-smi works well for me).

    In Plex: if I start hardware transcoding in Plex, I get the (hw) indication after about 7 seconds.


    Quote

    Just noticed that whenever Jellyfin is running, Plex will hardware-decode, as the CPU stays under 20% activity. When Jellyfin is closed, it hits the CPU hard.

    I just watched it with the Plex + HandBrake duo, but CPU usage didn't jump when I stopped HandBrake (even though HandBrake only encodes using the NVIDIA card). (For me, at the moment, Jellyfin does not want to use the NVIDIA card for hardware transcoding.)

  • I followed this guide before with OMV 4 and it worked. I am now trying to just run patch.sh, but OMV keeps saying "No such file or directory". I know it's there; I even made a new directory and it still can't find it. I gave my account permission to read and write to all the folders I want to access, and I am able to read and write to the disks, so I don't know what is going on.

  • I don't know what the problem might be for you; I was able to install nvml_fix and nvidia-patch without error.

    Install nvml_fix (NVIDIA Linux Graphics Driver 440.64):

    Code
    apt install -y git
    git clone https://github.com/CFSworks/nvml_fix.git
    cd nvml_fix
    make TARGET_VER=440.64
    sudo dpkg-divert --add --local --divert /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1.orig --rename /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
    sudo make install TARGET_VER=440.64 libdir=/usr/lib/x86_64-linux-gnu

    before_nvml_fix_440.64.txt  after_nvml_fix_440.64.txt


    Then I installed the latest NVIDIA driver (NVIDIA Linux Graphics Driver 440.82). (If you are running a container in Docker that uses the NVIDIA card, e.g. Plex, stop Docker before installing the newest driver.) Then I reinstalled nvml_fix:


    Code
    cd nvml_fix
    make TARGET_VER=440.82
    sudo dpkg-divert --add --local --divert /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1.orig --rename /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
    sudo make install TARGET_VER=440.82 libdir=/usr/lib/x86_64-linux-gnu

    before_nvml_fix_440.82.txt  after_nvml_fix_440.82.txt


    Install nvidia-patch:

    Code
    git clone https://github.com/keylase/nvidia-patch.git
    cd nvidia-patch
    bash ./patch.sh

  • @tama777, I've been following this and other resources to try to get everything correct (an NVIDIA GT 740 working in a Plex container via Portainer on OMV 5), and I may not be understanding something. I can get to the same point as mikedurp above: nvidia-smi works on both the host and within the test Docker container using NVIDIA driver 440.82, the latest supported by nvml_fix. However, when I attempt to apply the nvml_fix, verbatim per your code above, it breaks nvidia-smi inside Docker.


    One preface question: even without the nvml fix, I still wasn't able to select the nvidia runtime in the Plex container in Portainer. Is Portainer a problem? Should I not be using it in the first place?


    Hoping you can help me out here.



    Here's my config.toml:

  • Great googly moogly, I had a major unrelated issue that I just solved. All of my volumes were mounted in OMV with the noexec option, which is what was breaking all transcoding in Plex; I mistakenly thought it was GPU related. Now that my CPU can transcode, I'm back to the same problem. I'm going to retry all driver installs now that my volume exec flags are correct.


    If anyone's frustrating Linux ignorance is as great as mine, the fix for zero Plex playback is:


    1. Find the fstab entries with noexec that you don't want: in /etc/openmediavault/config.xml, look at the mntent entries. Remove the noexec directive and run:

    Code
    omv-salt deploy run fstab
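    To confirm the fix took effect, a quick sketch that checks the active mount options (the mount point below is a placeholder; use your real data volume, e.g. /srv/dev-disk-by-label-xxxx):

```shell
# Check whether a mount still carries the noexec flag
MNT=/   # placeholder: replace with your data volume mount point
if findmnt -no OPTIONS "$MNT" | grep -qw noexec; then
    echo "$MNT is still mounted noexec"
else
    echo "$MNT allows exec"
fi
```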
  • Well, 24 hours of misery later, and a missing comma was my problem. My daemon.json had entries for both data-root and the nvidia runtime, but I didn't separate the entries with a comma.


    Code
    {
      "data-root": "/srv/dev-disk-by-label-2zrth/containers",
      "runtimes": {
        "nvidia": {
          "path": "/usr/bin/nvidia-container-runtime",
          "runtimeArgs": []
        }
      }
    }
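    Since the failure mode was a single missing comma, it can help to run daemon.json through any JSON parser before restarting Docker. A minimal sketch (it writes a sample copy to /tmp for illustration; on a real system point the validator at /etc/docker/daemon.json instead):

```shell
# Write a sample daemon.json to a temp file for illustration only
cat > /tmp/daemon.json <<'EOF'
{
  "data-root": "/srv/dev-disk-by-label-2zrth/containers",
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
EOF
# A missing comma makes this fail with a parse error and a line number
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json is valid JSON"
```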
  • Hi tama777,


    Sorry for reviving this thread, but I'm having the same issue with getting HW transcoding working. This was the most useful guide I've found on this specific issue so far.


    I have installed the latest nvidia driver 455.28 which is reflected by nvidia-smi:


    I've also followed your instructions above and have installed the nvidia container toolkit and runtime.


    When I attempt to create a tdarr Docker container with the following parameters:


    Code
    -e "NVIDIA_DRIVER_CAPABILITIES=all" \
    -e "NVIDIA_VISIBLE_DEVICES=all" \
    --gpus=all \

    I end up getting this error message:

    Code
    docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown.
    ERRO[0000] error waiting for container: context canceled

    Any ideas? Thanks so much!


    EDIT: Never mind, I managed to find the solution. I needed to add .real to the end of the ldconfig path (/sbin/ldconfig.real) in /etc/nvidia-container-runtime/config.toml.
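    For reference, the corrected line typically looks like the snippet below (a sketch of the relevant key only; the @ prefix is how the toolkit marks a path on the host, and Debian needs the .real suffix because its /sbin/ldconfig is a wrapper script):

```toml
# /etc/nvidia-container-runtime/config.toml (relevant line only)
ldconfig = "@/sbin/ldconfig.real"
```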

  • Hello everyone, during the installation I encountered this problem:



    Anyone have any idea how to solve it? 😁

  • Here is what it outputs:

    Code
    root@YannickServer:~# sudo dpkg --configure –a
    dpkg: error: --configure needs a valid package name but '–a' is not: illegal package name in specifier '–a': must start with an alphanumeric character
    Use 'dpkg --help' for help with installing and uninstalling packages [*];
    Use 'apt' or 'aptitude' to manage packages in a more user-friendly way;
    Use 'dpkg -Dhelp' for a list of debug flag values;
    Use 'dpkg --force-help' to see the list of force options;
    Use 'dpkg-deb --help' for help on handling *.deb files;
    Options marked with [*] produce a lot of output: pipe them through 'less' or 'more'.
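    The dpkg error above comes from the dash itself: the command was typed (or pasted) with an en dash "–" (U+2013) instead of the ASCII hyphen "-", so dpkg treats "–a" as a package name. A quick way to see the difference at a byte level, and the command to retype:

```shell
# The en dash is 3 bytes in UTF-8; the ASCII hyphen is 1 byte
printf '%s' '–' | wc -c    # en dash (U+2013)
printf '%s' '-' | wc -c    # ASCII hyphen (U+002D)

# Retype the original command by hand with plain hyphens:
#   sudo dpkg --configure -a
```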
  • For my GPU, I have to install the nvidia-legacy-340xx-driver.

    I purged my system, then launched the driver installation with:

    Code
    apt install -t buster-backports nvidia-legacy-340xx-driver

    The installation went well, then I did:

    Code
    apt install -t buster-backports nvidia-xconfig
    sudo nvidia-xconfig

    After a reboot, when I want to check that everything is good with:

    Code
    nvtop
    or
    watch -d -n 0.5 nvidia-smi

    It says:


    Code
    Failed to initialize NVML: Driver/library version mismatch


    Have you got an idea ? 🤔

  • I have this kernel

    5.8.0-0.bpo.2-amd64


    Do you think the 450 driver is compatible with my GPU?

    Because when I go to this site, it says that my GPU is only compatible with the 340 driver:

    http://us.download.nvidia.com/…EADME/supportedchips.html


    And


    Code
    Detected NVIDIA GPUs:
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation G92 [GeForce GTS 250] [10de:0615] (rev a2)
    Checking card: NVIDIA Corporation G92 [GeForce GTS 250] (rev a2)
    Your card is only supported up to the 340 legacy drivers series.
    It is recommended to install the
    nvidia-legacy-340xx-driver
    package.


    And when I install the 450 driver, I get:

    Code
    NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
