How to set up NVIDIA in a Plex docker for hardware transcoding?
-
- OMV 4.x
- Kreavan
-
-
Code
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64       Driver Version: 440.64       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 760     Off  | 00000000:01:00.0 N/A |                  N/A |
|  0%   35C    P0    N/A /  N/A |      0MiB /  1999MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+
Not sure why it's saying "Not Supported"?
-
I think your OMV + Docker + NVIDIA video card system is ready for hardware transcoding, but...
Quote: Not sure why it's saying not supported?
What video or movie do you want to transcode, H.264 or HEVC or something else? Because your video card (GTX 760) only supports transcoding up to H.264.
Read these articles:
-
Just tried an H.264 movie and it's still not getting the (hw) indication.
Edit
I forgot to mention there is an image in docker for nvidia, but it won't run, so that might be why it's not working.
-
Please see if the hardware transcoding works with Jellyfin.
Example:
Code
docker create \
  --name=Jellyfin \
  -e PUID=1000 \
  -e PGID=100 \
  -e TZ=Europe/London \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -p 8096:8096 \
  -v /srv/dev-disk-by-label-Docker/AppData/Jellyfin:/config \
  -v /srv/dev-disk-by-label-Docker/AppData/JellyfinCache:/transcode \
  -v /srv/dev-disk-by-label-WD6TB/Movies:/data/movies \
  --restart unless-stopped \
  linuxserver/jellyfin
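Note that `docker create` only creates the container; it still has to be started. A quick sketch of bringing it up and confirming the card is visible inside it (standard Docker CLI commands; this assumes the create command above succeeded and the nvidia runtime is configured):

```shell
# Start the container created above and follow its log output.
docker start Jellyfin
docker logs -f Jellyfin
# If the nvidia runtime is wired up correctly, the card should also
# show up from inside the container:
docker exec Jellyfin nvidia-smi
```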
-
I'm not sure how to set it up, as none of my hard drives are shown in Jellyfin. But I do think the transcoding is working, as my CPU is not touched while transcoding. It still does not show HW transcoding, so I don't know why? Thanks for all the help along the way. I'm just curious why the NVIDIA docker keeps replicating; I have 2 unusable docker images of it in my docker.
-
Just noticed that whenever Jellyfin is running, Plex will hw decode stuff, as the CPU stays under 20% activity. When Jellyfin is closed, it hits the CPU hard.
-
Quote:
I'm just curious why the NVIDIA docker keeps replicating; I have 2 unusable docker images of it in my docker.
When you run the docker run --gpus all nvidia/cuda:10.0-base nvidia-smi command, it downloads the "nvidia/cuda:10.0-base" image, which it uses to test the nvidia-container-toolkit and nvidia-container-runtime, so you can safely delete the containers and then the "nvidia/cuda:10.0-base" image.
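The cleanup itself can be sketched like this (standard Docker CLI; `<container-id>` is a placeholder for whatever IDs the first command lists):

```shell
# List the exited containers left over from the nvidia-smi test runs.
docker ps -a --filter "ancestor=nvidia/cuda:10.0-base"
# Remove a leftover container by its ID or name...
docker rm <container-id>
# ...then remove the test image itself.
docker rmi nvidia/cuda:10.0-base
```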
Quote: It still does not show HW transcoding, so I don't know why?
In nvidia-smi: I found this and this (with the solution? I can't try it because nvidia-smi works well for me).
In Plex: If I start hardware transcoding in Plex, I get the (hw) indication after about 7 seconds.
Quote: Just noticed whenever Jellyfin is running Plex will hw decode stuff as the CPU stays under 20% activity. Now when Jellyfin is closed it hits the CPU hard.
I watched the same thing with a Plex + HandBrake duo, but CPU usage didn't jump when I stopped HandBrake (even though HandBrake only encodes using the nvidia card). (For me, at the moment, Jellyfin does not want to use the nvidia card for hardware transcoding.)
-
I followed this guide before with OMV 4 and it worked. I am now trying to just install the patch.sh, but OMV keeps saying "No such file or directory". I know it's there. I even made a new directory, and it still can't find it. I gave my account permission to read and write to all the folders I want to access, and I am able to read and write to the disks, so I don't know what is going on.
-
I don't know what the problem might be for you; I was able to install the nvml_fix and the nvidia-patch without error.
Install nvml_fix (NVIDIA Linux Graphics Driver 440.64):
apt install -y git
git clone https://github.com/CFSworks/nvml_fix.git
cd nvml_fix
make TARGET_VER=440.64
sudo dpkg-divert --add --local --divert /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1.orig --rename /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
sudo make install TARGET_VER=440.64 libdir=/usr/lib/x86_64-linux-gnu
before_nvml_fix_440.64.txt after_nvml_fix_440.64.txt
Then I installed the latest nvidia driver (NVIDIA Linux Graphics Driver 440.82) and reinstalled the nvml_fix (if a container in Docker, e.g. Plex, is using the nvidia card, stop Docker before installing the newest nvidia driver):
cd nvml_fix
make TARGET_VER=440.82
sudo dpkg-divert --add --local --divert /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1.orig --rename /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
sudo make install TARGET_VER=440.82 libdir=/usr/lib/x86_64-linux-gnu
before_nvml_fix_440.82.txt after_nvml_fix_440.82.txt
Install nvidia-patch:
git clone https://github.com/keylase/nvidia-patch.git
cd nvidia-patch
bash ./patch.sh
-
@tama777, I've been following this and other resources to try to get everything correct (an nvidia GT 740 working in a Plex container via Portainer on OMV 5), and I may not be understanding something. I can get to the same point as mikedurp above: nvidia-smi works on both the host and within the test docker using nvidia driver 440.82, the latest supported by nvml_fix. However, when I attempt to apply the nvml_fix, verbatim per your code above, it breaks nvidia-smi inside docker.
One preface question: even without the nvml fix, I still wasn't able to select the nvidia runtime for the Plex container in Portainer. Is Portainer the problem; should I not be using it in the first place?
Hoping you can help me out here.
Code
root@omv:~# nvidia-smi
Sat Jul 18 13:52:14 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 740      Off  | 00000000:01:00.0 N/A |                  N/A |
| 30%   50C    P0    N/A /  N/A |      0MiB /  1999MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+

root@omv:~# docker run --gpus all nvidia/cuda:10.0-base nvidia-smi
Sat Jul 18 18:52:21 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 740      Off  | 00000000:01:00.0 N/A |                  N/A |
| 27%   50C    P0    N/A /  N/A |      0MiB /  1999MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+

root@omv:~# ls
drivers  nvml_fix  openmediavault-omvextrasorg_latest_all5.deb
root@omv:~# cd nvml_fix/
root@omv:~/nvml_fix# ls
empty.c  libnvidia-ml.so.1  libnvidia-ml.so.440.100  Makefile  nvidia-patch
nvml_fix.c  nvml_v3.h  nvml_v9.h  README.md
root@omv:~/nvml_fix# make TARGET_VER=440.82
gcc -shared -fPIC -s empty.c -o libnvidia-ml.so.440.82
gcc -Wl,--no-as-needed -shared -fPIC -s -o libnvidia-ml.so.1 -DNVML_PATCH_440 -DNVML_VERSION=\"440.82\" libnvidia-ml.so.440.82 nvml_fix.c
root@omv:~/nvml_fix# sudo dpkg-divert --add --local --divert /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1.orig --rename /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
Leaving 'local diversion of /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1 to /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1.orig'
root@omv:~/nvml_fix# sudo make install TARGET_VER=440.82 libdir=/usr/lib/x86_64-linux-gnu
/usr/bin/install -D -Dm755 libnvidia-ml.so.1 /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
root@omv:~/nvml_fix# nvidia-smi
Sat Jul 18 13:58:38 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82       Driver Version: 440.82       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 740      Off  | 00000000:01:00.0 Off |                  N/A |
| 18%   50C    P0    N/A /  N/A |      0MiB /  1999MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

root@omv:~/nvml_fix# docker run --gpus all nvidia/cuda:10.0-base nvidia-smi
NVIDIA-SMI couldn't find libnvidia-ml.so library in your system. Please make sure
that the NVIDIA Display Driver is properly installed and present in your system.
Please also try adding directory that contains libnvidia-ml.so to your system PATH.
here's my config.toml
disable-require = false
#swarm-resource = "DOCKER_RESOURCE_GPU"
[nvidia-container-cli]
#root = "/run/nvidia/driver"
#path = "/usr/bin/nvidia-container-cli"
environment = []
#debug = "/var/log/nvidia-container-toolkit.log"
#ldcache = "/etc/ld.so.cache"
load-kmods = true
#no-cgroups = false
#user = "root:video"
ldconfig = "/sbin/ldconfig"
#alpha-merge-visible-devices-envvars = false
[nvidia-container-runtime]
#debug = "/var/log/nvidia-container-runtime.log"
-
Great googly moogly, I had a major unrelated issue that I just solved. All of my volumes were mounted in OMV with the noexec option, which is what was breaking all transcoding in Plex; I mistakenly thought it was GPU related. Now that my CPU can transcode, I'm back to the same problem. I'm going to retry all the driver installs now that my volume exec flags are correct.
If anyone's frustrating linux ignorance is as great as mine in the future, the fix to zero Plex playback is:
1. Find the fstab entries you don't want with noexec: in /etc/openmediavault/config.xml, look at the mntent entries. Remove the noexec directive and run
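A quick way to check whether this is biting you (findmnt is part of util-linux; the mount point in the comment is hypothetical):

```shell
# List mounted filesystems that carry the noexec flag - these will refuse
# to execute Plex's transcoder binary.
findmnt -O noexec
# Temporary workaround until the OMV config is fixed (hypothetical mount point):
# mount -o remount,exec /srv/dev-disk-by-label-Docker
```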
-
Well, 24 hours of misery later, and a missing comma was my problem. My daemon.json had an entry for both data-root and the nvidia runtimes, but I didn't separate the entries with a comma.
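For anyone hitting the same thing, a sketch of what the combined file has to look like. The data-root path here is a made-up example; the runtimes entry is the usual nvidia-container-runtime registration. The comma after the data-root line is the part that was missing:

```shell
# Write a combined daemon.json to a temp file and syntax-check it before
# copying it over /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
    "data-root": "/srv/dev-disk-by-label-Docker/docker",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
# Any JSON parser will catch a missing comma; python3 ships one.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid"
```

Restart the Docker daemon after replacing the real file so the change takes effect.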
-
Hi tama777,
Sorry for reviving this thread, but I'm having the same issue with getting HW transcoding working. This was the most useful guide I've found on this specific issue so far.
I have installed the latest nvidia driver 455.28 which is reflected by nvidia-smi:
Code
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.28       Driver Version: 455.28       CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Quadro P400         Off  | 00000000:01:00.0 Off |                  N/A |
| 22%   42C    P0    N/A /  N/A |      0MiB /  1999MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
I've also followed your instructions above and have installed the nvidia container toolkit and runtime.
When I attempt to create a tdarr docker with the following parameters:
I end up getting this error message:
Code
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled
Any ideas? Thanks so much!
EDIT: Never mind, I managed to find the solution. I appended .real to the ldconfig path, making it /sbin/ldconfig.real, in /etc/nvidia-container-runtime/config.toml.
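In other words, the fix is one line in config.toml (on Debian, /sbin/ldconfig is a wrapper script and the actual binary is ldconfig.real, which is why the container tooling fails with error 127 when pointed at the wrapper). A demonstration on a scratch copy of the relevant line; apply the same sed to /etc/nvidia-container-runtime/config.toml on the real system:

```shell
# Reproduce the one relevant line in a scratch file...
printf 'ldconfig = "/sbin/ldconfig"\n' > /tmp/config.toml
# ...and point it at the real ldconfig binary instead of the wrapper.
sed -i 's|"/sbin/ldconfig"|"/sbin/ldconfig.real"|' /tmp/config.toml
cat /tmp/config.toml   # -> ldconfig = "/sbin/ldconfig.real"
```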
-
Hello everyone, during the installation I encountered this problem:
Code
Done.
Loading new nvidia-current-450.80.02 DKMS files...
Building for 5.8.0-0.bpo.2-amd64
Building initial module for 5.8.0-0.bpo.2-amd64
Error! Bad return status for module build on kernel: 5.8.0-0.bpo.2-amd64 (x86_64)
Consult /var/lib/dkms/nvidia-current/450.80.02/build/make.log for more information.
dpkg: error processing package nvidia-kernel-dkms (--configure):
 installed nvidia-kernel-dkms package post-installation script subprocess returned error exit status 10
dpkg: dependency problems prevent configuration of nvidia-driver:
 nvidia-driver depends on nvidia-kernel-dkms (= 450.80.02-1~bpo10+1) | nvidia-kernel-450.80.02; however:
  Package nvidia-kernel-dkms is not configured yet.
  Package nvidia-kernel-450.80.02 is not installed.
  Package nvidia-kernel-dkms which provides nvidia-kernel-450.80.02 is not configured yet.
dpkg: error processing package nvidia-driver (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 nvidia-kernel-dkms
 nvidia-driver
E: Sub-process /usr/bin/dpkg returned an error code (1)
Anyone have any idea how to solve it? 😁
-
Here is what it puts:
Code
root@YannickServer:~# sudo dpkg --configure –a
dpkg: error: --configure needs a valid package name; '–a' is not: illegal package name in specifier '–a': must start with an alphanumeric character

Use 'dpkg --help' for help about installing and deinstalling packages [*];
Use 'apt' or 'aptitude' to manage packages in a more user-friendly way;
Use 'dpkg -Dhelp' for a list of debug flag values;
Use 'dpkg --force-help' to see the list of forcing options;
Use 'dpkg-deb --help' for help on manipulating *.deb files;

Options marked [*] produce a lot of output - pipe it through 'less' or 'more'!
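The problem here is almost certainly the dash itself: the command contains a typographic en-dash (–, U+2013) instead of an ASCII hyphen (-), which commonly happens when copying commands from a web page or word processor. Since '–a' doesn't start with an ASCII '-', dpkg treats it as a package name and rejects it. The byte difference is easy to see (od is from coreutils):

```shell
# Compare the bytes of the en-dash version and the correct ASCII version.
printf '%s' '–a' | od -An -tx1   # en-dash: e2 80 93 61
printf '%s' '-a' | od -An -tx1   # hyphen:  2d 61
# The command dpkg expects, typed with a plain hyphen:
# sudo dpkg --configure -a
```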
-
It works now!!!
Is there anything other than my GPU driver that I need to change from your thread to make it work?
I use an NVIDIA GeForce GTS 250.
-
For my GPU, I have to install the nvidia-legacy-340xx-driver.
I purged my system, then I launch the driver installation with:
The installation goes well, then I do:
After that I reboot, and when I want to check that everything is good with:
it says:
Have you got an idea? 🤔
-
I have this kernel:
5.8.0-0.bpo.2-amd64
Do you think the 450 driver is compatible with my GPU?
Because when I go to this site, it says that my GPU is only compatible with the 340 driver:
http://us.download.nvidia.com/…EADME/supportedchips.html
And
Code
Detected NVIDIA GPUs:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation G92 [GeForce GTS 250] [10de:0615] (rev a2)
Checking card:  NVIDIA Corporation G92 [GeForce GTS 250] (rev a2)
Your card is only supported up to the 340 legacy drivers series.
It is recommended to install the nvidia-legacy-340xx-driver package.
And when I install the 450 driver:
Code
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
-
I have the possibility to get 2 NVIDIA Quadro P600 cards...
Do you think it is possible to use both with Plex?
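Both cards can at least be exposed to containers; whether Plex itself balances transcodes across two GPUs is a separate question (it picks a GPU on its own rather than load-balancing). A sketch of selecting devices with the standard Docker --gpus syntax, assuming the nvidia container runtime from earlier in this thread is installed:

```shell
# List the GPUs the driver sees, with their indices and UUIDs.
nvidia-smi -L
# Expose only the second card (index 1) to a test container:
docker run --rm --gpus '"device=1"' nvidia/cuda:10.0-base nvidia-smi
# Or expose both:
docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi
```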