Posts by molnart
-
How do I do that? I tried dkms autoinstall, update-initramfs -u, update-grub, even apt install --reinstall nvidia-driver, but nothing seems to work. If I need to purge the drivers or, even worse, restart OMV, I would rather pull that card out of the running system and throw it straight into the trash; there is no way I am restarting my system before it reaches 500 days of uptime.
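For completeness, a sketch of what an in-place reload would look like, assuming DKMS-managed drivers; rmmod only succeeds if nothing (a container, nvidia-persistenced) still holds the device:

```shell
# Rebuild the DKMS module for the running kernel and reload it in place.
sudo dkms autoinstall -k "$(uname -r)"
sudo rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia
sudo modprobe nvidia nvidia_uvm
nvidia-smi   # should list the GPU again if the reload worked
```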
-
It seems that after a kernel update my NVIDIA GPU is just "gone". All my containers using NVIDIA refuse to start, and neither nvtop nor nvidia-smi shows any GPU in the system, even though lspci still lists it.
-
Thanks a lot, it was pretty easy. After dumping the RRD file to XML and finding the correct section, it contains the human-readable timestamp and the occupied space in bytes.
Apparently I have free capacity for two years and a few months.
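For anyone searching later: instead of dumping to XML, rrdtool can emit the consolidated values directly. A sketch, assuming the collectd layout OMV uses under /var/lib/rrdcached/db (the exact directory and file name differ per disk):

```shell
# One averaged data point per day for the last 30 days; the .rrd path below
# is an example and will vary with the disk's mount point label.
rrdtool fetch \
  /var/lib/rrdcached/db/localhost/df-srv-dev-disk-by-label-data/df_complex-used.rrd \
  AVERAGE -r 86400 -s now-30d
```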
-
Where is the RRD database located on OMV?
-
That one is actually extremely simple: it is just free capacity / average daily delta, taken over a long enough period of time. The question is where I get those daily deltas.
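The estimate itself is one line of shell arithmetic; a sketch with made-up example numbers:

```shell
# Days until full = free bytes / average daily growth (example numbers).
free_bytes=2199023255552    # 2 TiB free
daily_delta=3221225472      # growing ~3 GiB per day
days_left=$(( free_bytes / daily_delta ))
echo "$days_left days (~$(( days_left / 365 )) years)"   # prints: 682 days (~1 years)
```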
-
OMV keeps pretty good track of disk utilization under Performance statistics, and I assume that data is fairly granular as well. There should be a way to use that information to estimate how long it will take to fill up my storage, so I can plan an expansion sufficiently in advance. Can somebody point me in the direction of which tools to use for that?
Unfortunately, the folder /var/lib/openmediavault/rrd seems to contain just a bunch of PNG images, and I cannot get any data out of those.
-
You can not use the same GPU in multiple containers.
Wtf, are you serious? That's a serious bummer.
After further testing it looks like only the container spun up last has access to the GPU; the others don't. So far I can occasionally get HW acceleration for Plex and Ollama, but it's pretty unpredictable which one can actually use it. And on top of that I wanted to play around with Fooocus image generation...
So I guess my only option is to move my GPU apps to separate LXC containers, as there seems to be a way to share the GPU between multiple LXCs.
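For reference, a sketch of how two containers would each request the same GPU via the NVIDIA container toolkit; image names are just examples, and whether concurrent use works smoothly depends on the applications:

```shell
# Both containers get the same physical GPU; the toolkit does not partition
# it, so concurrent jobs contend for VRAM and compute.
docker run -d --gpus all --name plex   plexinc/pms-docker
docker run -d --gpus all --name ollama ollama/ollama
docker exec ollama nvidia-smi   # each container should list the card
```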
-
OK, I spoke too soon. The drivers are installed and the GPU should be visible inside Docker containers, as confirmed by
I have changed the Plex and Immich settings to use HW acceleration, but it does not work; neither Plex nor Immich actually uses it... What else could I have missed?
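A hedged checklist of things commonly missed when the device is visible but apps will not use it; the default-runtime setting in /etc/docker/daemon.json in particular is easy to overlook:

```shell
# Is the nvidia runtime registered, and is it the default for containers
# that are not started with an explicit --gpus flag?
docker info --format '{{.Runtimes}} default={{.DefaultRuntime}}'
# A one-off container should see the card with full capabilities enabled:
docker run --rm --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all \
  nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```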
-
Finally I got this running. For me the steps were the following:
1) install proxmox kernel
3) install the drivers according to the instructions, but ignore the xconfig part
Now everything seems OK, but I could not try it, because in the meantime I moved all the containers that could utilize the GPU to a different VM, only to realize I can't pass the GPU through there: it sits in a single IOMMU group together with my HBA.
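Checking the grouping is straightforward; a sketch that prints every PCI device with its IOMMU group (requires IOMMU enabled in the kernel and firmware):

```shell
# Walk /sys/kernel/iommu_groups and show which PCI devices share a group;
# devices in the same group can only be passed through together.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  group=$(basename "$(dirname "$(dirname "$dev")")")
  printf 'group %s: %s\n' "$group" "$(lspci -nns "$(basename "$dev")")"
done | sort -V
```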
-
I had better stop playing with that, because those drivers somehow completely broke Docker and I had to restore my whole OMV installation from backup.
-
Switching to the PVE kernel allows the NVIDIA drivers to install without error, but I run into issues with sudo nvidia-xconfig: it does not find the GPU:
Code
$ sudo nvidia-xconfig
Using X configuration file: "/etc/X11/xorg.conf".
VALIDATION ERROR: Data incomplete in file /etc/X11/xorg.conf.
                  Device section "Device0" must have a Driver line.
WARNING: error opening libnvidia-cfg.so.1: libnvidia-cfg.so.1: cannot open shared object file: No such file or directory.
ERROR: Unable to find any GPUs in the system.
Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.nvidia-xconfig-original'
Backed up file '/etc/X11/xorg.conf' as '/etc/X11/xorg.conf.backup'
New X configuration file written to '/etc/X11/xorg.conf'
although lspci clearly shows the device:
02:00.0 3D controller: NVIDIA Corporation GP104GL [Tesla P4] (rev a1)
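Worth noting, as a hedged aside: nvidia-xconfig only generates an X11 configuration, which a headless card like the Tesla P4 in a NAS never uses, so it can simply be skipped. The driver state can be checked without it:

```shell
nvidia-smi           # should list the Tesla P4 if the kernel module is loaded
lsmod | grep nvidia  # confirm the nvidia modules are actually loaded
ls -l /dev/nvidia*   # device nodes created by the driver
```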
-
I moved to virtualized OMV in 2020 when I wanted to run a router on the same machine. First I passed through individual disks, then I moved to a different rig with an HBA. Both setups worked well.
EDIT: as some highlighted below, in disk passthrough mode OMV has no low-level access to the disks, e.g. SMART data cannot be retrieved from within OMV (though it can from the underlying host). Some file systems, such as ZFS, rely on this data to function reliably.
-
Yep, just updated to 7.0.2 and still get the same error. votdev, maybe you can look into this?
-
I am using the stock OMV7 kernel:
Linux omv6 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
I did perform apt purge *nvidia* several times, but it did not help.
(omv6 is just the hostname of my install; I am actually on 7.)
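One caveat with that purge: an unquoted *nvidia* glob is expanded by the shell against the current directory before apt ever sees it. A more deterministic sketch:

```shell
# apt-get treats quoted arguments containing regex characters as patterns,
# so this purges every installed package whose name starts with nvidia-.
sudo apt-get purge '^nvidia-.*' '^libnvidia-.*'
sudo apt-get autoremove --purge
sudo update-initramfs -u   # rebuild the initramfs without stale nvidia modules
```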
-
I am trying to install the NVIDIA drivers on OMV 7 according to your guide, but I am running into this error:
Code
dpkg: dependency problems prevent configuration of nvidia-driver:
 nvidia-driver depends on nvidia-kernel-dkms (= 525.147.05-4~deb12u1) | nvidia-kernel-525.147.05 | nvidia-open-kernel-525.147.05 | nvidia-open-kernel-525.147.05; however:
  Package nvidia-kernel-dkms is not configured yet.
  Package nvidia-kernel-525.147.05 is not installed.
  Package nvidia-kernel-dkms which provides nvidia-kernel-525.147.05 is not configured yet.
  Package nvidia-open-kernel-525.147.05 is not installed.
  Package nvidia-open-kernel-525.147.05 is not installed.
dpkg: error processing package nvidia-driver (--configure):
 dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.36-9+deb12u4) ...
Processing triggers for initramfs-tools (0.142) ...
update-initramfs: Generating /boot/initrd.img-6.1.0-18-amd64
Processing triggers for update-glx (1.2.2) ...
Processing triggers for glx-alternative-nvidia (1.2.2) ...
update-alternatives: using /usr/lib/nvidia to provide /usr/lib/glx (glx) in auto mode
Processing triggers for glx-alternative-mesa (1.2.2) ...
Processing triggers for libc-bin (2.36-9+deb12u4) ...
Processing triggers for initramfs-tools (0.142) ...
update-initramfs: Generating /boot/initrd.img-6.1.0-18-amd64
Errors were encountered while processing:
 nvidia-kernel-dkms
 nvidia-driver
I tried installing on a plain Debian VM and it went through without errors.
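"Not configured yet" on nvidia-kernel-dkms usually means the DKMS build itself failed, and the build log says why. A sketch of where to look (nvidia-current is the module name Debian's packaging uses; the pve-headers package is an assumption that applies when running the Proxmox kernel):

```shell
# Headers for the running kernel are the usual missing piece.
sudo apt-get install "linux-headers-$(uname -r)" || sudo apt-get install pve-headers
sudo dkms status                                   # which builds failed?
cat /var/lib/dkms/nvidia-current/*/build/make.log  # the actual compiler error
sudo dpkg --configure -a                           # retry once the build succeeds
```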
-
I wanted to add some new NFS shares and ran into this error when applying the settings:
It looks like it's something related to fstab. I checked the output of lsblk and the contents of /etc/fstab in case some disk setting had changed (I did not mess with them), but found no obvious reason for the error.
Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color quota 2>&1' with exit code '1': debian: Data failed to compile:
----------
    Rendering SLS 'base:omv.deploy.quota.default' failed: while constructing a mapping
  in "<unicode string>", line 43, column 1
found conflicting ID 'remove_quota_files'
  in "<unicode string>", line 65, column 1
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] retcode: 2
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] output:
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] retcode: 2
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] output:
[CRITICAL] Rendering SLS 'base:omv.deploy.quota.default' failed: while constructing a mapping
  in "<unicode string>", line 43, column 1
found conflicting ID 'remove_quota_files'
  in "<unicode string>", line 65, column 1
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color quota 2>&1' with exit code '1': debian: Data failed to compile:
----------
    Rendering SLS 'base:omv.deploy.quota.default' failed: while constructing a mapping
  in "<unicode string>", line 43, column 1
found conflicting ID 'remove_quota_files'
  in "<unicode string>", line 65, column 1
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] retcode: 2
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] output:
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] retcode: 2
[ERROR ] Command 'blkid' failed with return code: 2
[ERROR ] output:
[CRITICAL] Rendering SLS 'base:omv.deploy.quota.default' failed: while constructing a mapping
  in "<unicode string>", line 43, column 1
found conflicting ID 'remove_quota_files'
  in "<unicode string>", line 65, column 1 in /usr/share/php/openmediavault/system/process.inc:247
Stack trace:
#0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/config.inc(178): OMV\Engine\Module\ServiceAbstract->deploy()
#2 [internal function]: Engined\Rpc\Config->applyChanges()
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod()
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(622): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}()
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(146): OMV\Rpc\ServiceAbstract->execBgProc()
#7 /usr/share/openmediavault/engined/rpc/config.inc(199): OMV\Rpc\ServiceAbstract->callMethodBg()
#8 [internal function]: Engined\Rpc\Config->applyChangesBg()
#9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()
#11 /usr/sbin/omv-engined(535): OMV\Rpc\Rpc::call()
#12 {main}
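The "found conflicting ID" message is YAML-level: the same state ID appears twice in the rendered SLS. A sketch to confirm, assuming the state files live under /srv/salt/omv as omv-salt normally lays them out:

```shell
# Show every occurrence of the duplicated Salt state ID in the quota state;
# two hits at the top level would reproduce the renderer error.
grep -n 'remove_quota_files' /srv/salt/omv/deploy/quota/default.sls
```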
-
I am running around 60 containers, so simply too many to list, but generally they fall into the following categories:
Media & downloads management - Plex, the *arr stack, Transmission, JDownloader, Jackett, etc.
Home automation - Home Assistant, ESPHome
Photo & media management - Immich, Paperless-ngx
Monitoring & infrastructure - Grafana, InfluxDB, LibreNMS, UniFi Controller, PiAlert, etc.
Utilities & tools - Vaultwarden, Tandoor, Firefly, Stirling-PDF, etc.
-
the vast majority of people access the NAS over Wi-Fi or 1 GbE at best, and even a single spinning drive with a not-unrealistic "real world" 1500 to 1800 Mbps data rate is more than the 1000 Mbps of that 1 GbE connection.
That is true as long as you are making single-threaded transfers. In my case, while copying files to my NAS, the transfer speed regularly drops below 30 MB/s, because another process is writing to the disk at 20 MB/s, yet another is reading at 5 MB/s, and so on.
Actually, I am considering using a single SSD as a landing zone for everything that writes data to the NAS, with a nightly cron job that moves the SSD's contents to the individual drives, so I can read my normal data at regular speeds.
The amount of IO wait I see in the performance stats indicates there would indeed be a benefit from an SSD cache.
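A minimal sketch of that nightly flush, assuming an SSD landing zone mounted at one path and the data disk at another (both paths are examples):

```shell
#!/bin/sh
# Drain the SSD landing zone into the data disk: move files while
# preserving attributes, then prune the empty directory skeleton left behind.
SRC="${SRC:-/srv/ssd-landing}"
DST="${DST:-/srv/dev-disk-by-label-data}"
rsync -a --remove-source-files "$SRC"/ "$DST"/
find "$SRC" -mindepth 1 -type d -empty -delete
```

Scheduled from /etc/cron.d in the small hours (for example at 03:30), the bulk move never competes with daytime reads.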