Yeah, I understand, I just wanted to show that this exists in case someone is interested in it. I use it and it works flawlessly to set up connections between servers. The firewall can be configured online via ACLs. I don't combine it with the OMV plugin though, I just use it by itself via the CLI.
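For anyone curious what "just by itself via cli" looks like, here is a minimal sketch using the standard Tailscale CLI (the install script URL is the one from their docs; the peer IP is a placeholder — take a real one from `tailscale status`):

```shell
# Install the client and join the tailnet (run on each server):
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up              # prints a login URL to authenticate the node

# Verify the mesh:
tailscale status               # lists peers and their tailnet IPs
tailscale ip -4                # this node's tailnet IPv4 address
PEER=100.64.0.2                # placeholder -- use an IP from `tailscale status`
ping -c 3 "$PEER"
```

The ACLs themselves are edited in the Tailscale admin console (or via their API), not on the node.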
Posts by oopenmediavault
-
If it's new, you can also just send it back and get another one? I tend to avoid that, usually, but if nothing else helps...
-
Just want to point out that a pretty easy tool called Tailscale exists, which does all the work for you and connects servers via WireGuard.
-
I am using the WD Elements 12TB and I do have SMART via USB. Maybe it is not supported on the smaller ones.
You could try taking a look at the internal model; it is easy to remove the casing without damage. Just look up some YouTube videos, then google the model of your disk.
-
I am not at all an expert in this, but here's what I would do just to make sure everything is as it should be...
1) Check whether the disk is successfully recognized by your server; you can search through the output of this command
2) Check if the SMART services are running
systemctl status smartmontools.service
systemctl status smartd.service
3) Start them if necessary (and enable)
systemctl start smartmontools.service
systemctl start smartd.service
systemctl enable smartmontools.service
systemctl enable smartd.service
4) Reboot (just for good measure)
5) Check the output of "sudo smartctl -a /dev/sdx" (with x being the WD disk) and see if it works and returns results
6) Check in WebGUI if smart monitoring is enabled
7) Check in WebGUI if you can view the smart details
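The command for step 1 got lost in the quoting above; as a hedged sketch, here is how I'd run steps 1 and 5 from the shell, assuming smartmontools is installed (the `-d sat` fallback is a common trick for USB bridges, see smartctl(8)):

```shell
# Step 1: is the disk recognized at all?
lsblk -o NAME,SIZE,MODEL,SERIAL       # the WD drive should show up here
sudo dmesg | grep -i 'sd[a-z]'        # kernel attach/detect messages

# Step 5: query SMART through the USB enclosure:
sudo smartctl --scan                  # lists devices and suggested -d types
sudo smartctl -a /dev/sdx             # replace x with the WD disk's letter
sudo smartctl -d sat -a /dev/sdx      # force SAT passthrough if plain -a fails
```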
-
Try deleting the databases. They will be recreated
rm -r /var/lib/rrdcached/db/localhost
systemctl restart rrdcached.service
I think this solved it. Starting the monitoring in the WebGUI and checking the syslog output doesn't show any errors like before anymore. I can also view the monitoring graphs normally. So I would say this is solved.
Thank you a lot!
-
You should start a transcode and check the logs in Jellyfin. You can do something like
This will give an output of how it was transcoded; if it shows hevc or qsv or whatever HWA method you are using, it works.
For me it gives out this:
Stream mapping:
Stream #0:0 (hevc) -> setparams:default
overlay_qsv:default -> Stream #0:0 (h264_qsv)
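The command itself got cut off above, but the per-transcode ffmpeg logs can be inspected roughly like this — a sketch assuming the default log directory of a Debian package install (the path is an assumption; check Dashboard → Logs if yours differs):

```shell
# Jellyfin writes one ffmpeg log per transcode session; on a Debian package
# install they typically land in /var/log/jellyfin (path is an assumption):
LATEST=$(ls -t /var/log/jellyfin/FFmpeg.Transcode* 2>/dev/null | head -n 1)
grep -A3 'Stream mapping' "$LATEST"   # shows decoder -> encoder, e.g. h264_qsv
```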
-
mdheavy,
Since you say you are still a beginner, here is my advice: don't change anything you don't yet understand.
I would not keep using the system you cloned, because even if it still more or less runs, it presumably carries all the uncorrectable errors, which can crash the system unpredictably.
It is better to do a fresh installation. Take screenshots or write down which plugins you already had installed, and note the important things you changed (e.g. automatic backups with the backup plugin).
Then take a fresh USB stick and reinstall. The disks attached to the server are not touched by this!
-
It's not working for me.
GPU is not used, CPU usage between 75% to 90%.
I'd suggest reading the Jellyfin documentation; it is pretty good and explains most, if not all, of it quite comprehensibly.
Hardware Acceleration | Jellyfin: "Jellyfin supports hardware acceleration (HWA) of video encoding/decoding using FFMpeg." (jellyfin.org)
-
I was running the omv-salt deploy command; unfortunately, it did not help, as shown below. What I find weird is that I did not change anything regarding the monitoring or anything connected to it, so I wonder why it started doing this now?!
Summary for debian
------------
Succeeded: 2 (changed=2)
Failed: 0
------------
Total states run: 2
Total run time: 370.425 s
Syslog also showed the following.
Dec 2 21:50:06 openmediavault monit[1457]: HttpRequest: error -- client []: HTTP/1.0 400 There is no service named "collectd"
Dec 2 21:50:06 openmediavault monit[487322]: There is no service named "collectd"
Dec 2 21:53:12 openmediavault monit[1457]: Monit daemon with PID 1457 awakened
Dec 2 21:53:12 openmediavault monit[1457]: Awakened by User defined signal 1
Dec 2 21:53:17 openmediavault monit[1457]: HttpRequest: error -- client []: HTTP/1.0 400 There is no service named "proftpd"
Dec 2 21:53:17 openmediavault monit[501297]: There is no service named "proftpd"
Dec 2 21:53:19 openmediavault monit[1457]: HttpRequest: error -- client []: HTTP/1.0 400 There is no service named "rrdcached"
Dec 2 21:53:19 openmediavault monit[501337]: There is no service named "rrdcached"
After I ran the omv-salt command, I started the monitoring from the Web-GUI again, with the following result spammed in syslog again.
Dec 2 22:16:13 openmediavault systemd[1]: Started /bin/systemctl reload monit.service.
Dec 2 22:16:13 openmediavault systemd[1]: Reloading LSB: service and resource monitoring daemon.
Dec 2 22:16:13 openmediavault monit[1457]: Reinitializing Monit -- control file '/etc/monit/monitrc'
Dec 2 22:16:13 openmediavault monit[541568]: Reloading daemon monitor configuration: monit.
Dec 2 22:16:13 openmediavault systemd[1]: Reloaded LSB: service and resource monitoring daemon.
Dec 2 22:16:13 openmediavault systemd[1]: run-rd8f87192f18a4d08a9186db92cf380e0.scope: Succeeded.
Dec 2 22:16:14 openmediavault monit[1457]: 'openmediavault' Monit reloaded
Dec 2 22:16:14 openmediavault monit[1457]: 'rrdcached' process is not running
Dec 2 22:16:14 openmediavault monit[1457]: 'rrdcached' trying to restart
Dec 2 22:16:14 openmediavault monit[1457]: 'rrdcached' start: '/bin/systemctl start rrdcached'
Dec 2 22:16:14 openmediavault systemd[1]: Starting LSB: start or stop rrdcached...
Dec 2 22:16:14 openmediavault rrdcached[541602]: rrdcached started.
Dec 2 22:16:14 openmediavault systemd[1]: Started LSB: start or stop rrdcached.
Dec 2 22:16:15 openmediavault monit[1457]: 'collectd' process is not running
Dec 2 22:16:15 openmediavault monit[1457]: 'collectd' trying to restart
Dec 2 22:16:15 openmediavault monit[1457]: 'collectd' start: '/bin/systemctl start collectd'
Dec 2 22:16:15 openmediavault systemd[1]: Starting Statistics collection and monitoring daemon...
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "cpu" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "df" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "disk" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "interface" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "load" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "memory" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "nut" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "rrdcached" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "syslog" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "unixsock" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: plugin_load: plugin "uptime" successfully loaded.
Dec 2 22:16:15 openmediavault collectd[541673]: Systemd detected, trying to signal readiness.
Dec 2 22:16:15 openmediavault systemd[1]: Started Statistics collection and monitoring daemon.
Dec 2 22:16:15 openmediavault collectd[541673]: Initialization complete, entering read-loop.
Dec 2 22:16:15 openmediavault collectd[541673]: Init SSL without certificate database
Dec 2 22:16:15 openmediavault collectd[541673]: nut plugin: Connection to (localhost, 3493) established.
Dec 2 22:16:15 openmediavault collectd[541673]: nut plugin: Connection is unsecured (no SSL).
Dec 2 22:16:15 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:15 openmediavault collectd[541673]: rrdcached plugin: Successfully reconnected to RRDCacheD at unix:/run/rrdcached.sock
Dec 2 22:16:15 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:15 openmediavault collectd[541673]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-srv-dev-disk-by-uuid-2a1b2d25-051d-43c6-b9b8-af7f9cf252e9/df_complex-free.rrd, [1670015775.503584:105706790912.000000], 1) failed: rrdcached@unix:/run/rrdcached.sock: RRD Error: mmaping file '/var/lib/rrdcached/db/localhost/df-srv-dev-disk-by-uuid-2a1b2d25-051d-43c6-b9b8-af7f9cf252e9/df_complex-free.rrd': Invalid argument (status=-1)
Dec 2 22:16:15 openmediavault collectd[541673]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Dec 2 22:16:15 openmediavault collectd[541673]: Filter subsystem: Built-in target `write': Some write plugin is back to normal operation. `write' succeeded.
Dec 2 22:16:15 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:15 openmediavault collectd[541673]: rrdcached plugin: Successfully reconnected to RRDCacheD at unix:/run/rrdcached.sock
Dec 2 22:16:15 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:15 openmediavault collectd[541673]: rrdcached plugin: Successfully reconnected to RRDCacheD at unix:/run/rrdcached.sock
Dec 2 22:16:15 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:15 openmediavault collectd[541673]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-srv-dev-disk-by-uuid-2a1b2d25-051d-43c6-b9b8-af7f9cf252e9/df_complex-reserved.rrd, [1670015775.503585:863682560.000000], 1) failed: rrdcached@unix:/run/rrdcached.sock: RRD Error: mmaping file '/var/lib/rrdcached/db/localhost/df-srv-dev-disk-by-uuid-2a1b2d25-051d-43c6-b9b8-af7f9cf252e9/df_complex-reserved.rrd': Invalid argument (status=-1)
Dec 2 22:16:15 openmediavault collectd[541673]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Dec 2 22:16:15 openmediavault collectd[541673]: Filter subsystem: Built-in target `write': Some write plugin is back to normal operation. `write' succeeded.
Dec 2 22:16:15 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:15 openmediavault collectd[541673]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-srv-dev-disk-by-uuid-2a1b2d25-051d-43c6-b9b8-af7f9cf252e9/df_complex-used.rrd, [1670015775.503586:54490800128.000000], 1) failed: rrdcached@unix:/run/rrdcached.sock: RRD Error: mmaping file '/var/lib/rrdcached/db/localhost/df-srv-dev-disk-by-uuid-2a1b2d25-051d-43c6-b9b8-af7f9cf252e9/df_complex-used.rrd': Invalid argument (status=-1)
Dec 2 22:16:15 openmediavault collectd[541673]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Dec 2 22:16:15 openmediavault collectd[541673]: Filter subsystem: Built-in target `write': Some write plugin is back to normal operation. `write' succeeded.
Dec 2 22:16:25 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:25 openmediavault collectd[541673]: rrdcached plugin: Successfully reconnected to RRDCacheD at unix:/run/rrdcached.sock
Dec 2 22:16:25 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:25 openmediavault collectd[541673]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/disk-dm-2/disk_octets.rrd, [1670015785.464033:38695624704:57415749632], 1) failed: rrdcached@unix:/run/rrdcached.sock: RRD Error: mmaping file '/var/lib/rrdcached/db/localhost/disk-dm-2/disk_octets.rrd': Invalid argument (status=-1)
Dec 2 22:16:25 openmediavault collectd[541673]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Dec 2 22:16:25 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:25 openmediavault collectd[541673]: rrdcached plugin: Successfully reconnected to RRDCacheD at unix:/run/rrdcached.sock
Dec 2 22:16:25 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:25 openmediavault collectd[541673]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/disk-dm-2/disk_io_time.rrd, [1670015785.464039:918528:4654360], 1) failed: rrdcached@unix:/run/rrdcached.sock: RRD Error: mmaping file '/var/lib/rrdcached/db/localhost/disk-dm-2/disk_io_time.rrd': Invalid argument (status=-1)
Dec 2 22:16:25 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:25 openmediavault collectd[541673]: rrdcached plugin: Successfully reconnected to RRDCacheD at unix:/run/rrdcached.sock
Dec 2 22:16:25 openmediavault rrdcached[541617]: handle_request_update: Could not read RRD file.
Dec 2 22:16:25 openmediavault collectd[541673]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/disk-dm-2/disk_ops.rrd, [1670015785.464037:719641:1000838], 1) failed: rrdcached@unix:/run/rrdcached.sock: RRD Error: mmaping file '/var/lib/rrdcached/db/localhost/disk-dm-2/disk_ops.rrd': Invalid argument (status=-1)
-
Hello, thanks macom for this suggestion. I have two questions regarding this:
1) Do I have to run it with or without monitoring enabled?
2) Will it change my configuration in any way?
-
What to remove completely and what not to break?
rrdcached and all the related stuff used for monitoring (or whatever the error is coming from), without breaking anything OMV-related
-
Sorry, me neither. Did you try disabling/enabling monitoring?
Yes, but the problem persisted. Could you tell me how to completely uninstall it safely without breaking things, and then reinstall it?
-
Do you need more information? I don't really know how to start tackling this.
-
Did you try omv-firstaid item 7: check RRD database?
-
Where is your fail2ban? On the nginx reverse proxy or on the host? Is the reverse proxy a docker container? Give some more info...
-
You didn't answer what you are trying to access. If it is a docker container, it might be because docker puts rules in the nat table that make traffic not even go through your iptables chains (INPUT etc.).
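To make that concrete: docker publishes ports via DNAT in the nat table, so matched packets are forwarded to the container and never traverse the host's INPUT chain where fail2ban usually inserts its bans. A sketch of how to check, using docker's standard chain names (the ban rule is just an illustrative example with a placeholder IP):

```shell
# Per-container DNAT rules docker created in the nat table:
sudo iptables -t nat -L DOCKER -n -v

# DOCKER-USER is the chain docker reserves for user rules; container
# traffic DOES traverse it, so host-side bans belong here:
sudo iptables -L DOCKER-USER -n -v
# sudo iptables -I DOCKER-USER -s 203.0.113.7 -j DROP   # example ban (placeholder IP)
```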
-
Yes, but sometimes it is possible to see that a process was about to be killed because it used too much RAM, or other things.
I'm using pretty many containers (around 30-40), so maybe one of them is doing something it shouldn't. I also check my usage with node_exporter and Grafana, but shortly before the system crashed there was still enough RAM... unfortunately, the last 10 minutes were missing there too. I'm still pretty sure it was a RAM issue, since the OMV CLI (via a screen attached to the server) was lagging badly; I wasn't able to type anything, but it still reacted sometimes to keystrokes.
My solution to find the cause now is to periodically write the output of the top command for the top 5 memory-consuming processes via cron to another server. With this I may be able to determine the cause if it happens again.
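A minimal sketch of that cron job, with the log path, remote hostname, and remote directory as placeholders (assumes key-based ssh to the other server):

```shell
#!/bin/sh
# memtop.sh -- log the 5 biggest memory consumers and copy the log off-host,
# so the evidence survives a crash. Paths and hostname below are assumptions.
LOG=/tmp/memtop.log
{
  date '+%F %T'
  ps aux --sort=-%mem | head -n 6   # ps header line + top 5 processes by %MEM
} >> "$LOG"
scp -q "$LOG" backuphost:/srv/memtop/
```

Run it every minute from cron, e.g. `* * * * * /usr/local/bin/memtop.sh`.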
-
OMV isn't crashing though. Linux is.
I only recommended that because it seemed like it wasn't an instant crash and you could see logs leading up to the crash.
If only the OS is on the USB stick, other things like docker are running from data disks/SSDs, and the flashmemory plugin is installed, the only reasons the system should crash are a junk USB stick, running out of memory, or unstable hardware.
That's why I want to see the logs, to determine what would use up the RAM.
Anyway, if I symlink /var to another drive, will the flashmemory plugin still create the folder2ram fs? Is that correct? So in order to have the logs written directly, I would still have to disable the plugin (even if just for the period until I find out what crashes my "linux").
-
Maybe the drives were scrubbing and therefore under high usage?