Is it possible to change this to another value, maybe 95%?
Posts by johnvick
-
-
I think it will be 22% of one core. For example, if you use an 8-core CPU you have a maximum of 800%.
-
Install htop; it will give you a better idea of CPU use.
-
Thanks for this. The first USB drive seems to be sorted; working on the second.
Edit: The above has enabled the external drives to be monitored using Scrutiny, but not in the OMV web interface.
-
I have tried OMV on Proxmox out of interest and have found 2 GB to be more than enough. It depends what you are doing, of course. My standalone server has 16 GB but rarely uses more than 2 GB, even when streaming four videos on Jellyfin. I only have 16 GB (2 x 8 GB) to get dual-channel RAM for transcoding. I suspect there is nothing wrong at your end.
Edit - just looked at your first pic more closely; now I'm not so sure. But here's my standalone system, which like yours is not behaving as expected.
-
That's one way; there are no doubt others. I am not hugely familiar with remote mounts in OMV; maybe someone else who is can help. But one big mergerFS pool is best avoided, IMHO.
-
Wouldn't advise a mergerFS pool like this; better to make two separate pools. The reason I say this is that I have had two mishaps with a similar approach. The first time, a USB external drive from the pool failed to mount while I was away for the weekend, and the system was unusable. Not too hard to fix.
The second time, a SATA card with two drives on it failed to start in time due to a change in BIOS power-saving settings, so those drives were not available when the pool mounted. That was a big headache to fix, as the cause was obscure: by the time I looked, the two drives were present. Just my experience.
-
I have never been able to get RTC to wake the system (hardware specs below), but there is a BIOS setting to wake at a specified time. Alternatively, set up a very low-power always-on device (a Pi or similar) with Ubuntu and create a cron job:
55 13 * * SUN /usr/bin/wakeonlan <MAC address of the OMV machine>
This wakes OMV at 13:55 on Sundays.
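If the wakeonlan package isn't available on the always-on device, the magic packet is simple enough to build and send yourself. A minimal Python sketch (the MAC address below is a placeholder, not a real one from this thread):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    # A Wake-on-LAN magic packet is 6 bytes of 0xFF followed by the
    # target MAC address repeated 16 times (102 bytes in total).
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Broadcast the packet over UDP; port 9 (discard) is the usual choice.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

# Usage (hypothetical MAC): wake("aa:bb:cc:dd:ee:ff")
```

The same cron schedule works; just point the job at this script instead of /usr/bin/wakeonlan.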
-
-
I have done this by getting Tailscale to obtain an SSL cert and key and loading them through the OMV workbench page. HTTPS works fine with the Tailscale MagicDNS address of the form (for example):
https://omv.chicken-duck.ts.net
I am looking at automating this, as the certs will expire. I see the key has been renamed by OMV to openmediavault-long-string.key. Is this name significant? If I get a new key and rename it to this, or to the default openmediavault.key (which I think is the old self-signed cert), will it work? Or will I have to manually reload the new cert and key through the workbench?
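One building block for automating this, regardless of how the workbench picks up replaced files, is knowing when renewal is due. I haven't verified OMV's own renewal hooks, but a small check like this (fed the notAfter value printed by `openssl x509 -noout -enddate -in cert.pem`) could run from cron and trigger a re-fetch when the cert is close to expiry:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    # `not_after` is the value after "notAfter=" in the output of
    #   openssl x509 -noout -enddate -in /path/to/cert.pem
    # e.g. "Mar  1 12:00:00 2026 GMT". Returns days remaining
    # (negative if already expired).
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

# Usage: if days_until_expiry(value) < 14, re-run the cert fetch
# and reload the cert/key however OMV requires.
```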
-
Thanks, but I couldn't reproduce what I have done, and a lot of it is down to my own mistakes I think, so I couldn't submit a coherent bug report. What I think set this off: I cleaned dust out of the device and changed the BIOS setting for maximum power saving. A side effect of this was that two SATA drives on a PCIe card were slow to activate, so the pool didn't load. If I had spotted this at the beginning it would have saved a lot of grief.
-
Salvaged the system, but I had to edit the OMV config, remove the references to the shares, and then re-add them. Rebooted and all good. Thanks for the help.
-
Spent hours trying to fix this today. The best I have managed is to remove mergerfs, remove the entries referring to mergerfs from the OMV config, and delete the systemd unit. After reinstalling and recreating the pool everything seems back to normal, but -
The shares are in default locations, e.g. Movies is /Movies and not /mergerfs/pool/Movies. Despite this, /srv/mergerfs/pool/Movies has the movies, so I can play them in Jellyfin.
I can't move the shares, as I get the error: Failed to execute Xpath query.....
The way the system is now, it does not survive a reboot.
Close to doing a full reinstall - any way to avoid it?
-
Attempted manual pool restart:
Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; systemctl restart srv-mergerfs-pool2.mount' with exit code '1': OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; systemctl restart srv-mergerfs-pool2.mount' with exit code '1': in /usr/share/php/openmediavault/system/process.inc:220
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/mergerfs.inc(197): OMV\System\Process->execute(Array, 1)
#1 [internal function]: OMVRpcServiceMergerfs->restartPool(Array, Array)
#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('restartPool', Array, Array)
#4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Mergerfs', 'restartPool', Array, Array, 1)
#5 {main}
-
Pool isn't mounting again. This is the result of the above command:
Code
-- Boot 0596406696a74f329f7f3244e8d517fd --
Feb 16 20:06:21 omv systemd[1]: Mounting MergerFS mount for pool...
Feb 16 20:06:21 omv mount[408]: terminate called after throwing an instance of 'std::out_of_range'
Feb 16 20:06:21 omv mount[408]: what(): basic_string::substr: __pos (which is 2) > this->size() (which is 0)
Feb 16 20:06:21 omv systemd[1]: srv-mergerfs-pool.mount: Mount process exited, code=exited, status=134/n/a
Feb 16 20:06:21 omv mount[407]: Aborted
Feb 16 20:06:21 omv systemd[1]: srv-mergerfs-pool.mount: Failed with result 'exit-code'.
Feb 16 20:06:21 omv systemd[1]: Failed to mount MergerFS mount for pool.
Feb 16 20:22:05 omv systemd[1]: srv-mergerfs-pool.mount: Directory /srv/mergerfs/pool to mount over is not empty, mounting anyway.
Feb 16 20:22:05 omv systemd[1]: Mounting MergerFS mount for pool...
Feb 16 20:22:05 omv mount[20339]: terminate called after throwing an instance of 'std::out_of_range'
Feb 16 20:22:05 omv mount[20339]: what(): basic_string::substr: __pos (which is 2) > this->size() (which is 0)
Feb 16 20:22:05 omv mount[20338]: Aborted
Feb 16 20:22:05 omv systemd[1]: srv-mergerfs-pool.mount: Mount process exited, code=exited, status=134/n/a
Feb 16 20:22:05 omv systemd[1]: srv-mergerfs-pool.mount: Failed with result 'exit-code'.
Feb 16 20:22:05 omv systemd[1]: Failed to mount MergerFS mount for pool.
-- Boot fe91364c834240a1bff636335a151f47 --
Feb 16 20:40:10 omv systemd[1]: srv-mergerfs-pool.mount: Directory /srv/mergerfs/pool to mount over is not empty, mounting anyway.
Feb 16 20:40:10 omv mount[410]: terminate called after throwing an instance of 'std::out_of_range'
Feb 16 20:40:10 omv mount[410]: what(): basic_string::substr: __pos (which is 2) > this->size() (which is 0)
Feb 16 20:40:10 omv systemd[1]: Mounting MergerFS mount for pool...
Feb 16 20:40:10 omv mount[409]: Aborted
Feb 16 20:40:10 omv systemd[1]: srv-mergerfs-pool.mount: Mount process exited, code=exited, status=134/n/a
Feb 16 20:40:10 omv systemd[1]: srv-mergerfs-pool.mount: Failed with result 'exit-code'.
Feb 16 20:40:10 omv systemd[1]: Failed to mount MergerFS mount for pool.
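Going by the abort above (a substr call on an empty string), one guess - and it is only a guess - is that an empty token ended up in the colon-separated branch list that mergerfs is given, e.g. from a doubled or trailing ':'. A quick way to check a branch specification for empty segments:

```python
def empty_branches(branch_spec: str) -> list:
    # mergerfs takes its branches as a colon-separated list. An empty
    # token would hand the option parser a zero-length string, which
    # could plausibly produce an out_of_range substr like the one in
    # the journal output above. Returns the positions of empty tokens.
    return [i for i, b in enumerate(branch_spec.split(":")) if not b]
```

Run it against the branch string from the systemd mount unit (or fstab entry); an empty result means the branch list at least parses cleanly.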
-
The pool wasn't mounted.
-
Yes, I tried the restart pool option and it did nothing. Having thought about it more, I now suspect I knocked a drive connection when cleaning dust from the case, hence the problem. But reconnecting all the drives and rebooting didn't fix it.
Anyway, I'm more or less back to where I was. Thanks all for the input.
-
# omv-showkey mntent produces lengthy output which contains some references to the old pool.
# omv-showkey mergerfs produces:
Code
<mergerfs>
  <pools>
    <pool>
      <uuid>52167e95-4765-41a2-874c-e42a04bfd8bf</uuid>
      <name>pool2</name>
      <mntentref>45c6e10d-2541-4ec6-bf10-adf66eb4fad7</mntentref>
      <paths>/srv/dev-disk-by-label-Disk1:/srv/dev-disk-by-label-Disk2:/srv/dev-disk-by-label-Disk3:/srv/dev-disk-by-label-Disk4:/srv/dev-disk-by-uuid-5c0e9fa3-0bd4-49cb-8d31-3a5b0ae402a0:/srv/dev-disk-by-uuid-e56ffb13-d28e-4987-a0e1-b627d20e85b5</paths>
      <createpolicy>epmfs</createpolicy>
      <minfreespace>4</minfreespace>
      <minfreespaceunit>G</minfreespaceunit>
      <options>defaults,allow_other,cache.files=off,use_ino</options>
    </pool>
  </pools>
</mergerfs>
Which looks OK.
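For anyone checking output like this, the branch list can be pulled out programmatically rather than by eye. A small sketch using a shortened excerpt of the omv-showkey output above (only two branches kept here for brevity):

```python
import xml.etree.ElementTree as ET

# Shortened excerpt of the `omv-showkey mergerfs` output.
MERGERFS_XML = """\
<mergerfs>
  <pools>
    <pool>
      <name>pool2</name>
      <paths>/srv/dev-disk-by-label-Disk1:/srv/dev-disk-by-label-Disk2</paths>
    </pool>
  </pools>
</mergerfs>
"""

def pool_branches(xml_text: str) -> dict:
    # Map each pool name to its list of branch directories, splitting
    # the colon-separated <paths> value.
    root = ET.fromstring(xml_text)
    return {
        pool.findtext("name"): pool.findtext("paths", "").split(":")
        for pool in root.iter("pool")
    }
```

From there it is easy to verify that every branch directory actually exists before trusting the pool definition.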
I'll sleep on it - I think the way forward is to delete the references to the old pool and shares from the OMV config file.
It's all working again so the heat is off.
-
Thanks for the tip, wasn't aware of that.
I am making progress: I have created a new pool called pool2, but I can't add my shares to pool2 as they appear to belong to the pool I lost.
I have deleted the NFS, SMB, and rsync references. How can I remove the old pool?
Would removing this section of the OMV config file do it? I don't want to mess things up more than I already have.
Code
<mntent>
  <uuid>a3763456-2c4e-43d4-945f-6933f8ff62dd</uuid>
  <fsname>e74d9b89-f0f1-40ce-a580-7d910581bd2a</fsname>
  <dir>/srv/mergerfs/pool</dir>
  <type>fuse.mergerfs</type>
  <opts></opts>
  <freq>0</freq>
  <passno>0</passno>
  <hidden>1</hidden>
  <usagewarnthreshold>0</usagewarnthreshold>
  <comment></comment>
</mntent>
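If editing the config by hand, a targeted removal is safer than deleting the section manually. A sketch that deletes only the <mntent> whose uuid matches, assuming the file layout matches the excerpt above; back up config.xml first, as this writes the file in place:

```python
import xml.etree.ElementTree as ET

def remove_mntent(config_path: str, uuid: str) -> bool:
    # Remove the <mntent> element whose <uuid> child matches `uuid`,
    # e.g. the stale fuse.mergerfs entry for the old pool. Returns
    # True if an entry was removed. Keep a backup of the file before
    # letting this write anything.
    tree = ET.parse(config_path)
    removed = False
    for parent in tree.getroot().iter():
        for mntent in list(parent.findall("mntent")):
            if mntent.findtext("uuid") == uuid:
                parent.remove(mntent)
                removed = True
    if removed:
        tree.write(config_path, encoding="utf-8", xml_declaration=True)
    return removed
```

Note this only edits the XML; whatever OMV step normally re-reads the config (a reboot did it in this thread) still has to happen afterwards.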
-
I have a six-drive pool that stopped loading for unclear reasons. All members of the pool seem OK. To try to fix it I deleted the pool, thinking I could recreate it, but this is not working. Uninstalling and reinstalling the plugin did not help.
When I try to recreate the pool I get a 500 error.
Any way to recover from this?