Thanks, looks like I have it sorted - an external USB drive I am using for SnapRAID 2-parity must have unmounted.
I have rebooted and all is good.
Could not log in today; after trial and error I found the OS drive - a 250 GB NVMe - was full. I freed up some space and logged in. Wondering where the space had gone, I ran:
root@omv:/# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 782M 3.3M 779M 1% /run
/dev/nvme0n1p1 221G 89G 121G 43% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 136K 3.9G 1% /tmp
/dev/sdi1 57G 24G 34G 42% /srv/dev-disk-by-uuid-0998f9db-89dc-4ca7-8534-ee0ee18edbdf
/dev/sdj1 3.7T 3.1T 620G 84% /srv/dev-disk-by-uuid-28003ffa-56bc-485a-919d-022ff11f6f3d
/dev/sdg1 4.6T 3.4T 1.2T 75% /srv/dev-disk-by-uuid-624a1443-dbec-499f-9805-bef01f6c1466
/dev/sdh1 3.7T 3.1T 620G 84% /srv/dev-disk-by-uuid-3862d953-9815-49cb-95f3-23dce1ac0ce9
/dev/sdb1 3.6T 3.0T 624G 83% /srv/dev-disk-by-label-Disk2
/dev/sdd 3.6T 2.8T 625G 83% /srv/dev-disk-by-label-Disk4
/dev/sda1 3.6T 3.0T 604G 84% /srv/dev-disk-by-label-Disk1
/dev/sdc1 3.6T 3.1T 573G 85% /srv/dev-disk-by-label-Disk3
/dev/sdf1 3.6T 583G 3.1T 16% /srv/dev-disk-by-uuid-e56ffb13-d28e-4987-a0e1-b627d20e85b5
/dev/sde1 3.6T 3.0T 491G 86% /srv/dev-disk-by-uuid-5c0e9fa3-0bd4-49cb-8d31-3a5b0ae402a0
pool:e74d9b89-f0f1-40ce-a580-7d910581bd2a 22T 16T 5.9T 73% /srv/mergerfs/pool
/dev/sde2 916G 52K 870G 1% /srv/dev-disk-by-uuid-ca147bbc-9484-40a9-9bd3-52fc70b07760
overlay 221G 89G 121G 43% /var/lib/docker/overlay2/d9c6f6304cb671dd20489f8941f701709957d3dd3786885a45432c8da36bf25c/merged
overlay 221G 89G 121G 43% /var/lib/docker/overlay2/264ab153a47e377420e4a674f8b322c28bc5899bfeb6f20fd2cbc67ac641ff25/merged
overlay 221G 89G 121G 43% /var/lib/docker/overlay2/7b77a7b1fda6a376cb6a53fe10052d04ed006329787dcd366c1303f848bbcbf8/merged
overlay 221G 89G 121G 43% /var/lib/docker/overlay2/d4352f2ac6be23ccd7397f3b6cd9cd34bf5bd82f640a1a7971fd20788ace95b4/merged
overlay 221G 89G 121G 43% /var/lib/docker/overlay2/c1183c957e3e8e290c5be4b17746a216643e4ee198664db8480e644f85b40ff7/merged
overlay 221G 89G 121G 43% /var/lib/docker/overlay2/4434969cc9278b2c6ae6a58ef07a2272584f389fa5a89606e5c78deae547b432/merged
tmpfs 782M 0 782M 0% /run/user/0
And then:
root@omv:/# ncdu --exclude /export --exclude /srv
--- / ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
13.1 GiB [##########] /var
5.1 GiB [### ] /usr
4.6 GiB [### ] /home
1.6 GiB [# ] /boot
445.4 MiB [ ] /data
111.5 MiB [ ] /path
8.7 MiB [ ] /etc
3.3 MiB [ ] /run
200.0 KiB [ ] /root
136.0 KiB [ ] /tmp
16.0 KiB [ ] /opt
e 16.0 KiB [ ] /lost+found
8.0 KiB [ ] /media
4.0 KiB [ ] /mnt
. 0.0 B [ ] /proc
0.0 B [ ] /sys
0.0 B [ ] /dev
@ 0.0 B [ ] initrd.img.old
@ 0.0 B [ ] initrd.img
@ 0.0 B [ ] vmlinuz.old
@ 0.0 B [ ] vmlinuz
@ 0.0 B [ ] libx32
@ 0.0 B [ ] lib64
@ 0.0 B [ ] lib32
@ 0.0 B [ ] sbin
@ 0.0 B [ ] lib
@ 0.0 B [ ] bin
< 0.0 B [ ] srv
< 0.0 B [ ] export
Huge discrepancy - any clues as to why this should be?
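For anyone chasing the same gap, two non-destructive checks worth trying (a sketch only; `lsof` may need installing first):

```shell
# Space held by deleted files that a process (e.g. a container writing logs)
# still has open: df counts it, but du/ncdu cannot see it.
lsof +L1 2>/dev/null | head -n 20 || true

# Per-directory usage limited to the root filesystem: -x stops du from
# crossing into other mounted filesystems, matching what df reports for /:
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15 || true
```

Files written to a path like /srv/... while a data disk was unmounted are another classic cause: they land on the root filesystem and are hidden once the disk mounts back over them; bind-mounting / to a spare directory (`mount --bind / /mnt`) reveals them.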
Do you have a base URL configured in Jellyfin's network settings?
e.g. https://192.168.2.115:8096/jellyfin
If so, add it to the Firestick settings as well.
My experience has been that I cannot wake the system using rtcwake (hardware below). I have spent hours on this to no avail.
I can wake it from the BIOS. Another option, if you have a 24/7 system running, is to start OMV via wake-on-LAN.
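For reference, the kind of commands involved (timings and the MAC address are placeholders; on my hardware the alarm simply isn't honoured):

```shell
# Set an RTC wake alarm without suspending (-m no) - a safe way to test
# whether the hardware honours the alarm at all; -s is seconds from now.
# Needs root and a working RTC, hence the fallbacks:
rtcwake -m no -s 3600 || echo "rtcwake failed (no RTC access?)"
cat /sys/class/rtc/rtc0/wakealarm 2>/dev/null || true  # epoch timestamp if set

# To actually suspend and wake after the interval:
#   rtcwake -m mem -s 3600

# Wake-on-LAN from another always-on box (MAC is a placeholder; WOL must be
# enabled in both the BIOS and the NIC):
command -v wakeonlan >/dev/null && wakeonlan aa:bb:cc:dd:ee:ff || true
```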
I have Tailscale installed in Docker, and installed via their curl script on other Linux devices. I can SSH into all devices remotely from a Windows laptop with Tailscale installed, with no ports opened on the router except 80 and 443. On the other devices I can connect using MagicDNS, e.g. https://mydevice.mymagic-name.ts.net - but this does not work on OMV. I have added the Tailscale DNS 100.100.100.100 in the network settings, but no difference. Not a huge deal, as I can access the OMV web interface remotely using the SWAG reverse proxy - but why isn't this working?
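In case it helps anyone debug the same thing, what I'd check on the OMV host (the hostname is a placeholder; `--accept-dns` is Tailscale's CLI flag as I understand it):

```shell
# Does the host's resolver know the MagicDNS name at all? A non-zero exit
# just means "not resolvable from here":
getent hosts mydevice.mymagic-name.ts.net || true

# Ask the Tailscale resolver directly, bypassing the system DNS:
nslookup mydevice.mymagic-name.ts.net 100.100.100.100 || true

# On a native install, this tells tailscaled to manage DNS itself; when
# Tailscale runs inside Docker it only affects the container, not the host:
#   tailscale up --accept-dns=true
```

If the direct lookup against 100.100.100.100 works but `getent` fails, the 100.100.100.100 entry in OMV's network settings isn't actually being consulted.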
Have you connected your server to a monitor and keyboard?
Another useful tip from the SnapRAID manual:
"In Linux, to get more space for the parity, it's recommended to format the parity file-system with the -m 0 -T largefile4 options. Like:
On an 8 TB disk you can save about 400 GB. This is also expected to be as fast as the default, if not faster."
Therefore there is less need to worry about the data disks filling up, as the parity disk is effectively bigger than they are.
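The command the manual elides after "Like:" is, as far as I recall from the SnapRAID FAQ, `mkfs.ext4 -m 0 -T largefile4 DEVICE`. Demonstrated on a loop file here; on the real system you would point it at the parity partition instead, which destroys its contents:

```shell
# Scratch file standing in for the parity partition:
truncate -s 256M /tmp/parity-demo.img

# -m 0 drops the 5% of blocks reserved for root (roughly 400 GB on an 8 TB
# disk); -T largefile4 allocates one inode per 4 MiB, appropriate for a
# filesystem that will hold a single huge parity file:
mkfs.ext4 -F -q -m 0 -T largefile4 /tmp/parity-demo.img

# Confirm no blocks are reserved:
tune2fs -l /tmp/parity-demo.img | grep -i 'reserved block count'
rm /tmp/parity-demo.img
```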
I did my first sync with:
# snapraid sync
At the time it was 4x4 TB drives, and it took about 15 hours or so.
I believe I may have eliminated the helper script as the source of the problem, as running # snapraid sync from the CLI also did not add the new drive, or the content files to the two drives that did not have them.
Thanks deleted the plugin, disabled testing repo, reinstalled plugin and all looking good.
Just tried it and it hasn't solved the problem.
Before this I deleted all drives and re-added them, with content and data on six drives and the other two as parity. The .conf file looks as it should:
# drives
#####################################################################
# OMV-Name: Disk1 Drive Label: Disk1
content /srv/dev-disk-by-label-Disk1/snapraid.content
disk Disk1 /srv/dev-disk-by-label-Disk1
#####################################################################
# OMV-Name: Disk2 Drive Label: Disk2
content /srv/dev-disk-by-label-Disk2/snapraid.content
disk Disk2 /srv/dev-disk-by-label-Disk2
#####################################################################
# OMV-Name: Disk3 Drive Label: Disk3
content /srv/dev-disk-by-label-Disk3/snapraid.content
disk Disk3 /srv/dev-disk-by-label-Disk3
#####################################################################
# OMV-Name: Disk4 Drive Label: Disk4
content /srv/dev-disk-by-label-Disk4/snapraid.content
disk Disk4 /srv/dev-disk-by-label-Disk4
#####################################################################
# OMV-Name: Disk5 Drive Label:
content /srv/dev-disk-by-uuid-5c0e9fa3-0bd4-49cb-8d31-3a5b0ae402a0/snapraid.content
disk Disk5 /srv/dev-disk-by-uuid-5c0e9fa3-0bd4-49cb-8d31-3a5b0ae402a0
#####################################################################
# OMV-Name: Disk6 Drive Label:
content /srv/dev-disk-by-uuid-e56ffb13-d28e-4987-a0e1-b627d20e85b5/snapraid.content
disk Disk6 /srv/dev-disk-by-uuid-e56ffb13-d28e-4987-a0e1-b627d20e85b5
#####################################################################
# OMV-Name: Parity Drive Label: Parity
parity /srv/dev-disk-by-uuid-28003ffa-56bc-485a-919d-022ff11f6f3d/snapraid.parity
#####################################################################
# OMV-Name: 2-parity Drive Label:
parity /srv/dev-disk-by-uuid-3862d953-9815-49cb-95f3-23dce1ac0ce9/snapraid.parity
snapraid sync still does not add the content file to drives 5 and 6, and drive 6 is still not mentioned in the end-of-sync report.
root@omv:~# snapraid sync
Self test...
Loading state from /srv/dev-disk-by-label-Disk1/snapraid.content...
Scanning...
Scanned Disk5 in 0 seconds
Scanned Disk3 in 1 seconds
Scanned Disk4 in 1 seconds
Scanned Disk1 in 1 seconds
Scanned Disk2 in 1 seconds
Using 1133 MiB of memory for the file-system.
Initializing...
Resizing...
Saving state to /srv/dev-disk-by-label-Disk1/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk2/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk3/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk4/snapraid.content...
Verifying...
Verified /srv/dev-disk-by-label-Disk2/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk4/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk1/snapraid.content in 10 seconds
Verified /srv/dev-disk-by-label-Disk3/snapraid.content in 10 seconds
Using 112 MiB of memory for 64 cached blocks.
Selecting...
Syncing...
Nothing to do
The file I just added would have landed on disk 6 due to the mergerfs rules.
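A way to confirm which create policy is active and where mergerfs actually put a file (the file path is an example; the xattr interface is as documented by mergerfs, if I have it right):

```shell
# The active create policy appears in the mount options (category.create=...):
grep mergerfs /proc/mounts || true

# mergerfs exposes per-file metadata through xattrs: this lists every branch
# (underlying disk) holding a copy of the given file:
#   getfattr -n user.mergerfs.allpaths /srv/mergerfs/pool/somefile
```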
I see disks 5 and 6 don't have a label - is that relevant?
No, a small number of video files. It was empty when I started this, but mergerfs has added stuff.
I have just removed and re-added disk 6 from the SnapRAID config page - the .conf file is updated. I added content to drive 5 - the .conf is updated. But these changes are not seen on a sync:
root@omv:~# snapraid sync
Self test...
Loading state from /srv/dev-disk-by-label-Disk1/snapraid.content...
Scanning...
Scanned Disk3 in 0 seconds
Scanned Disk5 in 0 seconds
Scanned Disk2 in 0 seconds
Scanned Disk4 in 1 seconds
Scanned Disk1 in 1 seconds
Using 1133 MiB of memory for the file-system.
Initializing...
Resizing...
Saving state to /srv/dev-disk-by-label-Disk1/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk2/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk3/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk4/snapraid.content...
Verifying...
Verified /srv/dev-disk-by-label-Disk2/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk4/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk1/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk3/snapraid.content in 10 seconds
Using 112 MiB of memory for 64 cached blocks.
Selecting...
Syncing...
Nothing to do
The .conf file has a weird name - could this be relevant?
omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf
So the script seems to be irrelevant, as running snapraid sync from the CLI also ignores disk 6 and the change to disk 5.
The script is like the SnapRAID helper scripts you find around, but with more features; it was written by a forum member (I think).
I should have mentioned I tried sync from the CLI.
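One thing worth ruling out (a guess on my part): run bare, snapraid looks for /usr/local/etc/snapraid.conf and then /etc/snapraid.conf, so if OMV is editing its omv-snapraid-<uuid>.conf somewhere else, the CLI may be syncing against a stale copy. Pointing it at the plugin's file explicitly would tell:

```shell
# What (if anything) lives at the default location, and is it the plugin's
# file or a stale copy?
ls -l /etc/snapraid.conf 2>/dev/null || true

# Run explicitly against the OMV-generated conf (adjust the path to wherever
# the plugin stores it); `diff` is read-only and lists what a sync would do:
#   snapraid -c /path/to/omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf diff
```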
Been using this and all good - until I added a new drive, disk 6, through the OMV SnapRAID interface, which looks added. I ticked the content and data boxes.
# drives
#####################################################################
# OMV-Name: Disk1 Drive Label: Disk1
content /srv/dev-disk-by-label-Disk1/snapraid.content
disk Disk1 /srv/dev-disk-by-label-Disk1
#####################################################################
# OMV-Name: Disk2 Drive Label: Disk2
content /srv/dev-disk-by-label-Disk2/snapraid.content
disk Disk2 /srv/dev-disk-by-label-Disk2
#####################################################################
# OMV-Name: Disk3 Drive Label: Disk3
content /srv/dev-disk-by-label-Disk3/snapraid.content
disk Disk3 /srv/dev-disk-by-label-Disk3
#####################################################################
# OMV-Name: Disk4 Drive Label: Disk4
content /srv/dev-disk-by-label-Disk4/snapraid.content
disk Disk4 /srv/dev-disk-by-label-Disk4
#####################################################################
# OMV-Name: Parity Drive Label: Parity
4-parity /srv/dev-disk-by-uuid-28003ffa-56bc-485a-919d-022ff11f6f3d/snapraid.4-parity
#####################################################################
# OMV-Name: Disk5 Drive Label:
disk Disk5 /srv/dev-disk-by-uuid-5c0e9fa3-0bd4-49cb-8d31-3a5b0ae402a0
#####################################################################
# OMV-Name: 2-Parity Drive Label:
6-parity /srv/dev-disk-by-uuid-3862d953-9815-49cb-95f3-23dce1ac0ce9/snapraid.6-parity
#####################################################################
# OMV-Name: Disk6 Drive Label:
content /srv/dev-disk-by-uuid-e56ffb13-d28e-4987-a0e1-b627d20e85b5/snapraid.content
disk Disk6 /srv/dev-disk-by-uuid-e56ffb13-d28e-4987-a0e1-b627d20e85b5
I run the script and the output is:
Self test...
Loading state from /srv/dev-disk-by-label-Disk1/snapraid.content...
Scanning...
Scanned Disk3 in 0 seconds
Scanned Disk1 in 0 seconds
Scanned Disk2 in 0 seconds
Scanned Disk5 in 0 seconds
Scanned Disk4 in 0 seconds
Using 1133 MiB of memory for the file-system.
Initializing...
Hashing...
SYNC - Everything OK
Resizing...
Saving state to /srv/dev-disk-by-label-Disk1/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk2/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk3/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk4/snapraid.content...
Verifying...
Verified /srv/dev-disk-by-label-Disk2/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk4/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk3/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk1/snapraid.content in 9 seconds
Using 112 MiB of memory for 64 cached blocks.
Selecting...
Syncing...
Disk1 2% | *
Disk2 8% | *****
Disk3 34% | ********************
Disk4 22% | *************
Disk5 16% | *********
parity 0% |
2-parity 0% |
raid 0% |
hash 0% |
sched 1% |
misc 39% | ***********************
|____________________________________________________________
wait time (total, less is better)
SYNC - Everything OK
Saving state to /srv/dev-disk-by-label-Disk1/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk2/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk3/snapraid.content...
Saving state to /srv/dev-disk-by-label-Disk4/snapraid.content...
Verifying...
Verified /srv/dev-disk-by-label-Disk2/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk4/snapraid.content in 9 seconds
Verified /srv/dev-disk-by-label-Disk1/snapraid.content in 10 seconds
Verified /srv/dev-disk-by-label-Disk3/snapraid.content in 10 seconds
Disk 6 looks to be ignored (disk 5 does not have a content file configured).
Any help appreciated.
I have a different Beelink - on mine there is a BIOS setting to toggle whether it powers on after power restoration.
I used an Odroid XU4 a few years ago and the solution was as below - it looks like this will work for your device. I wasn't using OMV at the time, but this should give you some clues. Naturally, back up the system before trying the technique below.
I bought a Topton mini PC with an i7, a TB NVMe and 32 GB RAM. The NVMe was defective and I couldn't get it changed - it would have cost a fortune to return - and in the end I bought a Samsung locally at my own expense. Not impressed with their after-sales service.
Maybe boot from an Ubuntu live distro on USB to exclude a hardware problem?