Not sure if related, but I'm seeing something similar - leaving workbench open will result in seemingly random 503s.
What's the best way to grab some logs for this?
Thanks
For those having the same issue - I found out it was caused by my boot drive being plugged into SATA 5 instead of SATA 0. Other drive expansion cards were initializing before it and breaking the boot order, it seemed. Plugging the boot drive into SATA 0 made this go away instantly.
Don't suppose either of you could share screenshots of your config for this please?
Looking into something similar as an offsite backup solution, and have been considering a similar approach; I just want to make sure it is relatively easy to set up.
thanks!
Hey all,
I’ve been running an OMV server for quite a few years and have somehow been in the dark about mergerfs + SnapRAID.
I’d like to set both up on my OMV server, but have a couple of questions.
Currently, my setup looks like:
Data drives, all ext4:
Fry 4tb
Leela 2tb
Bender 10tb
Backup drive, ext4:
Nibbler 16tb
Currently for backups, I run selective rsync jobs to Nibbler at varying frequencies - in some sense, a less optimal version of SnapRAID.
I’m wondering the following:
For mergerfs, I plan to pool the three data drives together. Ideally, I’d like to preserve the top-level drive names as root folders, e.g.:
/mnt/pool/fry
/mnt/pool/leela
I assume this should be possible; would epmfs be the correct create policy to maintain it?
Also, I assume the correct course of action afterward would be to migrate my container configurations to the new pool mount point?
Finally, I believe this should not impact the existing data on the drives in any way; I just wanted to double check before rolling this out.
Would the correct configuration look something like this?
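Something like this is what I have in mind for fstab (a sketch only - the mount paths and options are my guesses based on OMV's usual by-label mounts, not a tested config):

```
# /etc/fstab - pool Fry, Leela and Bender under /mnt/pool (paths assumed)
/srv/dev-disk-by-label-Fry:/srv/dev-disk-by-label-Leela:/srv/dev-disk-by-label-Bender /mnt/pool fuse.mergerfs defaults,allow_other,category.create=epmfs,minfreespace=10G 0 0
```

One thing worth noting: mergerfs merges directory trees rather than exposing drive names, so /mnt/pool/fry would only exist if a top-level fry/ directory exists on one of the branches. epmfs (existing path, most free space) would then keep new files under fry/ on whichever drives already have that path.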
Then, I plan to use SnapRAID to replace the incremental rsyncs to Nibbler. Nibbler is 16 TB in total, so it is the biggest drive and larger than every drive in the pool that needs parity coverage.
I’d simply mark all the data drives as data and content, and Nibbler as parity.
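i.e. something like this in snapraid.conf (a sketch - mount paths are guessed to match OMV's by-label mounts):

```
# /etc/snapraid.conf - sketch, paths assumed
parity /srv/dev-disk-by-label-Nibbler/snapraid.parity

content /var/snapraid.content
content /srv/dev-disk-by-label-Fry/snapraid.content
content /srv/dev-disk-by-label-Bender/snapraid.content

data fry    /srv/dev-disk-by-label-Fry/
data leela  /srv/dev-disk-by-label-Leela/
data bender /srv/dev-disk-by-label-Bender/
```

As I understand it, keeping more than one copy of the content file (one on the system drive plus copies on data disks) is the commonly recommended setup.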
I was wondering how the parity drive works in this case. In my current setup, if I suffer a drive failure, I simply rsync the data back from the backup to a new drive. And if I hit user error, e.g. an accidental file deletion, I have a few hours to grab the deleted file back from the backup drive (thanks to the delayed rsync).
I believe that with a drive failure under SnapRAID, I’d use the commands to swap the drives over and then rebuild from parity. But if I accidentally deleted a file and needed to recover it easily, I don’t believe that would be particularly straightforward?
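From what I can tell from the manual, the two scenarios would look roughly like this (disk names assumed to match a config like the one above):

```
# Drive failure: mount the new empty disk at the old path, then rebuild it
snapraid -d bender -l fix.log fix

# Accidental deletion: restore a single file to its state at the last sync
# (only works until the next 'snapraid sync' updates parity)
snapraid fix -f path/to/deleted/file
```

So a deleted file is recoverable from parity, but only within the window before the next sync - which is why the sync schedule matters.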
Therefore, should I retain my existing incremental rsyncs to the backup drive, and add another drive as the SnapRAID parity?
And for general recovery, I assume the following is the go to: https://github.com/trapexit/ba…_(mergerfs%2Csnapraid).md
Many thanks.
Hey guys,
I just resolved an issue on OMV4 where an rsync job was causing reboots. The fix was to delete the faulty rsync job and recreate it in the OMV UI.
For reference, here was the faulty command:
root@openmediavault:/var# cat /var/lib/openmediavault/cron.d/rsync-75f6051e-c7c2-4f8a-a574-54c78fdca828
#!/bin/sh
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
. /usr/share/openmediavault/scripts/helper-functions
cleanup() {
    omv_kill_children $$
    rm -f "/run/rsync-75f6051e-c7c2-4f8a-a574-54c78fdca828"
    exit
}
[ -e "/run/rsync-75f6051e-c7c2-4f8a-a574-54c78fdca828" ] && exit 1
if ! omv_is_mounted "/srv/dev-disk-by-label-Users/" ; then
    omv_error "Source storage device not mounted at </srv/dev-disk-by-label-Users/>!"
    exit 1
fi
if ! omv_is_mounted "/srv/dev-disk-by-uuid-cedb79ed-1935-416a-898a-9710adbd8f7c/" ; then
    omv_error "Destination storage device not mounted at </srv/dev-disk-by-uuid-cedb79ed-1935-416a-898a-9710adbd8f7c/>!"
    exit 1
fi
trap cleanup 0 1 2 5 15
touch "/run/rsync-75f6051e-c7c2-4f8a-a574-54c78fdca828"
omv_log "Please wait, syncing </srv/dev-disk-by-label-Users/> to </srv/dev-disk-by-uuid-cedb79ed-1935-416a-898a-9710adbd8f7c/UsersBackup/> ...\n"
rsync --verbose --log-file="/var/log/rsync.log" --archive --delete --progress --stats "/srv/dev-disk-by-label-Users/" "/srv/dev-disk-by-uuid-cedb79ed-1935-416a-898a-9710adbd8f7c/UsersBackup/" & wait $!
if [ $? -eq 0 ]; then
    omv_log "The synchronisation has completed successfully."
else
    omv_error "The synchronisation failed."
fi
exit 0
After recreating it, this was the working script:
#!/bin/sh
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
. /usr/share/openmediavault/scripts/helper-functions
cleanup() {
    omv_kill_children $$
    rm -f "/run/rsync-a7f3c994-a06c-4cbb-8e1f-028ae9b44db8"
    exit
}
[ -e "/run/rsync-a7f3c994-a06c-4cbb-8e1f-028ae9b44db8" ] && exit 1
if ! omv_is_mounted "/srv/dev-disk-by-label-Users/" ; then
    omv_error "Source storage device not mounted at </srv/dev-disk-by-label-Users/>!"
    exit 1
fi
if ! omv_is_mounted "/srv/dev-disk-by-uuid-cedb79ed-1935-416a-898a-9710adbd8f7c/" ; then
    omv_error "Destination storage device not mounted at </srv/dev-disk-by-uuid-cedb79ed-1935-416a-898a-9710adbd8f7c/>!"
    exit 1
fi
trap cleanup 0 1 2 5 15
touch "/run/rsync-a7f3c994-a06c-4cbb-8e1f-028ae9b44db8"
omv_log "Please wait, syncing </srv/dev-disk-by-label-Users/> to </srv/dev-disk-by-uuid-cedb79ed-1935-416a-898a-9710adbd8f7c/UsersBackup/> ...\n"
rsync --verbose --log-file="/var/log/rsync.log" --archive --delete "/srv/dev-disk-by-label-Users/" "/srv/dev-disk-by-uuid-cedb79ed-1935-416a-898a-9710adbd8f7c/UsersBackup/" & wait $!
if [ $? -eq 0 ]; then
    omv_log "The synchronisation has completed successfully."
else
    omv_error "The synchronisation failed."
fi
exit 0
Apart from the new job UUID, the only visible difference is the dropped --progress and --stats flags; I'm not sure if there were more changes elsewhere, but I'm posting here in case anyone finds this useful.
I have my persistent docker folders on the same drive.
Would another approach be to resize the system partition and move the persistent docker locations onto a new partition on the same drive?
Hey,
Trying to add a shared folder on the system drive - I can't see a way to do it in the GUI - has anyone else done this?
Thanks
Thank you, solved!
FWIW my nginx conf:
FWIW just going to add in my nginx conf here doing the same thing:
As per the title, not sure if this is possible.
I'm currently running a Let's Encrypt docker container which is correctly routing to several other services I'm using.
I'm wondering if I could also leverage it for the OMV web GUI, but I'm not sure whether that's possible.
I believe that to do this, I would need to run the Let's Encrypt container in host mode rather than bridge?
Has anyone here successfully done this, or have a better recommendation for how to do this please?
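This is roughly the kind of server block I have in mind (a sketch only - the subdomain, host IP, and certificate paths are placeholders, and OMV's GUI port is assumed to be the default 80):

```nginx
server {
    listen 443 ssl;
    server_name omv.example.com;  # hypothetical subdomain

    # certificate paths as issued by the Let's Encrypt container (assumed layout)
    ssl_certificate     /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    location / {
        proxy_pass http://192.168.1.10:80;  # host IP + OMV GUI port, adjust to your setup
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```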
Thanks!
Hey all,
Trying to set up LetsEncrypt reverse proxy with Docker, basically following https://www.youtube.com/watch?v=TkjAcp8q0W0 - the issue is, whenever I enter `--network my-net` into extra arguments for the docker container, I get the following error message:
cannot attach both user-defined and non-user-defined network-modes
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; docker run -d --restart=always -v /etc/localtime:/etc/localtime:ro --net=none -e LANGUAGE="en_US.UTF-8" -e TERM="xterm" -e AIRSONIC_HOME="/app/airsonic" -e PGID="100" -e PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" -e HOME="/root" -e LANG="C.UTF-8" -e AIRSONIC_SETTINGS="/config" -e PUID="1000" -e TZ="redcated" -v "/home/docker/airsonic":"/config":rw -v "/srv/dev-disk-by-label-Media/Music":"/music":rw -v "/podcasts" -v "/media" -v "/playlists" --name="Airsonic" --label omv_docker_extra_args="--network my-net" --network my-net "linuxserver/airsonic:latest" 2>&1' with exit code '125': docker: conflicting options: cannot attach both user-defined and non-user-defined network-modes. See 'docker run --help'.
I have even tried disabling the network option on the container entirely, but the error persists. I have changed and tested different nginx configurations. Subdomain configurations for the services were set up too, e.g. for Airsonic, but only the default landing page showed - probably because the services could not be discovered, since the --network argument was never applied. Even though the services show on the same bridge in the networks tab, I do not think that is enough.
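For reference, this is roughly what I'd expect the plain docker run command to look like with only the user-defined network - the generated command above contains both --net=none and --network my-net, which seems to be exactly the conflict Docker is reporting (sketch trimmed from the failing command, environment variables omitted):

```
docker network create my-net    # once, if it doesn't exist yet
docker run -d --restart=always \
  --network my-net \
  --name Airsonic \
  -v /etc/localtime:/etc/localtime:ro \
  -v /home/docker/airsonic:/config:rw \
  -v "/srv/dev-disk-by-label-Media/Music":/music:rw \
  linuxserver/airsonic:latest
```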
Any idea what I'm doing wrong?
Thanks!
19663cc141844fa288bdeafea60344f536e6b3a9
thought it might be useful