Backup that alters the source files would be dangerous and not make sense.
That's what I figured. Thanks for your reply and reassurance!
Read the wiki - https://wiki.omv-extras.org/do…v7_plugins:docker_compose - specifically the backup section that talks about the SKIP_BACKUP flag.
Perfect, so this is intended behavior and I should manually specify the /share directory to be skipped. Can I assume that the partial backup I interrupted was non-destructive and that all the files rsynced still exist at /share? I spot-checked a few specific files and everything seems to be intact at the source directory, but doing this from my phone makes it a bit trickier to verify.
If the backup doesn't alter or delete the source files, I'll just delete the backup and move forward from there.
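For future readers: per the wiki section linked above, the compose plugin's backup can skip a volume via the SKIP_BACKUP flag. The placement shown below is an assumption on my part (check the wiki for the exact current syntax); the idea is to mark the big data mount so that only /config gets backed up:

```yaml
services:
  sabnzbd:
    volumes:
      - /config/sabnzbd:/config
      - /share:/share # SKIP_BACKUP   (hypothetical placement; verify against the wiki)
```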
EDIT - I noticed that the sabNZBD backup process was actually still running and appears to have been copying the entire contents of /share into the container backup. I killed the rsync process and disabled the backup until I get my head around this.
I have just begun playing with the Compose plugin as a way to manage the docker containers I had created manually through Portainer or docker compose. I was interested in simplifying backups and restorations of containers and OMV itself, and it seems like this is the way to go.
My Compose settings are below. Config sits on my SSD system drive and is where I've been storing all my compose files and the persistent configuration of each container. Pool is a large mergerfs pool consisting of 20ish drives, and "share" is the single top-level folder that contains all the data. This is split into "storage" and "media," with each of those being further split, and on and on.
I began by migrating a few less used containers, MeTube and Bonob, which seemed to go well, so I then created one for my primary media downloading service, sabNZBD. The compose file looks like this:
services:
  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd:latest
    container_name: sabnzbd2
    environment:
      - PUID=1000
      - PGID=100
      - TZ=America/New_York
    volumes:
      - /config/sabnzbd:/config
      - /share:/share
    ports:
      - 38092:8080
    restart: unless-stopped
I also have backups enabled every Tuesday at 11PM.
At midnight each day, a snapraid sync and scrub runs, which usually takes 6-8 hours. When I noticed that it was still running after about 11 hours, I looked at the output and found that the backup of the sabNZBD container appears to have included the entire contents of the /share directory that was mapped inside the container. I assume these are hardlinks, as there's no way I could store duplicates of all the data in that folder. However, what I want to back up is only the compose file itself and the persistent configuration, not the entire contents of every mapped directory in each container.
I assume I've made an error with my configuration somewhere. Can anybody help me identify it? I'm trying to post an example of the line in the snapraid output that led me to these conclusions, but I am traveling and only have JuiceSSH on Android to access the system. I can't seem to figure out how to copy the text that extends past the width of the console window.
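As a workaround for reading lines wider than a phone terminal: send the job's output to a file and wrap it before viewing. A minimal sketch (the snapraid log path in the comment is just an example, not something OMV creates):

```shell
# Instead of scraping output off the screen, redirect it to a file, e.g.:
#   snapraid sync > /tmp/snapraid.log 2>&1
# Then wrap long lines to the screen width before reading them:
printf 'one-very-long-line-of-rsync-output-that-exceeds-the-terminal-width\n' > /tmp/demo.log
fold -w 20 /tmp/demo.log   # every emitted line is at most 20 characters wide
```

From there the wrapped file is easy to page through with less, or to copy out over SFTP.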
I am pulling a data drive from my SnapRAID array but don't see a way to follow the official FAQ steps using the OMV plugin. I have first transferred all data off this drive to other data drives in the array using rsync. Now, the official documentation says to follow these steps:
How can I remove a data disk from an existing array?
To remove a data disk from the array, do:
1. Change the related "disk" option in the configuration file to point to an empty directory.
2. Remove from the configuration file any "content" option pointing to that disk.
3. Run a "sync" command with the "-E, --force-empty" option:
snapraid sync -E
The "-E" option tells SnapRAID to proceed even when detecting an empty disk.
4. When the "sync" command terminates, remove the "disk" option from the configuration file.
Your array is now without any reference to the removed disk.
I know that I can't manually edit the config file since the plugin will overwrite it, so how do I point the drive ("a1" in my case) to an empty directory? This is my current config:
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
autosave 0
#####################################################################
# OMV-Name: a1 Drive Label: a1
content /srv/dev-disk-by-uuid-8d1467eb-1001-40d6-8ac6-c71a703d8a13/snapraid.content
disk a1 /srv/dev-disk-by-uuid-8d1467eb-1001-40d6-8ac6-c71a703d8a13
#####################################################################
# OMV-Name: a2 Drive Label: a2
content /srv/dev-disk-by-uuid-ff43d04a-fb0d-4bb3-819b-fc316127dcea/snapraid.content
disk a2 /srv/dev-disk-by-uuid-ff43d04a-fb0d-4bb3-819b-fc316127dcea
#####################################################################
# OMV-Name: a3 Drive Label: a3
content /srv/dev-disk-by-uuid-5f738cba-66ea-40d5-9466-e4b92eadbc6a/snapraid.content
disk a3 /srv/dev-disk-by-uuid-5f738cba-66ea-40d5-9466-e4b92eadbc6a
#####################################################################
# OMV-Name: a4 Drive Label: a4
content /srv/dev-disk-by-uuid-9b9f5be5-c08e-4a16-9c3f-fbd5c96512d3/snapraid.content
disk a4 /srv/dev-disk-by-uuid-9b9f5be5-c08e-4a16-9c3f-fbd5c96512d3
#####################################################################
# OMV-Name: b1 Drive Label: b1
content /srv/dev-disk-by-uuid-2de3d287-7125-418e-a72d-0241b394dac9/snapraid.content
disk b1 /srv/dev-disk-by-uuid-2de3d287-7125-418e-a72d-0241b394dac9
#####################################################################
# OMV-Name: b2 Drive Label: b2
content /srv/dev-disk-by-uuid-9b3b2e69-3f8d-4782-8367-3570893ed86b/snapraid.content
disk b2 /srv/dev-disk-by-uuid-9b3b2e69-3f8d-4782-8367-3570893ed86b
#####################################################################
# OMV-Name: b3 Drive Label: b3
content /srv/dev-disk-by-uuid-05fda9c9-1d96-46ed-9525-b8ffd665a8a3/snapraid.content
disk b3 /srv/dev-disk-by-uuid-05fda9c9-1d96-46ed-9525-b8ffd665a8a3
#####################################################################
# OMV-Name: b4 Drive Label: b4
content /srv/dev-disk-by-uuid-193024de-5ffa-42dd-b816-729337701129/snapraid.content
disk b4 /srv/dev-disk-by-uuid-193024de-5ffa-42dd-b816-729337701129
#####################################################################
# OMV-Name: c1 Drive Label: c1
content /srv/dev-disk-by-uuid-90539c0d-4d62-46b5-a8a2-39c3359eb2cb/snapraid.content
disk c1 /srv/dev-disk-by-uuid-90539c0d-4d62-46b5-a8a2-39c3359eb2cb
#####################################################################
# OMV-Name: c2 Drive Label: c2
content /srv/dev-disk-by-uuid-f288cf77-4120-4389-9b83-fa0ac6d69ab8/snapraid.content
disk c2 /srv/dev-disk-by-uuid-f288cf77-4120-4389-9b83-fa0ac6d69ab8
#####################################################################
# OMV-Name: c3 Drive Label: c3
content /srv/dev-disk-by-uuid-cf029430-ee73-4797-877e-3303f83363b2/snapraid.content
disk c3 /srv/dev-disk-by-uuid-cf029430-ee73-4797-877e-3303f83363b2
#####################################################################
# OMV-Name: c4 Drive Label: c4
content /srv/dev-disk-by-uuid-6557b78c-97a7-4be7-98e7-51f582f37ac2/snapraid.content
disk c4 /srv/dev-disk-by-uuid-6557b78c-97a7-4be7-98e7-51f582f37ac2
#####################################################################
# OMV-Name: a0 Drive Label: a0
parity /srv/dev-disk-by-uuid-ae14e7c3-50d8-4740-9530-aeba293f48fb/snapraid.parity
#####################################################################
# OMV-Name: b0 Drive Label: b0
2-parity /srv/dev-disk-by-uuid-13c10b20-6411-4c05-96c4-5379c83159ae/snapraid.2-parity
#####################################################################
# OMV-Name: c0 Drive Label: c0
3-parity /srv/dev-disk-by-uuid-ff9f3e27-a6ae-4723-ac77-a783a4f45630/snapraid.3-parity
#####################################################################
# OMV-Name: d1 Drive Label: d1
content /srv/dev-disk-by-uuid-ea81e6c8-0d4c-4be5-a3a6-9c33525bc491/snapraid.content
disk d1 /srv/dev-disk-by-uuid-ea81e6c8-0d4c-4be5-a3a6-9c33525bc491
#####################################################################
# OMV-Name: d2 Drive Label: d2
content /srv/dev-disk-by-uuid-acbc4170-336c-4e7b-981f-67cc46fb2dc5/snapraid.content
disk d2 /srv/dev-disk-by-uuid-acbc4170-336c-4e7b-981f-67cc46fb2dc5
#####################################################################
# OMV-Name: d3 Drive Label: d3
content /srv/dev-disk-by-uuid-9ea264e0-b0ac-426f-9bb0-e5d215b3b11c/snapraid.content
disk d3 /srv/dev-disk-by-uuid-9ea264e0-b0ac-426f-9bb0-e5d215b3b11c
#####################################################################
# OMV-Name: d4 Drive Label: d4
content /srv/dev-disk-by-uuid-219f3346-3bd3-4444-a1b7-5dcaf35e1b8c/snapraid.content
disk d4 /srv/dev-disk-by-uuid-219f3346-3bd3-4444-a1b7-5dcaf35e1b8c
#####################################################################
# OMV-Name: e1 Drive Label: e1
content /srv/dev-disk-by-uuid-9c4e1a5e-65bc-4bed-ad3b-6a3f6c645795/snapraid.content
disk e1 /srv/dev-disk-by-uuid-9c4e1a5e-65bc-4bed-ad3b-6a3f6c645795
#####################################################################
# OMV-Name: e2 Drive Label: e2
content /srv/dev-disk-by-uuid-a34a0bc3-484a-4f2f-a0a1-d57b889e265f/snapraid.content
disk e2 /srv/dev-disk-by-uuid-a34a0bc3-484a-4f2f-a0a1-d57b889e265f
#####################################################################
# OMV-Name: e3 Drive Label: e3
content /srv/dev-disk-by-uuid-51fad1d1-e53f-4422-a879-a2f119b0f0be/snapraid.content
disk e3 /srv/dev-disk-by-uuid-51fad1d1-e53f-4422-a879-a2f119b0f0be
#####################################################################
# OMV-Name: f1 Drive Label: f1
content /srv/dev-disk-by-uuid-3e079b60-3eef-4c46-971b-976b959a4f60/snapraid.content
disk f1 /srv/dev-disk-by-uuid-3e079b60-3eef-4c46-971b-976b959a4f60
#####################################################################
# OMV-Name: f2 Drive Label: f2
content /srv/dev-disk-by-uuid-1fefcc08-a926-4583-96f7-a4a5c12468c1/snapraid.content
disk f2 /srv/dev-disk-by-uuid-1fefcc08-a926-4583-96f7-a4a5c12468c1
#####################################################################
# OMV-Name: f3 Drive Label: f3
content /srv/dev-disk-by-uuid-a89a04d2-73be-486f-86c0-0ce6b2adf29a/snapraid.content
disk f3 /srv/dev-disk-by-uuid-a89a04d2-73be-486f-86c0-0ce6b2adf29a
exclude *.unrecoverable
exclude lost+found/
exclude aquota.user
exclude aquota.group
exclude /tmp/
exclude .content
exclude *.bak
exclude /snapraid.conf*
exclude /share/.plextemp/
exclude .bash_history
include /share/storage/
include /share/media/games/
include /share/media/unsorted/
exclude *.nfo
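For anyone following along, step 1 of the FAQ quoted above amounts to a one-line change to the a1 entry, pointing it at an empty directory (the /var/empty path below is just an example of an empty directory, not something the plugin generates; step 2 also removes the matching "content" line for that disk):

```
# Before:
disk a1 /srv/dev-disk-by-uuid-8d1467eb-1001-40d6-8ac6-c71a703d8a13
# After (then run: snapraid sync -E):
disk a1 /var/empty
```

The open question for the plugin is how to express this change in the UI, since manual edits to the auto-generated file get overwritten.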
Not sure I would reinstall OMV just because of this. If you can chroot into the install, you can install another kernel.
I think OMV 7 is safe. I am running it on all of my systems.
Well this was a daunting issue for me to stumble into this morning, but thanks to your suggestions, I'm sticking with 6.9.11 for a bit longer. I was able to chroot in and install a new kernel as you said. I'll post my process here on the off chance somebody else runs into the same issue (or, more likely, I somehow do it again in the future).
On a side note, I greatly appreciate your dedication to this project and all the time you put into helping the community.
Boot into a live Ubuntu ISO
Mount the OMV system drive: sudo mount /dev/nvme0n1p2 /mnt
Mount the EFI partition: sudo mount /dev/nvme0n1p1 /mnt/boot/efi
Mount the additional filesystems: for i in /dev /dev/pts /proc /sys /sys/firmware/efi/efivars /run; do sudo mount -B $i /mnt$i; done
chroot: sudo chroot /mnt
Install the new kernel: sudo apt install linux-image-x-amd64
Exit the chroot (Ctrl+D)
Reboot
Because it was running grub, I would've guessed you only needed to run update-grub.
Unfortunately it doesn't appear that I am so lucky. At this point should I work on backing up what I can and just reinstalling OMV?
If so, is OMV7 safe to install yet?
I would put the debian netinst iso on a usb stick and boot from it. Then choose to repair grub.
Just did this a couple of times after reading the documentation as it had been a while since I've done any of this. Ultimately, the boot still hangs at the same place.
As a sanity check, here's what I did in the live iso:
Mounted my system partition as root (/dev/nvme0n1p2)
Accepted the prompt to mount the /boot/efi partition
Selected to reinstall GRUB
Entered /dev/nvme0 as the device on which to install GRUB
Rebooted
Is there something I can verify by launching a shell and viewing the bootloader files?
I would put the debian netinst iso on a usb stick and boot from it. Then choose to repair grub.
Will give this a shot now, thanks.
I sure hope so because it's the only nvme drive installed.
Quoting myself... I did move my NVME system drive from one M.2 slot to a different slot a week or two ago, but the system has been booting fine since then, and it is still the only drive installed. I don't think that would affect anything but figured I would mention it.
Is that the correct nvme to boot from?
I sure hope so because it's the only nvme drive installed.
No clue then mate, sorry. Rye will probably know. ryecoaaron
Thanks for giving it a shot anyway!
For any future readers, when I edit the kernel options, this is what is there:
Is a debian kernel still installed on your system? Can you get your system to boot after selecting a debian kernel on the GRUB screen?
It appears to be, as the kernel is listed (along with memtest and UEFI settings) as an option in the GRUB menu, but I am not sure how to confirm that.
Can you choose the other kernel on GRUB?
The only other kernel available is the recovery version of the same kernel, but selecting it also causes the system to hang the same way. In fact, even trying to launch memtest results in the same issue.
Ah OK, well that may explain that issue. My bigger concern now is figuring out how to get my system to boot again.
I believe I may have quite some time ago, but not as part of this recent update process.
Today I attempted to install the KVM plugin, but it appeared to fail due to some other out-of-date packages. I refreshed the available updates and installed them (in the UI) and received the "connection lost" error. I refreshed my browser using Ctrl+Shift+R, but some updates still remained. I waited a while and then rebooted the server. However, the web UI would not load. I connected a monitor and found that the system hangs at the "Loading Linux 5.15.131-2-pve" message after the GRUB menu. Does anybody have suggestions on how to fix the system?
No, not using the folder2ram plugin.
There is indeed a /etc/init/php7.3-fpm.conf with this in it:
# php7.3-fpm - The PHP FastCGI Process Manager
description "The PHP 7.3 FastCGI Process Manager"
author "Ondřej Surý <ondrej@debian.org>"

start on runlevel [2345]
stop on runlevel [016]

# you can uncomment this with recent upstart
# reload signal USR2

pre-start script
    mkdir -p /run/php
    chmod 0755 /run/php
    chown www-data:www-data /run/php
end script

respawn
exec /usr/sbin/php-fpm7.3 --nodaemonize --fpm-config /etc/php/7.3/fpm/php-fpm.conf
But then WHY does it not make the directory?? It always worked until Jan 6...
I made a script of my own, put it in /root/bin, and let rc.local call it... and it works!
Do you have any updates on this issue? I am experiencing the same problem.
On OMV6 I have the same problem. The /run/php folder is not present after rebooting the system.
The only plugin I'm using is the remotemount plugin. When I delete it, it is working and /run/php is still present after rebooting.
For now I'm using a cronjob to reinstall php7.4-fpm @reboot. Not nice, but it works for now.
Does anyone have an idea why the folder is deleted?
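A likely cleaner workaround than an @reboot cronjob, assuming the underlying cause is simply that /run is a tmpfs recreated empty at every boot: a systemd-tmpfiles drop-in that recreates the directory early in boot. The file name below is arbitrary; the mode and ownership mirror what the php-fpm pre-start script quoted above sets up:

```
# /etc/tmpfiles.d/php-fpm.conf  (file name is arbitrary)
# type  path      mode  user      group     age
d       /run/php  0755  www-data  www-data  -
```

systemd-tmpfiles applies this at boot (and on demand via systemd-tmpfiles --create), so the directory exists before php-fpm starts.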
Any update on this? I'm having the same problem now, and I am also not using the flashmemory plugin (OMV6 installed on an NVMe drive).
It definitely didn't. flashmemory only copies files between a tmpfs bind mount and a mounted filesystem. This happens *after* the filesystems are mounted. Now, if your system is low on memory and the copy from the mounted filesystem to the bind mount fills the tmpfs mount, it could cause problems. The sync folder2ram does between tmpfs and the mounted filesystem at shutdown is very important, so bad things can happen if this never happens. But it is hard to say while your system is mounted read-only. You are using mergerfs as well. If a filesystem wasn't ready when the pool was to be assembled, the system could possibly be mounted read-only. kern.log or messages in /var/log/ might be good to look at.
Thanks for chiming in. While I had the mergerfs plugin installed, I hadn't actually created a pool with it yet, since the filesystems that were going to be used in the pool couldn't be mounted without the issues discussed earlier in the thread.
Ultimately, I was able to get my setup to work fine just by avoiding the flashmemory plugin (after 3 fresh installs using it that all failed), so I have to imagine it's somehow involved. As long as nobody else is having issues, maybe it was a fluke or an issue with my system drive, who knows...
Well my disk was giving some errors regarding the superblock having an invalid journal and a corrupt partition table, so I used GParted to wipe the OS drive and install OMV6 once again. This time I did everything EXCEPT install the flashmemory plugin and have had no issues whatsoever. I think this is the likely culprit by process of elimination. Thanks for spending so much time working through this with me.
ryecoaaron, any idea how flashmemory would render my root drive read-only?
(Edit - See below; not fixed as I had hoped.) Since I had some time to kill and nothing to lose, I did a fresh installation of OMV 6. I followed almost exactly the same process, but this time I was able to mount all my filesystems without issue. Either the whole thing was a fluke, or one of the following things caused the error (I didn't do any of these before mounting the filesystems, unlike the first time, when I experienced all the problems):
Edit - Well, now I am unable to access the GUI (Error 500 - Internal Server Error, Failed to Connect to Socket). This time I installed omv-extras and all the plugins listed above AFTER everything was mounted. I have no evidence to support this, but I feel like it may be flashmemory. I noticed that it was not running (red status on the dashboard), realized I had never rebooted after installing, and rebooted to see if the service would run. Immediately I was faced with this new issue.
I found this thread which sounded similar, and tried the command that was suggested there:
dpkg --configure -a
dpkg: error: unable to access the dpkg database directory /var/lib/dpkg: Read-only file system
And then, to test this, I did the following:
So, somehow my root filesystem has been turned read-only. Thoughts?
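A quick way to confirm the read-only state (and whether the kernel forced the root filesystem read-only after a filesystem error) is to inspect the mount options for / - a minimal sketch:

```shell
# Print the mount options for the root filesystem; look for "ro" vs "rw"
# at the start of the options field:
awk '$2 == "/" { print $4 }' /proc/mounts
# The kernel log often records why it happened:
#   dmesg | grep -i 'read-only'
# A temporary remount can be attempted (it may fail if the fs has errors):
#   mount -o remount,rw /
```

If the kernel remounted the disk read-only on its own, fsck of the root filesystem from a live ISO is usually the next step rather than remounting.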