It was a long time ago and I don't remember the details but apparmor was hosing many of my containers. I uninstalled it.
Posts by gderf
-
-
Can't tell anything from that
-
-
Somehow, and I don't know how, the filesystem UUIDs on both disks are the same, and that same UUID appears throughout both grub.cfg files.
I know I changed this on my OMV6 disk after cloning it to upgrade the clone to OMV7, but I may have since restored the OMV6 disk from a backup that had the unchanged UUID.
Simple enough to fix.
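For anyone who hits the same thing, the fix boils down to something like this. This is only a sketch: /dev/sdb1 and the OLD-UUID/NEW-UUID placeholders are hypothetical, it assumes ext4, and the commands are printed rather than run here since they modify a real disk.

```shell
# Sketch only: /dev/sdb1 and the UUID placeholders are hypothetical.
# Collected in a variable and printed, not executed, because these
# commands alter a real disk and its boot configuration.
fix_steps='
tune2fs -U random /dev/sdb1                 # give the clone a fresh filesystem UUID
blkid /dev/sdb1                             # read the new UUID back
sed -i "s/OLD-UUID/NEW-UUID/g" /etc/fstab   # on the clone root filesystem
update-grub                                 # regenerate grub.cfg with the new UUID
'
printf '%s' "$fix_steps"
```

The sed and update-grub steps have to be done on the clone itself (booted, or via chroot), not on the original install.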
Thanks for your help.
-
Recently there was a large set of updates, which I installed in the shell. During that process a screen appeared asking me to select which drives to install Grub on, and the list showed every drive in my system, all 13 of them. I thought I selected the correct two drives, but it didn't work. My OMV7 install is bootable, but when I try to boot my OMV6 install via the system's boot selection (BBS) popup, it boots OMV7 instead.
I tried installing Grub to the two OMV disks like this:
But when I rebooted and selected the OMV6 disk, it booted OMV7.
Does anyone know the shell command that presents the table of all disks and lets you select which ones to install grub on?
-
If it helps at all, on the first install my ethernet adapter was eth0; on the last install it became end0. I've never seen that before.
-
What I didn't quite understand is how you manage to switch booting to that older stick. It sounds like you can do this remotely.
So far I need physical access to the machine to enter the BIOS and switch the boot device, which also means attaching a screen and a keyboard. That's one more reason I want it as simple as possible, since I can't attach additional devices where the NAS is running... (I'd need to relocate it to a desk or so.)
But being able to do this remotely sounds interesting to me.
I would be happy if you could point me in the direction of how to do this.
The machine has IPMI access, reachable over the network via a serial-over-LAN (SOL) connection. This is provided by the motherboard, which has a system on a chip (SoC) running software, probably some cut-down Linux variant, connected to a dedicated ethernet port. Client access is via a terminal shell to the IP address bound to that port.
In the terminal it looks just like an ssh connection, or like the console normally available via a monitor and keyboard.
The SoC runs all the time as long as power is applied to the motherboard, even when the machine is shut down. The SoC can also be accessed via a browser, which allows certain commands to be run, displays status information, and so on.
This functionality is not available on ordinary home use type motherboards.
So, yes, I can power the machine on and off, reboot it, select the boot device, and enter the BIOS, all remotely over the network, and, if I forwarded a port on the router, from anywhere via the internet. There is no monitor or keyboard attached in my use case, but those ports are available, and the machine is locked in a safe. The only times I have physically accessed it were to add more hard drives and some more RAM.
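For reference, with a stock ipmitool client the remote operations I described look roughly like this. A sketch only: the BMC address and credentials are placeholders, and the commands are listed rather than executed since there is no BMC here.

```shell
# Sketch: 192.168.1.50 and the credentials are placeholder BMC details.
# Collected in a variable and printed, not run, since no BMC is present.
ipmi='ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret'
remote_ops="
$ipmi chassis power status   # is the machine on?
$ipmi chassis power on       # power it up remotely
$ipmi chassis bootdev bios   # drop into the BIOS on the next boot
$ipmi sol activate           # attach to the serial-over-LAN console
"
printf '%s' "$remote_ops"
```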
-
That's normal. Something else is causing the problem. But I don't know what. I'd look at your browsers closely.
-
Is that with the container running? Try again after stopping it if it is running.
-
-
Here's my log. It shows a few more process lines. I didn't compare it line for line though.
Code
2024-07-22T19:05:39.862549620Z /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
2024-07-22T19:05:39.862593951Z /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
2024-07-22T19:05:39.865409927Z /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
2024-07-22T19:05:39.874458595Z 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
2024-07-22T19:05:39.885448380Z 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
2024-07-22T19:05:39.885596232Z /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
2024-07-22T19:05:39.885684198Z /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
2024-07-22T19:05:39.889386769Z /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
2024-07-22T19:05:39.890589431Z /docker-entrypoint.sh: Configuration complete; ready for start up
2024-07-22T19:05:39.900068135Z 2024/07/22 19:05:39 [notice] 1#1: using the "epoll" event method
2024-07-22T19:05:39.900089038Z 2024/07/22 19:05:39 [notice] 1#1: nginx/1.27.0
2024-07-22T19:05:39.900094814Z 2024/07/22 19:05:39 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024-07-22T19:05:39.900098563Z 2024/07/22 19:05:39 [notice] 1#1: OS: Linux 6.1.0-23-amd64
2024-07-22T19:05:39.900102153Z 2024/07/22 19:05:39 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024-07-22T19:05:39.900190353Z 2024/07/22 19:05:39 [notice] 1#1: start worker processes
2024-07-22T19:05:39.900327671Z 2024/07/22 19:05:39 [notice] 1#1: start worker process 29
2024-07-22T19:05:39.900438329Z 2024/07/22 19:05:39 [notice] 1#1: start worker process 30
2024-07-22T19:05:39.900532732Z 2024/07/22 19:05:39 [notice] 1#1: start worker process 31
2024-07-22T19:05:39.900658833Z 2024/07/22 19:05:39 [notice] 1#1: start worker process 32
2024-07-22T19:05:39.900820406Z 2024/07/22 19:05:39 [notice] 1#1: start worker process 33
2024-07-22T19:05:39.900980841Z 2024/07/22 19:05:39 [notice] 1#1: start worker process 34
2024-07-22T19:05:39.901184769Z 2024/07/22 19:05:39 [notice] 1#1: start worker process 35
2024-07-22T19:05:39.901310682Z 2024/07/22 19:05:39 [notice] 1#1: start worker process 36
-
Would you be willing to try the one the dockerhub forum suggested to me? It should prove whether or not things are working.
Works for me. As do the other dozen containers I run here.
What does the container log say about this one?
-
I run a script that uses dd to make a nightly backup of my system drive. The image is stored on a data drive and I keep the most recent seven images.
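My script boils down to something like the following. This is a sketch, not my actual script: the function name, the device path, and the "system-" naming scheme are made up for illustration.

```shell
#!/bin/sh
# Sketch of a nightly dd image backup with rotation. The function name,
# paths, and image naming are illustrative, not from the real script.
backup_image() {
    src=$1                  # block device to image, e.g. /dev/sde
    dest=$2                 # directory on the data drive
    keep=${3:-7}            # how many images to retain
    img="$dest/system-$(date +%Y%m%d-%H%M%S).img"
    dd if="$src" of="$img" bs=4M conv=fsync 2>/dev/null
    # Rotate: keep only the newest $keep images
    ls -1t "$dest"/system-*.img | tail -n +"$((keep + 1))" | xargs -r rm -f
}
```

Run from cron it would look something like `backup_image /dev/sde /srv/backups 7` (paths hypothetical).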
There are backup plugins within OMV, but since I don't use them I don't know anything about them. Look them over for yourself, maybe one will do what you need.
-
What IP address in the container are you trying to communicate with? How did you determine this address? How are you trying to communicate with the container, and from where?
-
I can't log in with my admin password via CLI or ssh
The admin user intentionally does not have CLI or SSH access enabled.
-
Look in /etc/fail2ban/jail.conf
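Something like this will show the relevant lines. A sketch only; note the live settings may come from /etc/fail2ban/jail.local or a jail.d/ override rather than jail.conf.

```shell
# Sketch: print any ignoreip settings found in a fail2ban config file.
show_ignoreip() {
    grep -E '^[[:space:]]*ignoreip' "$1" 2>/dev/null \
        || echo "no ignoreip line in $1"
}
```

Usage would be `show_ignoreip /etc/fail2ban/jail.conf`, and likewise for jail.local.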
Is that IP in the ignoreip list?
-
Ok, thanks, but what is the path of the file? I know it exists because I already found it, but I forgot where it is.
/var/log/journal, but there are many files there, none of which are human-readable.
Use the journalctl program to read the journal. You'll probably want to read the man page.
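A few journalctl invocations cover most cases. A sketch: the ssh unit name is just an example, and the commands are listed rather than run here since they need a live systemd journal.

```shell
# Sketch: common journalctl invocations, printed rather than executed
# (they need a systemd journal). The ssh unit name is an example.
examples='
journalctl -b                    # messages from the current boot
journalctl -e                    # jump to the end of the journal
journalctl -f                    # follow new messages, like tail -f
journalctl -u ssh                # messages from one service unit
journalctl --since "1 hour ago"  # time-filtered view
'
printf '%s' "$examples"
```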
There is also the /var/log/syslog file and its rotations.
-
You wouldn't have unpredictable behavior until the machine is rebooted.
You could do what I do: write a dd image of the boot drive to an image file on another disk. But recovery from this image file isn't exactly trivial.
I still have my old OMV6 USB stick in the machine, and I can boot to that instance when desired or needed, selecting it interactively at startup in a remote IPMI terminal. From there I can write the image file to a new, unused spare USB stick that is always in the machine by running dd in the console shell. The machine is headless, so all of this happens from a remote machine, which could be anywhere with network access. Once that is done I can reboot, select the newly written USB stick, and set it as the default boot disk. I would also have to modify the backup script to use the new USB stick as the source for future backups if I didn't want to swap the old stick for the new one.
I have been making automated daily dd image file backups like this since starting with OMV2 nine years ago (that's over 3200 backups). I keep the seven most recent images.
-
You can NOT use the admin user to log in via ssh or the console in the default OMV configuration. Use the root user instead.
-
Read the container's log file.
What machine are you trying to browse to plex.tv/claim on? If you are getting a black screen, something is wrong with that machine or its browser. Try another browser or clear its cache.
It's difficult to help when you have provided very little information beyond the fact that it does not work.