Thanks
EDIT: I can't find a reason for it. If memory serves, this reverts a change sometime in 2019 or early 2020.
Prior to the update executed on my systems on 27-05, the script variable $HOSTNAME was filled with "<system name>.<domain name>". Since that update (and without me changing anything), it is filled with just "<system name>", i.e. without ".<domain name>". As I use it in scripts and email message filters, this change is an unpleasant surprise, especially since I can't find any documentation about it. Can somebody please point me to the documentation about this change and/or the reasons for it?
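For anyone hitting the same change, a workaround is to ask for the fully qualified name explicitly instead of relying on $HOSTNAME. A minimal sketch, assuming the domain is still known to the system via /etc/hosts or DNS:
#!/bin/bash
# hostname -f prints the fully qualified name; fall back to the short name if no domain is configured.
fqdn=$(hostname -f 2>/dev/null || hostname)
echo "Fully qualified host name: $fqdn"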
Did the link to the pull request not prove that I am doing this? Oddly enough, I still have not run into this issue. But maybe that is because I keep using the 64-bit image, which does not use Raspbian packages; it is pure Debian.
Apologies, I misunderstood. And I am only running into this issue on one of my two installations. Due to some corruption I had to reinstall one last Saturday and on that one the problem crops up. The problem does not show on the other installation, which was last reinstalled last September.
The only packages that an RPi should use from debian-backports are cockpit and borgbackup. I see no harm in using either.
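For completeness, pulling one of those packages from backports explicitly looks roughly like this; a sketch that assumes the Buster backports repo is already listed in your apt sources:
# Install cockpit from the backports repo instead of the regular Buster repo
# (assumes "deb http://deb.debian.org/debian buster-backports main" is configured).
sudo apt update
sudo apt install -t buster-backports cockpit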
I changed the install script and submitted a pull request to not add the security repo on RPis. Old installs will have to set the environment variable themselves. I guess I could have an update of omv-extras do it.
Please do, as this only occurs (at least for me) on recent, new installs; I don't experience the problem on an older installation.
I had exactly the same scenario.
"apt clean && apt install libapt-inst2.0" helped
This has happened to me at least three times. At least it's a one-line cleanup instead of many.
Thx
Thanks, that helped for me as well on an installation that is less than a day old (had to reinstall).
No, it isn't. You will break apt-get update if the repo file points to an invalid IP. Just run: sudo truncate -s 0 /etc/apt/sources.list.d/vscode.list
Thanks. I first commented out the active line in that file; I just believe in belt and suspenders where some companies are involved. If it breaks apt update, I'll know it got reactivated without my knowledge or permission.
Thanks, just the info I needed for this problem:
I see one thing:
Quote
opt/jdownlaoder/downloads
Is that a typo or did you copy/paste that?
I did; now I've got 3 left, not sure why that happened.
Most likely because one (or maybe more) container(s) needed some place to store some data and you didn't provide a fixed location (volume/bind mount) for it, so every time that container is started it creates a new volume. With the above, I'd say the solution is elementary.
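A minimal sketch of what such a fixed location looks like; the container name, image, and host path are only example values, and the path inside the container depends on the image you use:
# Bind-mount a folder on one of your data drives into the container,
# so its data ends up there instead of in a fresh anonymous volume on every start.
# /srv/dev-disk-by-label-data/appdata/mycontainer and /config are example paths.
docker run -d --name mycontainer \
  -v /srv/dev-disk-by-label-data/appdata/mycontainer:/config \
  myimage:latest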
Is it normal for a pihole docker container to lose some settings after it has been stopped and then started?
I recently added some fans to my case and I had to stop the pihole container before shutting down omv.
When I brought omv back online my pihole container was already started but I had to redo my upstream dns settings, my conditional forwarding and my regex lists.
Is there something I need to do to make sure my container settings are saved?
Thanks.
I don't know enough about the rest, but were your upstream DNS settings passed as environment variables during the original container creation, or did you fill them in after you started it? If the former, you may have a problem; if the latter, I recommend you change that.
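To illustrate what passing them at creation time means, a rough sketch; DNS1/DNS2 were the variable names the Pi-hole image used at the time (newer images use PIHOLE_DNS_ instead), and the timezone and DNS addresses are only example values:
# Start Pi-hole with the upstream DNS servers baked in as environment variables,
# so they survive the container being removed and recreated.
docker run -d --name pihole \
  -e TZ=Europe/Amsterdam \
  -e DNS1=9.9.9.9 \
  -e DNS2=149.112.112.112 \
  pihole/pihole:latest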
I just installed Yacht and it immediately showed me a problem with one container that wasn't visible with Portainer until I did a deep inspection, something I wouldn't have done without some indication there was a problem.
It was a problem (continuous restarting) which also showed up in the old Docker plug-in and which still doesn't show in Portainer, something I pointed out before if memory serves.
But, for Docker beginners, there's something to be said for being able to pull a single image and run a container from it in a simple manner. That's a learning tool and a first step, from which an understanding of Docker compose becomes possible.
That is how I began with OMV4 and the Docker plug-in. And I started on Docker because I couldn't get Pi-Hole working, so I went for the Docker version, which necessitated macvlan. With the Docker plug-in I managed; with Portainer I still haven't been able to create a macvlan. At present I use the command line to create macvlans and start the containers, and I use Portainer only for monitoring (when I suspect something is wrong) and to kill and/or remove a container, as that is where a GUI comes in handy. For some reason I never got docker-compose working, and I don't feel the need any more as I keep all my commands for the various containers and macvlans as text files on my laptop, where they are available if I have to reinstall an RPi.
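For reference, the kind of command-line macvlan creation I mean looks roughly like this; the subnet, gateway, IP range and parent interface are example values that have to match your own LAN:
# Create a macvlan network bound to the Pi's wired interface (example values, adjust to your network).
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  --ip-range=192.168.1.224/28 \
  -o parent=eth0 \
  pihole_macvlan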
And yes, I still miss that possibility to pull a single image and run a container from it, even though I don't need it any longer. It made it a lot easier to find a nice image doing what you are looking for.
Should have looked at the bottom first, recurring problem.
Hi, sorry for piggybacking on to this thread, but I came looking for a way to do just this: clone my installation drive onto a USB drive before I do a wipe and clean install, because I was worried that I might need to recover. Anyway... I'm a complete noob who set up OMV using some guides many years ago. Could you share how I can clone my OS drive onto a USB drive? I've read about Clonezilla and also something about a backup plugin, but both assumed I create a folder on one of my data drives.
Any help would be appreciated!
My script also requires a folder on one of your data drives as a target for the backup/clone, and you will have to mount it yourself as well. After that, a cronjob for a nightly backup is quite easy (a sketch of the cron entry follows after the script). My current script is below (as you can see, some old, commented-out variants are still visible).
#!/bin/bash
#
# Automate Raspberry Pi Backups
#
# Usage: system_backup.sh {path} {days of retention}
#
# Below you can set the default values if no command line args are sent.
# The script will name the backup files {YYYYmmdd_HH}.{$HOSTNAME}.img
# When the script deletes backups older than the specified retention
# it will only delete files with its own $HOSTNAME.
#
# Declare vars and set standard values
backup_path=/mnt/backup
retention_days=3
block_size=4M
ImageName=$(date +%Y%m%d_%H).$HOSTNAME.img
DspName=000_$HOSTNAME
# Check that we are root!
if [[ ! $(whoami) =~ "root" ]]; then
echo ""
echo "**********************************"
echo "*** This needs to run as root! ***"
echo "**********************************"
echo ""
exit
fi
# Check to see if we got command line args
if [ -n "$1" ]; then
backup_path=$1
fi
if [ -n "$2" ]; then
retention_days=$2
fi
# Create trigger to force file system consistency check if image is restored
touch /boot/forcefsck
# Create file showing the name so SD cards are identifiable
echo $(date +%Y-%m-%d_%H) > /boot/$DspName
# Now flush the buffers so the files are included in the backup!
echo $(date +%Y-%m-%d_%H:%M:%S) Start Flush > $backup_path/$DspName
sync; echo 1 > /proc/sys/vm/drop_caches
# And it is nice to have a log of the actions.
echo $(date +%Y-%m-%d_%H:%M:%S) End Flush, Start Backup >> $backup_path/$DspName
# Perform backup
# dd if=/dev/mmcblk0 of=$backup_path/$HOSTNAME.$(date +%Y%m%d).img bs=4M
# dd if=/dev/mmcblk0 bs=$block_size | gzip > $backup_path/$HOSTNAME.$(date +%Y%m%d_%H).img.zip
dd if=/dev/mmcblk0 of=$backup_path/$ImageName bs=$block_size
echo $(date +%Y-%m-%d_%H:%M:%S) End Backup, Start Zip >> $backup_path/$DspName
gzip -c $backup_path/$ImageName > $backup_path/$ImageName.zip
echo $(date +%Y-%m-%d_%H:%M:%S) End Zip, Start Clean Up >> $backup_path/$DspName
rm $backup_path/$ImageName
# Remove fsck trigger
rm /boot/forcefsck
# Delete old backups
# find $backup_path/$HOSTNAME.*.img.zip -mtime +$retention_days -type f -delete
find $backup_path/*.$HOSTNAME.img.zip -mtime +$retention_days -type f -delete
echo $(date +%Y-%m-%d_%H:%M:%S) End Clean Up >> $backup_path/$DspName
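And the nightly cronjob that calls it looks roughly like this; a sketch for /etc/cron.d, where the script location and the backup share path are example values:
# /etc/cron.d/system_backup: run the backup every night at 03:00,
# keeping 3 days of images on the mounted backup share (example paths, adjust to yours).
0 3 * * * root /usr/local/bin/system_backup.sh /srv/dev-disk-by-label-backup/system_backup 3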
Did you mount the file systems? And for some reason your screenshot won't load.
Nope. That means a file didn't install or uninstall correctly. You have to post all output for people to help with that.
It was in his post #307.
One reason I suggested he switch to English.
First of all, I would suggest switching to English.
To me that seems like you are having network and/or connection problems; I don't have any problem reaching ftp.de.debian.org or httpredir.debian.org. For dl.bintray.com, however, I do get an HTTP ERROR 404.
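A quick way to check from the machine itself whether those hosts answer at all (plain curl, nothing OMV-specific; a timeout or connection error would point at your network or DNS):
# Print just the first response line from each mirror.
curl -sI http://ftp.de.debian.org/debian/ | head -n 1
curl -sI http://httpredir.debian.org/ | head -n 1
curl -sI https://dl.bintray.com/ | head -n 1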
Hi, erreib.
In fact, a lot of hosting companies offer their services. The choice of hosting is very important for the further development of your site. First of all, you need to look at what they offer. DDoS attacks are currently a very common reason for sites going down, so you need to find a hosting service with good DDoS protection. Assistance from the hosting company's staff is also very important; many hosting companies help you move the entire site, fix problems, and install the necessary plugins. I own two sites and have found the perfect hosting service, Mangomatter Media. They have really helped me with everything, and in 4 years my sites haven't run into problems. Treat the site as a living entity and you can earn a lot of money with it, as I do. If you have any questions, write to me and I will help.
Why did you kick this one after nearly a year of quietude?