You can boot a linux live distro on your laptop/PC and do this.
So, add the new disk to a Linux system, format it as ext4, change the label to volume2, and put it back in my Odroid? I guess that is worth a shot!
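A minimal sketch of that plan on a throwaway image file, so it is safe to try first. On the real disk you would use the device node instead (e.g. /dev/sdb1, which is a placeholder here, not from this thread; check with lsblk before running anything destructive):

```shell
# Stand-in for the new disk: a scratch image file instead of a real device
truncate -s 64M /tmp/volume2-demo.img
# Format as ext4 and set the label in one go (-F allows a regular file)
mkfs.ext4 -q -F -L volume2 /tmp/volume2-demo.img
# Verify / change the label later without reformatting
e2label /tmp/volume2-demo.img                # prints: volume2
```

On the real hardware the same two tools apply: `mkfs.ext4 -L volume2 /dev/sdX1` when formatting, or `e2label /dev/sdX1 volume2` on an already-formatted, unmounted filesystem.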
Sorry. I meant label, not ID
On OMV (currently) drives are mounted as "dev-disk-by-label-xxx".
Previously they were mounted as "dev-disk-by-UUID-xxxxx"
OK, that is indeed the case; the broken disk is still visible as /dev/disk/by-label/volume2.
So how would I go about setting the same label for the new disk? That cannot be done while the old disk is still present, and I cannot remove the old disk because the shares (and probably other things) still have references to it.
I will try adding the old UUID to the new disk first, to see if that does the trick. If it doesn't, it looks like the only way is to go with @Adoby's suggestion, cleanup all references, remove the old disk and add the new disk.
If the disk is mounted "by-id", shouldn't it be enough to just adjust the ID? I would try it and see if it works.
Where would I adjust the ID? Can you elaborate on this?
It also seems to be possible to change the UUID if needed:
askubuntu.com/questions/132079…a-disk-to-whatever-i-want
This is an interesting option. I guess I could give this a shot as soon as my new disk arrives, since I have nothing to lose (I'll make a backup of my SD boot card first)
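For reference, changing the UUID of an ext4 filesystem is done with tune2fs on an unmounted filesystem. A hedged sketch on a scratch image file; the UUID value below is made up for the demo (on the real disk you would pass the old disk's UUID and the new device node):

```shell
# Scratch image standing in for the new, unmounted disk
truncate -s 64M /tmp/uuid-demo.img
# Build ext4 without metadata_csum so tune2fs -U runs non-interactively
mkfs.ext4 -q -F -O ^metadata_csum /tmp/uuid-demo.img
# Set an explicit UUID (made-up value; substitute the old disk's UUID)
tune2fs -U 12345678-1234-1234-1234-123456789abc /tmp/uuid-demo.img
# Confirm it took effect
tune2fs -l /tmp/uuid-demo.img | grep 'Filesystem UUID'
```

The old disk's UUID can be read beforehand with `blkid` while it is still attached.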
Thanks Adoby. OK, so there is no easy way to replace the existing volume with a new one and keep the volume name?
Hi, I have a SBC (Odroid HC2), with just one 3TB data disk and an SD-card containing the boot partition.
I make regular backups of said data disk with Duplicati.
Last weekend, the data disk crashed, and I am planning to replace it with a new one (probably an SSD while I’m at it).
What would be the best way to go about this? Can I simply replace the now broken disk (shows “missing” in OMV) with the new one, give it the same volume name, and restore my folders? It looks like I cannot delete the “missing” volume from the GUI...
I'm starting to think the start options and environment variables are kicked off in the Docker run command when the container is started and are not actually stored anywhere, could that be true?
Still doesn't explain how Watchtower knows about all of them, though...
Let me clarify: The options I am using to start a container with (see screenshot) are saved in appdata outside Docker?
Where would I be able to find them on a Debian OMV install? Can you give me an example of such a config file on the file system?
Hi, I am kind of new to Docker but have quite a lot of containers running nonetheless and love it!
There is something I can't get my head around though, and I hope someone can explain this.
When I run a container from an image, and set a lot of start options/environment variables, these are visible in the details of the running container. My question is, where are these saved? Are they saved to a Dockerfile somewhere?
When Watchtower starts to update images and re-create containers, it knows what run options and environment variables to set, so it must get them from somewhere, right?
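From what I can tell (hedged, not confirmed in this thread): the daemon itself stores each container's full configuration on disk, typically under /var/lib/docker/containers/&lt;id&gt;/config.v2.json and hostconfig.json (root-readable only), and tools like Watchtower read the same data back over the Docker API, the equivalent of `docker inspect`. A small demo of what the stored Env list looks like, using a made-up JSON snippet so nothing here touches a real daemon:

```shell
# Made-up miniature of a container's stored config (real file:
# /var/lib/docker/containers/<id>/config.v2.json)
cat > /tmp/config-demo.json <<'EOF'
{"Config": {"Env": ["PUID=1000", "PGID=100", "TZ=Europe/Amsterdam"]}}
EOF
# 'docker inspect --format '{{json .Config.Env}}' <name>' shows the same
# structure for a live container; here we just parse the demo snippet:
python3 -c "import json; print('\n'.join(json.load(open('/tmp/config-demo.json'))['Config']['Env']))"
```

So the options are not written to a Dockerfile; they live in the daemon's state directory, which is why recreating a container (rather than restarting it) needs them to be passed again or read back via the API.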
This worked for me too!
I noticed /opt/EasyRSA-3.0.3/openssl-1.0.cnf holds the following variables:
default_days = $ENV::EASYRSA_CERT_EXPIRE # how long to certify for
default_crl_days= $ENV::EASYRSA_CRL_DAYS # how long before next CRL
Does anyone know where to access/change these variables?
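For what it's worth, in EasyRSA 3 those `$ENV::` references resolve to environment variables, which are usually set with `set_var` lines in the `vars` file next to the `easyrsa` script (so presumably /opt/EasyRSA-3.0.3/vars here; the exact path is an assumption). They can also be exported for a single run. A sketch:

```shell
# In the vars file (values here are examples, not defaults):
#   set_var EASYRSA_CERT_EXPIRE  3650   # days a signed certificate stays valid
#   set_var EASYRSA_CRL_DAYS     180    # days until the next CRL is due
# Or override per invocation via the environment:
export EASYRSA_CERT_EXPIRE=3650
echo "cert expiry: ${EASYRSA_CERT_EXPIRE} days"
```

Certificates already issued keep their original expiry; the variables only affect newly signed certs and CRLs.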
OK, I figured this out. There were still entries referring to the removed disk in /etc/monit/conf.d/openmediavault-filesystem.conf. I manually removed the entries there, rebooted, and the error is gone.
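The search step can be sketched like this; the demo below runs on a throwaway copy so it is safe to try (the real files are /etc/monit/conf.d/*.conf, and the sample line is an approximation of an OMV-generated entry, not copied from a real install):

```shell
# Scratch directory standing in for /etc/monit/conf.d/
mkdir -p /tmp/monit-demo
printf 'check filesystem srv-dev-disk-by-label-volume2 with path /srv/dev-disk-by-label-volume2\n' \
  > /tmp/monit-demo/openmediavault-filesystem.conf
# Find which conf file still references the removed disk
grep -l 'volume2' /tmp/monit-demo/*.conf
# On the real system: delete the stale 'check filesystem' block from that
# file, then restart monit (systemctl restart monit) or reboot, as above.
```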
Hi,
After having some trouble removing shared folders and filesystems for a crashed external backup disk, I was able to clean everything up (partly manually from config.xml).
The only issue I have left is monit complaining when booting the system:
Any idea where to clean up the remaining mountpoint apparently still listed somewhere?
There are no more references in config.xml or in fstab.
You have users that are allowed to use the system but you don't want to vpn?
Yes. I want to limit my VPN access as much as possible to reduce the attack surface.
Anyone?
I am using the OpenVPN plugin with PAM authentication enabled. By default, every user in passwd is able to authenticate using PAM authentication.
Is it possible to limit this right to certain users only to minimise the attack surface?
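One standard way to do this with plain PAM (not OMV- or plugin-specific, so treat it as a sketch) is the pam_listfile module: an allow-list of usernames is checked before the normal auth stack. The service file name and allow-list path below are assumptions:

```shell
# Line to add near the top of the OpenVPN PAM service file
# (often /etc/pam.d/openvpn; name depends on the plugin's PAM setup):
#   auth required pam_listfile.so item=user sense=allow \
#        file=/etc/openvpn/allowed_users onerr=fail
# The allow-list is just one username per line; demo written to /tmp
# instead of the assumed real path /etc/openvpn/allowed_users:
printf 'alice\nbob\n' > /tmp/allowed_users
cat /tmp/allowed_users
```

With `sense=allow` and `onerr=fail`, any user not in the file (or any error reading it) is rejected, which is the conservative choice for reducing attack surface.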
It's good to rebuild a few times for the practice. Don't forget to buy a 2nd SD-card to backup/clone your boot drive. See the User Guide (in my signature, below) for the cloning process.
Yes, doing so already, I make SD card clones with ApplePi Baker regularly. Thanks for the tip though
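On Linux the same clone can be made with dd (ApplePi Baker is the macOS equivalent). A hedged sketch; /dev/mmcblk0 is an assumed device name, so verify with lsblk first:

```shell
# Real command -- do NOT run blindly; /dev/mmcblk0 is assumed to be the SD card:
#   sudo dd if=/dev/mmcblk0 of=~/omv-sd-backup.img bs=4M status=progress
# Harmless demo of the same invocation shape, on a scratch file:
dd if=/dev/zero of=/tmp/sd-demo.img bs=1M count=4 2>/dev/null
stat -c %s /tmp/sd-demo.img    # prints: 4194304
```

Restoring is the same command with `if` and `of` swapped, written to a card at least as large as the original.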
Thanks, I was curious if there was some way to restore the config on a new device. Guess I won't get bored during the holidays then
Now that I have played around with OMV on my RPi 2 for a while, I would like to upgrade to the ODROID HC2.
Does this mean a full OMV re-install from scratch, or is there a way to do a migration?
You are correct about 18.06. I forgot I reverted my RPi to try and figure this out AND mixed up the version number, even though I just locked the plugin to use 18.09. My bad. But my point still stands: 18.09 is working on my RPi.
Alright. I decided to make a full backup and take my chances. Guess what, everything is fine and Docker is running stable on 18.09.
Out of curiosity, what CPU model do you have in your RPi? Mine is an ARMv7 Processor rev 5 (v7l). Looking at the GitHub issue, the problem seems to be limited to the ARMv6 CPU models in older RPis.
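Checking which ARM core a given board has is quick; a sketch (the example outputs in the comments are typical values, yours will differ):

```shell
# Machine architecture as the kernel reports it
uname -m                              # e.g. armv7l on an RPi 2/3, armv6l on older Pis
# CPU model string (field name varies slightly across kernels/arches)
grep -m1 -iE 'model name|model' /proc/cpuinfo
```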
Nov 27 03:25:24 OMV-INTEL dockerd[1059]: time="2018-11-27T03:25:24.372150197-05:00" level=warning msg="Your kernel does not support cgroup rt period"
Nov 27 03:25:24 OMV-INTEL dockerd[1059]: time="2018-11-27T03:25:24.372172412-05:00" level=warning msg="Your kernel does not support cgroup rt runtime"
I get these 2 as well. Containers are fine. Don't know if it has always been like that, or since the latest update.
That issue isn't relevant. 18.06 is being installed and works. It is just touchy about getting to that point.
I am on 18.06. People started having issues when updating to 18.09 on their ARM-based systems, which is exactly the update waiting to be installed on my system.
There is nothing to fix with the plugin. The problem with the RPi is with the docker package/repos which we don't maintain. Backup your system and try it. I won't guarantee anything with the RPi.
Did some further research and it looks like it's best to wait a bit longer. For the RPi owners stumbling on this thread: https://github.com/moby/moby/issues/38175