Posts by Ener

    Damned! Every time you write an entry in the forums, you find a way to solve it on your own...

    The trick is - of course - /root/.ssh/config. You have to enter the correct keywords in the corresponding Host section. I simply added the following:

    Code
    Host omv6 omv6.domain.ending <IP>
       IdentityFile /etc/ssh/openmediavault-generatedkey

    Save and enjoy :)

    Thank you again! I got it to work now. Turns out my Host-entry for the repo-server in /root/.ssh/config wasn't sound. Before fixing it, I tried setting the BORG_RSH environment variable in all sorts of places (/etc/profile.d/borgbackup.sh, /etc/profile, /root/.bashrc), but when borg is run via sudo or the OMV plugin the environment variables specified there don't seem to get set on Debian/Raspberry Pi OS (aarch64). Thank you very much!
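    For completeness, the variable I was trying to set looked roughly like this (the key path matches the generated key from my setup; with the fixed Host entry it turned out to be unnecessary):

    Code
    export BORG_RSH='ssh -i /etc/ssh/openmediavault-generatedkey'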

    Can you explain this in a little more detail? As far as I understand, I've got a setup close to yours and got exactly the same error:

    Code
    exit code '2': Remote: Permission denied, please try again.
    Remote: Permission denied, please try again.
    Remote: backup-user@omv6: Permission denied (publickey,password).
    Connection closed by remote host. Is borg working on the server?

    One OMV6 instance is used as a BorgBackup server (I simply installed the BorgBackup plugin version v6.1.2) and another OMV instance (currently running OMV5) should connect as the BorgBackup client.

    I created a pair of SSH keys on OMV5 (System - Certificates - SSH) and copied the public key to the "backup-user" on the OMV6 machine. I did this via the "copy" option in the SSH key section (which creates a valid entry in the ~/.ssh/authorized_keys file on the target) and additionally I copied the pubkey via the clipboard to the "Add public SSH key" section in the user menu.

    I didn't really understand whether ryecoaaron's post above means that this should be sufficient for the BorgBackup plugin's SSH connection from OMV5 to OMV6 to be established with the newly created certificate's private key. From the shell I'm able to establish a connection to the OMV6 server with the newly created private key:

    Code
    root@omv5:/root# ssh -i openmediavault-generatedkey backup-user@omv6
    Last login: Tue May 17 16:33:12 2022 from <IP>
    backup-user@omv6:~$

    So, in general, I hope I may ask a couple of questions in this old thread:

    1. Is it possible to generate a BorgBackup remote repo from OMV5 to OMV6 via plugins or is there a hiccup due to the different plugin versions?
    2. What am I missing? It feels like I'm missing a configuration to tell the BorgBackup plugin to use the correct private key for the connection to the remote repo.
    3. wadoli: What do you mean by the Host entry in your /root/.ssh/config? As far as I understand, there is no need for any additional configuration if you're not using this user/certificate combination for an SSH connection from the client to the server.
    4. Creating the remote repo should be possible without any shared folders, right? I've been using OMV for a couple of years now and I'm familiar with the concept of shared folders, permissions, privileges and ACLs. But I'm new to BorgBackup ;)

    Edit: I'm preparing the backup for the migration from OMV5 to OMV6 on the client machine; that's the reason why I'm still on OMV5 on the client. So I'll hopefully end up having two OMV6 machines and a working Borg backup from my local NAS to the offsite NAS.

    Well, this is pretty straightforward, as I set it up with the default yml file (I only had to change the web port, since 8000 is already in use by Portainer):
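    (A sketch of the only change I mean - 8010 is just a free port on my host, and the service name is the one from the paperless-ng sample file:)

    Code
    services:
      webserver:
        ports:
          - "8010:8000"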

    In the network tab, all three containers are connected to the paperless_default network as expected.

    If I understand your situation/question:

    Deploying two containers from one yml file will place them on the same (custom) network. Deploying them separately, each from their own yml file, will put them on different networks. In Portainer there is a Network tab that will allow you to join them onto the same network. I haven't done this myself, so I don't know the details. Hope this helps.

    What you said is quite right but isn't exactly my issue. Please forgive me if I didn't explain it clearly. Allow me to try again: when I deploy a new project - let's take paperless-ng as an example, as it is a very simple one - it consists of three containers (one database container, one redis container, one app container); a new custom network (called paperless_default) is created automatically and all containers are placed in that network.

    As far as I understand the docs, all containers in the same network are allowed to look up all other containers in that network by their respective names, as defined in the corresponding yml file that was used to create the containers (and the network).
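    A quick way to test that name resolution from inside the network is something like the following (the container name is just what compose generated for my stack, yours may differ, and not every image ships these tools):

    Code
    docker exec -it paperless_webserver_1 getent hosts db
    docker exec -it paperless_webserver_1 ping -c 1 db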

    In fact, even this basic lookup inside the docker network doesn't work for me (nor does the lookup of public hostnames on the internet, but that is another step and not yet necessary at this point). Do you have any idea what could cause this lookup failure?

    I just tried it with another dockerized project (teedy), which runs quite fine when using the onboard database but will not work at all if I use it with a dedicated database container... I assume this is a lookup/reachability issue as well.

    Thanks for your answer and I think I got your point.

    But as far as I understand the docker-compose docs, a default network is created automatically as described in the first lines of the docs:

    Quote

    By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.

    What I don't understand: is there any configuration inside docker that prevents containers on the same network from reaching each other? Let's call it a docker "firewall" setting or something similar. I have no clue where this would make sense, but it could explain my issues. Is there another reason why one container would not be able to reach another?

    In "real life" basic network troubleshooting you would first look at layer 1, which is indisputably present here. Layer 2 and layer 3 are not clear to me - maybe a container has no IP address in that particular network? But what could prevent that? At least the db container I'd like to reach is connected to the newly created network and has an IP address (I checked this in the Portainer GUI), so it should be reachable by the webserver container. But again: why can't the db container even resolve itself via localhost?

    It looks like I messed up DNS for all newly created containers but have no idea where to check this.
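    For what it's worth, the first things I plan to check are the network itself and the resolver inside the containers - on a custom compose network the containers should use docker's embedded DNS at 127.0.0.11 (the names below are the ones from my paperless stack):

    Code
    docker network inspect paperless_default
    docker exec -it paperless_db_1 cat /etc/resolv.conf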

    Any help is highly appreciated :)

    I'm currently going insane over an issue in docker on openmediavault, and I can't find a solution by myself.


    Please allow me to describe the issue in a few words: when creating the docker containers for the document management system paperless-ng, the webserver container is unable to resolve the other containers. I tried to set up that stuff with docker-compose, following the installation guide.

    After creating the containers with the docker-compose up -d command, the webserver container reports "could not translate host name "db" to address: Temporary failure in name resolution", and even the db container is in the state "could not resolve "localhost": Temporary failure in name resolution".


    It seems like my docker installation doesn't allow any DNS lookups at all for containers in a bridge network, and I have no idea why. Does anyone know about any incompatibility between Portainer and docker-compose? Or does anyone have an idea where to dig into this issue? I've been googling around for a couple of days but unfortunately I'm obviously not hitting the right search string...
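    One guess I came across while searching, at least for the failing external lookups: pinning upstream DNS servers for the docker daemon in /etc/docker/daemon.json and restarting docker. I'm not sure this touches my root cause, so treat it as an assumption (the addresses are just examples):

    Code
    # /etc/docker/daemon.json
    {
      "dns": ["192.168.178.1", "1.1.1.1"]
    }
    # then: systemctl restart docker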


    I've had a comparable problem while testing a Mayan EDMS docker instance: the instance was running fine but was unable to resolve the Mayan homepage for its integrated update-check mechanism. Meanwhile, my Portainer and a Watchtower instance are running fine on the system bridge network, and at least Watchtower is able to look up hosts on the internet, as it sends mail and pulls new images as it should.


    Before I throw away my current setup and start with a clean install, I'd like to send an SOS to the community: does anyone have an idea where to look for the root cause?

    Just one hint from my experience with paperless docker on openmediavault 5: I tried to get it running for the last 2 weeks or so and had no luck. I always got the issue that the sudoers file permissions are incorrect inside the container.

    Today (as the very last idea) I updated the docker-compose binaries to the latest version (currently 1.27.4), following the official Linux guide: https://docs.docker.com/compose/install/ and it finally worked as expected.
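    For reference, the update roughly followed the commands from that guide (version pinned to the one I installed; check the guide for the current release):

    Code
    sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    docker-compose --version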


    Maybe this helps anyone.


    My answer to the question "Why paperless?": as the author states on his Git page, it just works!

    I have (currently) no need for a full-blown document management system like mayan-edms, but I'll definitely give that one a try in the future.

    As far as I understand the vrdp option in VirtualBox, you have to see it as a kind of console access to the virtual machine, similar to a physical screen connected to a physical video port (e.g. VGA) on a dedicated machine.
    There is no need for the guest OS to support RDP - or even simpler: there is no need for a guest OS at all!
    VBox will provide video output from the virtual machine via RDP on the port that you choose (the default is the port range 9000 to 9100, resulting in port 9000 as long as it is the first VM on your host).
    If you're fast enough, you will probably even see the virtual BIOS (I never tried that out...).
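    If you prefer the CLI over phpVirtualBox, the same setting can be checked or changed with VBoxManage (a sketch - "MyVM" is a placeholder for your machine's name):

    Code
    VBoxManage modifyvm "MyVM" --vrde on --vrdeport 9000-9100
    VBoxManage showvminfo "MyVM" | grep -i vrde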

    Hey guys, I've had the same problem today: I wanted to install a fresh Debian VM but was unable to log in to the vrdp session to use the graphical Debian installer.
    I was able to connect neither via RDP nor via VNC (using Ubuntu with the Vinagre client). After trying different network settings (NAT, bridged...), googling around, and reading docs on the OMV forum and in the VirtualBox manual, I found an easy solution for my "issue": when connecting to the machine via RDP, you have to specify a user in the connection window, even if you've chosen the authentication method "None" in the phpVirtualBox settings! Works like a charm! :whistling:


    Maybe I'm just the most stupid user in the world, but maybe this helps anyone with the same foolishness ;)

    It should be noted that only USB UPSes work. UPSes with a network interface cannot be managed with OpenMediaVault.


    Greetings
    David

    I know it's an old thread and probably a very stupid question, but I got a very old, very cheap UPS (Fiskars Power Rite Max) with a brand-new battery, and that UPS only has a serial connector. As far as I understood, the plugin is also able to deal with a serial connection, isn't it?
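    Since the plugin is based on NUT, I assume the serial setup would boil down to a ups.conf entry roughly like the sketch below - the driver and the upstype value are guesses for such an old unit, and the port is simply the first serial port:

    Code
    [myups]
        driver = genericups
        upstype = 7
        port = /dev/ttyS0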

    Have you ever tried unplugging your router for 10 seconds and plugging it back in - and any switches on your network too?


    Routers/switches can get hung up on broadcasting every once in a while.

    Man, once again a huuuuuge thanks for your support and an endless amount of patience in helping us stupid "wanna-be" administrators :whistling:
    Your tip to restart the router saved me a lot of time troubleshooting my Windows clients (7 and 10). I use a Fritzbox 7590, and after a reboot all Samba shares are visible in the network overview again.

    Backup only supports the normal installation method used by the OMV iso (which does not support uefi). That said, it is only the bootloader. The important part of the backup is done with rsync. So, recreate the partitions, restore the backup, install the bootloader from a rescue iso. You might be able to re-install fresh and just rsync the backup over the new install.
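    (For reference, on a plain BIOS/MBR install I understand the "install the bootloader from a rescue iso" step as the usual chroot + grub-install dance, roughly like this - the device names are assumptions:)

    Code
    mount /dev/sda1 /mnt
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt grub-install /dev/sda
    chroot /mnt update-grub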

    Yep, that was my fault: I definitely wanted to use EFI during install and found out later that this isn't all that usual so far. For documentation: I was not able to restore my backup in the way you described, but as my NAS was previously declared just for testing, I did a clean reinstall and I'm now another tester of OMV4 *g* So let's see if it is stable enough for my continuous testing/productive environment :whistling::rolleyes::thumbup:


    Thanks a lot, ryecoaaron, for your help and your work on omv-extras! You guys are doing an awesome job!!

    I assume, since you have a (U)EFI partition, your old drive used a GPT scheme instead of the old MBR scheme ... so the command dd if=/mnt/backup/grub_parts.dd of=/dev/sda bs=512 count=1 is wrong, because GPT uses 34x512 bytes ...


    Sc0rp

    Yep, you're absolutely right, I didn't have that in mind. But unfortunately, the backup option seems to have been created for a BIOS installation, as the grub_parts.dd file is only 512 bytes and grub.dd only 446 bytes. Seems like I've got to do a clean reinstall and choose another backup option (and must not use a Verbatim SSD :rolleyes:).

    Hey guys, I don't know if it's okay to recycle this old thread, but I feel I'm a bit too unimportant to open new threads and fill the forum with my personal trash. Feel free to move my post into a new thread if necessary!


    I had a system drive failure some days ago and the OS drive is completely lost. I was using a 128GB Verbatim SSD and it failed after roughly 2 years and a few days (after the warranty ran out, of course!)...
    In general my OMV server was intended to be a testing environment, but you get used to it once you have it, and as it was running pretty fine for over a year or so, I'm missing some stuff now (mostly the Syncthing config, local certificates etc.) and need to do a restore. I used the System Backup plugin and have some (more or less) current backups of the OS drive I'd like to restore.


    What I already did: I replaced the defective 128GB Verbatim SSD with a 1TB WD HDD, installed Debian 8.9.0 from ISO (in UEFI mode, as I did with the SSD before) and OMV from the repository, copied the last backup to a USB HDD and downloaded the current SystemRescueCD ISO to boot from that image. I tried to follow ryecoaaron's excellent manual on backing up a live system but fail at one point:


    After

    Code
    dd if=/mnt/backup/grub_parts.dd of=/dev/sda bs=512 count=1


    all partitions on the HDD are lost (before, there were six partitions, where sda1 is the EFI system partition, sda2 6.5G, sda3 2.8G, sda4 63.9G swap, sda5 381.5M and sda6 857.5G). That's why the next step with

    Code
    mkfs.ext4 /dev/sda1

    will fail as there is no sda1.


    In general, I'm okay with doing a clean reinstall and setting up all that stuff again, but if you have an idea what I'm doing wrong, I'll be very thankful for a hint or a solution to my dilemma. :whistling:
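    In case it helps anyone else with a GPT/UEFI install: a single 512-byte dd cannot carry a GPT partition table, so backing up and restoring the table would rather go through something like sgdisk (a sketch, device and file names assumed; the first command on the old, working disk, the second on the replacement):

    Code
    sgdisk --backup=/mnt/backup/sda_gpt_table /dev/sda
    sgdisk --load-backup=/mnt/backup/sda_gpt_table /dev/sda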

    ehm, how do I actually create a Jessie UEFI image to boot, and then install it on a UEFI drive?


    R.I.P. OMV, you were an excellent NAS software but couldn't keep up with the modern-times booting terror from Micro$oft.

    You will find bootable ISO files on debian.org, as ryecoaaron said in his post. Use etcher.io to make a bootable USB stick, boot in plain UEFI (not legacy) mode and you're done :thumbup: I think there is no need for a dedicated UEFI OMV ISO - keep coding on features and bugfixes, you awesome omv guys <3

    Hey, you helped me a lot. I tried to get the docker plugin working for the last 4 days and then found your post on Google... I've got the same problem on kernel 4.9.

    I realize that but you are quoting an old post. There is another thread where I made changes to the plugin to download the certs from the trusted source using mozroots - commit

    Yes, I realized later that your post is from January. Thanks a lot for the hint with the commit! But just to be sure: I have to download the certs with mozroots manually, right?


    Edit: Got it, this should be the post you mentioned: Duplicati failing to see content - [SOLVED]
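    In case anyone else wonders what "manually" would look like: as far as I know, the mozroots import is a one-liner, run as root (a sketch - the plugin may already take care of it):

    Code
    mozroots --import --machine --sync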

    cert-sync doesn't work on the debian version of mono since it is too old.

    Got that. Thanks a lot and once again: You guys are doing a great job with all these plugins!!

    When you are adding the cloud service, there is an option on the screen right below to add the cert. No need to do anything from the Linux command line.

    Today, on my lazy Sunday, I was browsing the new OMV plugins and was very happy to find the new Duplicati plugin.
    Unfortunately, I ran into the same problem that was described by tylerism.


    I've read your answer, ryecoaaron, so first of all: thanks for caring! But sorry, there are two disadvantages to using the advanced option with -accept-specified-ssl-hash=<ENTER CERT HASH HERE>:

    • It will not work, as the hash changes every time you use the "check connection" button. Probably M$ uses different redundant servers for accessing their services, and therefore there are different certificates, which means different hashes.
    • From a security perspective, it is not the best idea to create accept rules for unknown certificates, as this is not the way certificates are intended to work. The proper solution would be to verify the chain of trust through the root and intermediate certificates. What the mono-specific command cert-sync /etc/ssl/certs/ca-certificates.crt would do is import the root and intermediate certificates from a trusted source, so tylerism's way is the most common and secure solution. Actually, I have no idea why this command is not working on a "usual" OMV installation. Perhaps anyone else out there has an idea? :)

    Added information: I just found the thread about the Amazon Cloud Drive certificate error, and I think the different (older) Duplicati version is likely the core issue here. I'll try to add the mono repo and will test again.