Posts by unsoiled_iciness

    Running omv 7 (before the recent kvm plugin update), I experienced the same problem (manual 1st run worked, scheduled run did not).


    But I have no spaces in the file path.


    SHELL=/bin/sh

    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

    # m h dom mon dow user command

    0 3 */18 * * root omv-backup-vm -v vmx -d '/srv/dev-disk-by-uuid-95f64b7-583c-4fb7-a5ff-dd7da868a706/backup_vmx/' -k 1 -p -s 2>&1 | mail -E -s "Cron" -a "From: Cron Daemon <root>" root >/dev/null 2>&1



    However, I did notice that my command is "0 3 */18 * *".


    I want the backup to run at 3am on the 18th of every month, but it was missed last night.


    Perhaps I have UTC/local time wrong?
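
    If I'm reading the crontab day-of-month field right, "*/18" does not mean "on the 18th": step values count from the start of the range, so "*/18" matches days 1 and 19 of the month. A sketch of the difference (schedule fields only, same command as above):

    # runs at 03:00 on days 1 and 19 of each month (what "*/18" does)
    0 3 */18 * * root omv-backup-vm ...

    # runs at 03:00 on the 18th of each month (what was intended)
    0 3 18 * * root omv-backup-vm ...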

    I have a VM running six containers including databases and whatnot.


    When I manually run a stop.sh against the containers in the VM, it takes about 20 seconds before I get a shutdown (off) message from all six containers. If I just send shutdown to the VM without stopping the containers first, there also appears to be a 15-second delay or so before the VM shuts down.


    However, when the KVM schedule shuts down the VM before backup, it only seems to take 2-3 seconds.


    Is this okay? If not, how do I make the KVM backup give the VM more time to shut down?
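
    I don't know how the plugin times its shutdown internally, so as a manual workaround sketch one could shut the VM down first and only back it up once it is really off (the domain name vmx and the 60-second limit are just examples):

    virsh shutdown vmx                        # ask the guest for a clean ACPI shutdown
    for i in $(seq 1 60); do                  # wait up to 60 seconds for it to power off
        [ "$(virsh domstate vmx)" = "shut off" ] && break
        sleep 1
    done
    omv-backup-vm -v vmx ...                  # then run the backup as in the crontab above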

    I initially received an error 'filesystem flags changed'.


    So I rebooted the system a few times and ended up with no option to decrypt the disk in omv's encryption plugin options. Yet, the disk is available to encrypt with a new label and password.


    Under "Disks", OMV [6.9.14-1 (Shaitan)] recognizes the SSD, but under "filesystems" the system shows as missing.


    When I run lsblk, the disk is recognised (with no filesystem listed), and hd -n 512 /dev/sdf returns:


    "fsck.ext2: Input/output error while trying to open /dev/sdf"

    "superblock could not be read..."


    The output suggests trying an alternate superblock, but I have not tried this yet.
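
    For reference, the usual sequence for that (a sketch, assuming the filesystem on /dev/sdf is ext4; given the I/O errors it may be safer to image the disk with ddrescue first):

    mke2fs -n /dev/sdf        # -n is a dry run: it only prints where the backup superblocks would be
    e2fsck -b 32768 /dev/sdf  # retry the check using one of the listed backup superblocks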


    OMV's SMART test status was showing good.

    Here is my docker compose for virt-manager. Adjust the paths and ports to your system.
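
    A minimal sketch of it, assuming the mber5/virt-manager image suggested earlier in the thread (the container-side web port is an assumption and must be checked against the image's documentation; the host port is only an example):

    services:
      virt-manager:
        image: mber5/virt-manager
        container_name: virt-manager
        restart: unless-stopped
        ports:
          - "8185:80"   # host:container - the container-side port is an assumption, check the image docs
        volumes:
          - "/var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock"   # talk to the host's libvirt
          - "/var/lib/libvirt/images:/var/lib/libvirt/images"               # see the host's VM images
        devices:
          - "/dev/kvm:/dev/kvm"                                             # hardware virtualisation passthrough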


    Thanks! I wouldn't have had a clue otherwise.


    So far so good.


    I got docker (and virt-manager) working -


    I uninstalled docker (under omv-extras), then compose, then omv-extras.


    Then, via weTTY, I installed omv-extras from the command line (wget -O - https://github.com/OpenMediaVa…ckages/raw/master/install | bash).


    Then, in the OMV GUI, I enabled the docker repo, installed compose, then configured compose (just the files section) and clicked save (or restart docker, or vice versa - I don't recall). I got an error, but after saving again it showed installed and running under the Docker status.
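
    To confirm everything ended up in a sane state, a few generic checks (a sketch, not the plugin's own test):

    apt-cache policy docker-ce    # should list a candidate version from the Docker repository
    systemctl status docker       # the docker service should be active (running)
    docker compose version        # the compose plugin should report a version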


    DNS appears to be working but I'm going to try getting pihole-unbound running again.


    Thanks again.

    Post the full error. Without seeing the error it is difficult to receive help.

    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; apt-get --yes --autoremove purge docker-ce docker.io containerd.io containerd docker-ce-cli docker-compose-plugin docker-compose 2>&1': Reading package lists...


    Building dependency tree...


    Reading state information...


    Package 'docker-ce' is not installed, so not removed

    Package 'docker-ce-cli' is not installed, so not removed

    Package 'docker-compose-plugin' is not installed, so not removed

    E: Unable to locate package containerd.io

    E: Couldn't find any package by glob 'containerd.io'

    E: Couldn't find any package by regex 'containerd.io'



    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; apt-get --yes --autoremove purge docker-ce docker.io containerd.io containerd docker-ce-cli docker-compose-plugin docker-compose 2>&1': Reading package lists...


    Building dependency tree...


    Reading state information...


    Package 'docker-ce' is not installed, so not removed

    Package 'docker-ce-cli' is not installed, so not removed

    Package 'docker-compose-plugin' is not installed, so not removed

    E: Unable to locate package containerd.io

    E: Couldn't find any package by glob 'containerd.io'

    E: Couldn't find any package by regex 'containerd.io'

    in /usr/share/openmediavault/engined/rpc/compose.inc:238

    Stack trace:

    #0 /usr/share/php/openmediavault/rpc/serviceabstract.inc(620): OMVRpcServiceCompose->{closure}('/tmp/bgstatusXN...', '/tmp/bgoutputJb...')

    #1 /usr/share/openmediavault/engined/rpc/compose.inc(254): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))

    #2 [internal function]: OMVRpcServiceCompose->reinstallDocker(NULL, Array)

    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)

    #4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('reinstallDocker', NULL, Array)

    #5 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Compose', 'reinstallDocker', NULL, Array, 1)

    #6 {main}

    Problem solved (albeit I created another one).


    My OMV server machine has four LAN ports, and I had the ethernet cable in the wrong one. This was due to changes in the OMV network port settings between backups: at some point I restored a backup without moving the physical cable accordingly.


    Now, on this better-working version (I can access my KVM VM), Docker is not installed (in OMV, I mean, not in the KVM VM).


    In OMV Compose, the Docker status shows 'Not installed', but I can't reinstall it because apt can't find the packages it's trying to purge. Any suggestions?

    I tried accessing the VM from another (ie non-QubesOS) machine on the network. Same error (no route to host). So I assume QubesOS is irrelevant.


    Did you read here about the different types of network interface in the KVM plugin?


    The VM was/is configured to use a macvtap network I created. I tried stopping the VM, deleting and recreating a new network. Still no route to host.


    However, I have managed to connect to the VM with Spice and noVNC. The error I previously received when using the OMV KVM plugin link was due to it using a domain name and https. Using http and the IP of the OMV server, I can get to the consoles. Does that suggest I have a problem with my router not resolving local domain names (all on the local network)?
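
    To check whether it really is a local name-resolution problem, something like this from a LAN client should tell (a sketch, using myserver.local as in the plugin's links):

    getent hosts myserver.local   # does the resolver return the OMV server's LAN IP at all?
    ping -c 3 myserver.local      # does the name reach the same address the raw IP does?
    # note: .local names are normally answered by mDNS (avahi), not by the router's DNS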


    Anyway, I can't get into the VM using noVNC because the prompt for the password times out before I can finish typing the password (it's very long). Also, is it safe to type passwords over http?


    I would suggest that you install the mber5/virt-manager docker on the OMV.


    How should I configure these three variables in docker-compose?


    volumes:
      - "/var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock"
      - "/var/lib/libvirt/images:/var/lib/libvirt/images"
    devices:
      - "/dev/kvm:/dev/kvm"


    If I don't have much luck I'm thinking of just building omv from scratch.


    Or perhaps before doing an omv reinstall, I should try backing up the VM, deleting and then restoring it? The VM was a lot of work and I was about to create a backup (I did create a snapshot before these issues but I have no idea how to use that feature).

    If OMV is its own server (not a VM), and you can access OMV with other systems, then the problem is most likely with the new QubesOS setup, based on what I managed to gather from your post.

    Yes that.

    omv is its own server - my NAS server (running on bare metal)

    The VM I can't access is running in/on the KVM plugin in omv

    QubesOS is my workstation (recently upgraded) that I use to connect via LAN to OMV and/or the KVM VM running on/in OMV.


    As for the pihole docker, are you running it on a macvlan? It requires port 53 which will conflict with OMV so it must use a macvlan. There is a guide in the guides section of the forum for doing this.

    Yes. It's on a macvlan. Was running for a while. But I don't know if things went south when I added syncthing, or something else.


    Anyway, I might head over to the QubesOS forum as you suggest, because that's the most significant change I made that resulted in the 'Destination Host Unreachable'. It's just strange that I can ping the OMV host (from my new Qubes machine), but not the VM running in it.

    I recently upgraded the computer (essentially a new computer) I use to access my omv server (both on the same local network) and as a result, can no longer access my KVM VM.


    When I try to ping my only KVM VM I get 'Destination Host Unreachable', and when I try to ssh in, 'No route to host'. (I get the same error when I try to ping my pihole-unbound docker container. Pihole stopped working a few days before I upgraded the access computer; I'm temporarily not using Pihole while I try to resolve the current issue.)


    In the OMV KVM plugin, both the Spice and noVNC links give 'we can't connect to the server at myserver.local'.


    The address in the browser for the opened link shows a port different from the one in OMV, for example:

    - omv Spice port: 5901

    - the browser shows: https://myserver.local:8091/spice_auto.html?resize=remote


    - I have fail2ban in the KVM VM, plus 2FA.

    - Transmission and syncthing dockers are working fine, I can access them via the portainer console.

    - Pihole-unbound docker is unhealthy (e.g. github: no route to host, and pihole-FTL: no process found errors). But I can access it via the console.


    Changes I recently made were:

    - The computer runs QubesOS, which uses Xen VMs to access the network - I just restored the old VMs onto the new machine. This resulted in a MAC address change. I updated my router, giving the new computer the static IP of the old computer but with the new MAC address.

    - Removed an unused NVMe card from the server (which I assume had no data on it - it only showed a few KB used).

    - Backed up (by cloning) the server's USB drive, but when redeploying the KVM VM I forgot to start the volumes and ISO pools - so the VM ran without those for a few days, I think.

    - I lost internet connection, but prior to that I had successfully used pihole-unbound as my DNS, pointing my router's DNS to the Pihole docker. When things stopped working, I managed to get the internet working again by setting all the DNS options in my router to 1.1.1.1 (the router is OpenWRT).


    Thanks for reading
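
    A sketch of basic reachability checks from the workstation (192.168.1.50 is only a placeholder for the VM's address):

    ip neigh show                 # look for the VM's address - REACHABLE vs FAILED/INCOMPLETE
    ping -c 3 192.168.1.50        # test from a second LAN machine too, not only from the OMV host:
                                  # with a macvtap interface the host usually cannot reach its own guest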

    Does your password have a ' or " or $ in it?

    No. But it does have other math symbol characters. I tried adding a purely alphanumeric key but it wouldn't accept it, presumably because I had to use my current key (which won't pass when checked).


    Does this mean I have to wipe the drive and encrypt again with new keys?


    Edit: I've got a very bad memory - I see from my own earlier post (RE: LUKS disk encryption plugin) that using a purely alphanumeric passphrase would fix it - but I'd rather not wipe the drive now since it would mean losing a day's work.
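
    For reference, both checks can be done interactively from the shell, which side-steps the plugin's quoting of special characters (assuming the device is /dev/sde as in the plugin's error):

    cryptsetup luksOpen --test-passphrase /dev/sde   # prompts for the passphrase and only verifies it
    cryptsetup luksAddKey /dev/sde                   # prompts for an existing passphrase, then adds a new (e.g. alphanumeric) one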

    I can Lock/Unlock my encrypted drive okay but can't successfully test my key or add keys.


    For testing my key I get (I've replaced it below with <my_password>):


    500 - OK error

    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; /bin/bash -c 'echo -n '<my_password>' | cryptsetup luksOpen -v --test-passphrase '/dev/sde' --key-file=-' 2>&1' with exit code '2':



    -----

    I'm on OMV6 and all updates are installed.

    Thanks everyone for your advice. That last video link was really great - I'll be watching that many times I think.


    I'm probably suffering from some sort of 'medical student disease' as I learn about security practices/issues.


    I think perhaps Syncthing, using its relay service, might be the best for my case. I don't have to use wi-fi or open ports, and data is end-to-end encrypted.

    What NPM vulnerabilities are you referring to? Can you put a link?


    I use Syncthing to transfer files from smartphone to server. I do this only over wi-fi, without using external relays. Don't you trust your Wi-Fi network?


    I edited my original post, as the security vulnerabilities have, I think, since been fixed:


    See link here:

    [External content: embedded YouTube video, www.youtube.com]


    And wi-fi issues (VPNFilter virus?):

    What is WPA3? Is WPA3 secure and should i use it? | Comparitech
    Is WPA3 secure? This is an important question in wi-fi security after a serious vulnerability was found in Wi-fi Protected Access 2 (WPA2), the security…
    www.comparitech.com


    At the end of the above article the author suggests for home networks 'Stop using wi-fi: Connect to the internet via an ethernet or data (3/4G) connection at home, or use mobile data, particularly for sensitive transactions.' and 'turn off your wifi connection if not using it'.


    Anyway, perhaps using wi-fi periodically would be more secure than having it on 24/7.

    Task objective:

    - backup (or perhaps syncing) phone photos, contacts, calendar, 2FA, password manager to omv


    Context:

    - one user

    - omv main use is LAN media server

    - OpenWRT router

    - static IP

    - bluetooth/wi-fi off (WPA3 is available in phone/router but not used due to security risks. But perhaps turning it on once a day to sync would be more secure than opening ports to the entire world 24/7?)

    - phone uses self-maintained wireguard vpn (deployed on my externally hosted cloud VPS 'server' - I know wireguard is peer-to-peer but you get what I mean), and is available for any device


    Considerations:

    - Apparently USB flash drives are not a secure medium for file transfers, and neither is wi-fi, so transferring via the internet is more secure - unless, perhaps, I plug the phone into the server with a USB cable?

    - I have no need to access my server from outside the house (for example, to administer it or access files).

    - On my previous practice server, I successfully deployed the Nextcloud AIO, Nginx Proxy Manager and fail2ban dockers, and opened ports 80 and 443 (and another for Nextcloud Talk, but could not connect the app, perhaps because I didn't forward it in NPM).

    - I have successfully used NPM to limit incoming connections to the IP address of my WireGuard server (I think I can also do this in the router; see the sketch after this list). So Nextcloud was able to do instant uploads from phone to OMV via my WireGuard VPS, and synced calendar and contacts.
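
    That access list is roughly equivalent to an nginx rule like this sketch (203.0.113.10 is a placeholder for the VPS address, and the upstream is hypothetical):

    location / {
        allow 203.0.113.10;                 # only the WireGuard VPS may connect
        deny all;                           # everyone else gets 403
        proxy_pass http://nextcloud:11000;  # placeholder upstream
    }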


    So, functionally, the above is just what I want, but...


    All the info sources I've seen warn of the risks of opening ports, and I've heard of long-standing unpatched vulnerabilities in Nginx Proxy Manager (since patched). So I'm starting to worry and wonder if there is a better way.


    Eg:

    - syncthing

    - headscale/tailscale - but port-forwarding?

    - just wireguard - port forwarding?


    Syncthing looks good due to not having to open ports, but privacy is also a desire of mine, and the public relays know which devices are talking to which. I'm not sure how much of an issue this is, but the solution is to run your own relay, which brings me back to the same problem: opening ports.


    I have to laugh: I'm staring at my phone and OMV server sitting right next to each other, and all I want to do is get files from one to the other. Surely I'm making this far harder than it should be? Just looking for a simple, secure, private method.


    Thanks for taking the time to read this. Any advice would be greatly appreciated.

    I'm curious to know why you insist so much on a rootless installation?

    Ah ha! That was the other question I DIDN'T write - 'should I really want to go rootless?'


    For security through better isolation. But is it really necessary?


    My main reason for running OMV is as a home media server. However, I'd like to run Nextcloud so I can sync my phone photos, calendar and contacts. I'd also love to run a chat service to replace Signal. But I'm concerned about opening ports and the security risks involved - so I thought a rootless Nginx Proxy Manager, Nextcloud and chat service would be a good idea...
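
    For what it's worth, rootless Docker is set up per user rather than per container; a sketch, assuming the docker-ce-rootless-extras package is installed (note that binding privileged ports like 80/443, which NPM wants, needs extra configuration in rootless mode):

    dockerd-rootless-setuptool.sh install                    # set up a user-level docker daemon
    systemctl --user enable --now docker                     # run it under the unprivileged user
    export DOCKER_HOST=unix:///run/user/1000/docker.sock     # point the client at it (1000 = your UID)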