Posts by jodumont

    Sorry to jump in with a different setup (RAID+LUKS+Docker), but with ext4 instead of BTRFS.

    But I also believe that docker and docker-compose should not start containers if the drive is not up, even if the restart policy is set to always or unless-stopped.
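    A minimal sketch of one way to enforce that ordering with systemd (the path /srv/data is a placeholder; point it at your actual data mount):

```shell
# make docker.service wait for the data mount before starting
# (placeholder path: adjust RequiresMountsFor to your mount point)
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/wait-for-data.conf <<'EOF'
[Unit]
RequiresMountsFor=/srv/data
EOF
systemctl daemon-reload
```

    With this drop-in, docker (and therefore the always/unless-stopped containers) will not start until the mount is available.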



    When I was using TrueNAS SCALE, that issue never occurred.

    The major difference I observe is that
    on TrueNAS SCALE my drives were unlocked via a keyfile, while on OpenMediaVault they are unlocked with a password.


    So I added keyfiles to my RAIDs and voilà!
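    For reference, a minimal sketch of adding a keyfile to a LUKS device so it can unlock at boot (the device path /dev/md0 and key path are placeholders; adjust them to your setup):

```shell
# generate a random key, restrict it to root, and enrol it as an extra LUKS key
mkdir -p /etc/luks
dd if=/dev/urandom of=/etc/luks/raid.key bs=512 count=1
chmod 600 /etc/luks/raid.key
cryptsetup luksAddKey /dev/md0 /etc/luks/raid.key
# then reference the keyfile in /etc/crypttab, e.g.:
# md0_crypt  /dev/md0  /etc/luks/raid.key  luks
```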

    Just had the same issue by following this guide


    Like bunkerman I had an issue with /etc/nvidia-container-runtime/config.toml

    Hi, this is not working for me anymore in docker (tested with Jellyfin and Immich):


    my solution has been to revert my change only in /etc/nvidia-container-runtime/config.toml and put back the original file,

    and uncomment the second line: accept-nvidia-visible-devices-envvar-when-unprivileged = true

    Now docker runs perfectly with nvidia.
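    A sketch of that edit from the shell (this assumes the stock config merely has the line commented out; check the result with grep before restarting anything):

```shell
# uncomment the envvar option in the nvidia container runtime config
sed -i 's/^#\s*accept-nvidia-visible-devices-envvar-when-unprivileged/accept-nvidia-visible-devices-envvar-when-unprivileged/' \
  /etc/nvidia-container-runtime/config.toml
grep accept-nvidia /etc/nvidia-container-runtime/config.toml
systemctl restart docker
```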

    Lately I installed openmediavault-remotemount,

    then after the latest updates (might not be totally related)

    I lost access to the webui of my OMV.


    After reconfiguring the workbench via omv-firstaid
    I received a message that nginx cannot start.

    Quote from journalctl -xeu nginx.service

    Sep 17 06:56:59 omv nginx[26061]: 2024/09/17 06:56:59 [emerg] 26061#26061: unknown directive "dav_ext_lock_zone" in /etc/nginx/conf.d/openmediavault-lockzone.conf:1

    Sep 17 06:56:59 omv nginx[26061]: nginx: configuration file /etc/nginx/nginx.conf test failed

    Sep 17 06:56:59 omv systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE

    Code: /etc/nginx/conf.d/openmediavault-lockzone.conf
    dav_ext_lock_zone

    For now, I fixed it by commenting out the line.
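    In shell form, the workaround looks like this (the dav_ext directives come from nginx's dav-ext module, so the proper fix is probably installing or re-enabling that module; commenting out is only a stopgap):

```shell
# comment out the offending directive, then verify the config and restart nginx
sed -i 's/^dav_ext_lock_zone/#dav_ext_lock_zone/' \
  /etc/nginx/conf.d/openmediavault-lockzone.conf
nginx -t && systemctl restart nginx
```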


    But I wonder if I'm the only one with that issue

    With Debian, OMV, and Proxmox, from time to time my machine takes longer to boot because it suddenly looks for RAID (md).

    If you are sure you don't use software RAID (mdadm),

    you could remove that step from your initialization process.


    Code
    # list the md* files shipped by initramfs-tools, remove them, then rebuild
    ls -R /usr/share/initramfs-tools/* | grep ^md
    find /usr/share/initramfs-tools -name 'md*' -delete
    update-initramfs -u

    So I decided to go with the flow, the flow of OMV, but also with more security, by using Podman instead of Docker. But when I try to run podman as a user, I receive this error message:

    $ podman ps

    Error: cannot find newuidmap: exec: "newuidmap": executable file not found in $PATH


    This can easily be fixed by installing the uidmap package:

    apt install -y uidmap


    which is also used by lxc, docker and rootlesskit to run containers as a non-root user.

    While I also look forward to running omv directly on zfs like proxmox does, I agree with ryecoaaron about its advantages for now.

    if you want a snapshot system, go with btrfs, or even better use the plugin and do backups of your root ;)
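    For illustration, a minimal btrfs snapshot of the root subvolume might look like this (the subvolume layout and /.snapshots path are assumptions; adjust to yours):

```shell
# create a read-only, dated snapshot of the root subvolume
mkdir -p /.snapshots
btrfs subvolume snapshot -r / /.snapshots/root-$(date +%F)
btrfs subvolume list /
```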


    overall, the main advantages I could see of zfs are fault tolerance and, maybe, caching for better performance, but again:

    - for performance, you can cache most of OMV in RAM with the FlashMemory plugin

    - and for fault tolerance you should do backups


    I'm not saying the backup plugin is the only way to back things up. Just pointing out that at least one plugin doesn't support zfs on root.

    does the backup plugin support btrfs?

    for what tasks/purpose exactly? in past forum posts alternatives for file management (i.e. webtop) have been recommended already

    I started the post with exactly that, by saying: OBS-Studio, to exploit the GPU of my OMV.


    but anyway, since computers can multitask, I suppose I should get multiple computers to do multiple tasks:

    - one to store files

    - one to record video

    - one to edit video

    - one to write email


    I'll probably just return to my OMV on top of Proxmox


    thanks for your advice.

    First, I would like to apologize for making an OMVstein; but for a project I would really appreciate using OMV as storage while being able to have a local GUI to run OBS-Studio on it from time to time.


    Everything went well until I installed sddm:

    - installing sddm removes openmediavault



    - and if I reinstall openmediavault, sddm is removed


    Does somebody know a way to make this work?
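    To see which dependency forces the removal, a dry run can help (simulation only, nothing is actually installed or removed):

```shell
# simulate the install and show what apt would install/remove
apt-get install -s sddm | grep -E '^(Inst|Remv)'
# inspect the package relationships that could cause the conflict
apt-cache depends openmediavault | grep -iE 'conflict|break'
```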

    I shared the root (/) of my internal drive to be able to make a backup of my data (not the system) via USB-Backup, but my USB drive is not available under Devices.

    Mainly my devices are ZFS, both internal and external (USB).

    So I was wondering how I could make this happen?

    Does it play well with docker?

    yes; it works well with my docker containers, since I use a reverse proxy for all my services and redirect only https

    Why do you have these outgoing rules?

    good question; mainly because they were in the original post from tekkb, but also because I like trouble :P

    neither http nor https uses UDP.

    HTTP/3 (aka QUIC) uses UDP.

    you have really large networks which are allowed to access http

    my OMV is a laptop, so it moves from local network to local network, and these rules are also a base for all the OMVs I install; my goal is more to ensure the omv-gui is only reachable on the local network (not routed to the internet), and anyway fail2ban is active behind it.

    Old subject, but I was passing by here. Of course OMV should not be the frontline firewall of your network, but OMV, like your workstation (Windows, Mac or Linux), your phone (Android or iPhone) and your IoT devices, should have a firewall, because in the end the weakest link in your network will be attacked and used as a pivot to attack the others.


    an old but still good post about how to configure the firewall is available here

    RE: Help setting up firewall (iptables)


    Here is an example of my rules:


    HINT: before starting, I would recommend you change, at least temporarily, your auto-logout time to at least 30 minutes, so you don't stress about being kicked out in the middle of your editing.

    (System -> General Setting -> Web Administration -> Auto logout)


    ## INPUT
    | Direction | Action | Family | Source | Port | Destination | Port | Protocol | Extra options | Comment |
    | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
    | INPUT | ACCEPT | IPv4 | - | - | - | - | All | -m conntrack --ctstate ESTABLISHED,RELATED | ESTABLISHED,RELATED |
    | INPUT | ACCEPT | IPv4 | - | - | - | - | All | -i lo | LOOPBACK |
    | INPUT | ACCEPT | IPv4 | 192.168.0.0/16 | - | 192.168.42.42 | - | ICMP | | PING |
    | INPUT | ACCEPT | IPv4 | - | 22 | 192.168.42.42 | - | TCP | | SSH |
    | INPUT | ACCEPT | IPv4 | 192.168.0.0/16 | - | 192.168.42.42 | 8006 | TCP | | WEBUI |
    | INPUT | ACCEPT | IPv4 | 192.168.0.0/16 | - | - | 137 | UDP | | SMB/CIFS |
    | INPUT | ACCEPT | IPv4 | 192.168.0.0/16 | - | - | 138 | UDP | | SMB/CIFS |
    | INPUT | ACCEPT | IPv4 | 192.168.0.0/16 | - | - | 139 | TCP | | SMB/CIFS |
    | INPUT | ACCEPT | IPv4 | 192.168.0.0/16 | - | 192.168.42.42 | 445 | TCP | | SMB/CIFS |
    | INPUT | ACCEPT | IPv4 | 192.168.0.0/16 | - | - | 631 | TCP | | CUPS |
    | INPUT | ACCEPT | IPv4 | 173.245.48.0/20 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 173.245.48.0/20 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 103.21.244.0/22 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 103.21.244.0/22 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 103.22.200.0/22 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 103.22.200.0/22 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 103.31.4.0/22 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 103.31.4.0/22 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 141.101.64.0/18 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 141.101.64.0/18 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 108.162.192.0/18 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 108.162.192.0/18 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 190.93.240.0/20 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 190.93.240.0/20 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 188.114.96.0/20 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 188.114.96.0/20 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 197.234.240.0/22 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 197.234.240.0/22 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 198.41.128.0/17 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 198.41.128.0/17 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 162.158.0.0/15 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 162.158.0.0/15 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 104.16.0.0/13 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 104.16.0.0/13 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 104.24.0.0/14 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 104.24.0.0/14 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 172.64.0.0/13 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 172.64.0.0/13 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 131.0.72.0/22 | - | 192.168.42.42 | 443 | TCP | | HTTPS via CF |
    | INPUT | ACCEPT | IPv4 | 131.0.72.0/22 | - | 192.168.42.42 | 443 | UDP | | HTTPS via CF |
    | INPUT | REJECT | IPv4 | - | - | - | - | All | | REJECT |


    ## OUTPUT
    | Direction | Action | Family | Source | Port | Destination | Port | Protocol | Extra options | Comment |
    | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
    | OUTPUT | ACCEPT | IPv4 | - | - | - | - | All | -m conntrack --ctstate ESTABLISHED,RELATED | ESTABLISHED,RELATED |
    | OUTPUT | ACCEPT | IPv4 | - | - | - | - | All | -o lo | LOOPBACK |
    | OUTPUT | ACCEPT | IPv4 | - | - | - | - | ICMP | | PING |
    | OUTPUT | ACCEPT | IPv4 | - | - | - | 53 | TCP | | DNS |
    | OUTPUT | ACCEPT | IPv4 | - | - | - | 53 | UDP | | DNS |
    | OUTPUT | ACCEPT | IPv4 | - | - | - | 8006 | TCP | | WEBUI |
    | OUTPUT | ACCEPT | IPv4 | - | 123 | - | - | UDP | | NTP |
    | OUTPUT | ACCEPT | IPv4 | - | - | - | 443 | TCP | | HTTPS |
    | OUTPUT | ACCEPT | IPv4 | - | - | - | 443 | UDP | | HTTPS |
    | OUTPUT | ACCEPT | IPv4 | - | 53 | 192.168.0.0/16 | - | UDP | | AVAHI |
    | OUTPUT | REJECT | IPv4 | - | - | - | - | All | | REJECT |


    PS: sadly, this forum does not support markdown ;(

    UPDATE: if you want to run k3s directly on a ZFS OMV (without KVM), you need to install docker and force k3s to use docker.

    The issue is that k3s ships its container runtime built without ZFS support.

    Be sure to uninstall docker and delete /var/lib/docker beforehand, then run these commands:

    Code
    curl -sSL https://get.docker.com/ | CHANNEL=stable sh
    systemctl enable --now docker
    
    curl -sfL https://get.k3s.io | sh -s - --docker
    
    kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml


    Note: k3s ships traefik 1.8; if you want to disable that service during the installation, do it like this:

    Code
    curl -sfL https://get.k3s.io | sh -s - --docker --disable=traefik

    The issue/context: I imported my encrypted zfs pool from TrueNAS SCALE and everything works well so far, except that when I reboot I have to log in and simply type:

    Code
    zfs load-key $pool && zfs mount $pool
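    With the key stored in a file (as in the HowTo below), openZFS can also load all outstanding keys at once, which saves naming each pool by hand:

```shell
# load every key whose keylocation is set, then mount all datasets
zfs load-key -a
zfs mount -a
```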

    My question for OMV

    Is it on the roadmap of OMV to include these options:

    1. create an encrypted zfs pool via the WebUI
    2. load the key automatically at boot

    My HowTo for those who want an encrypted zfs pool on their OMV

    Be aware

    1. Native zfs encryption is not fully privacy proof: if someone has physical access to your machine, he will potentially be able to list the files (ls -R /encrypted-zpool) but not read their content (cat /encrypted-zpool/file), even if the key is unloaded.
    2. I switch (export/import) my encrypted zpool mirror between Debian- and Ubuntu-based systems without issues; you just need to know which version of openZFS you are using, and its algorithms' limitations/incompatibilities.
    3. Mirrors are the best; raidz with 3, 4 or 5 disks will slow down your drive access time (I/O); if you want more security, do 2 mirrors and replicate between them.
    4. For most usage cases, ZIL and SLOG are useless for home and SOHO (small office/home office); I mean, most home/SOHO setups will be limited by their network anyway.

    As root:

    1. create your key and store it somewhere accessible during the boot process.

    Code
    dd if=/dev/urandom of=/etc/zfs/zpool.key bs=1 count=32
    chmod 600 /etc/zfs/zpool.key

    2. create the pool using the drive IDs (here is an example):

    Code
    dataset_name=ssd
    drive1=/dev/disk/by-id/ata-Samsung_SSD_860_PRO_256GB_S42VNF0K205921K
    drive2=/dev/disk/by-id/ata-Samsung_SSD_860_PRO_256GB_S5GANE0N204528K
    
    zpool create -f -O encryption=aes-256-gcm -O keyformat=raw -O keylocation=file:///etc/zfs/zpool.key $dataset_name mirror $drive1 $drive2

    3. create zfs-load-key service
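    One possible template for that service, saved as /etc/systemd/system/zfs-load-key@.service (a sketch adapted from the zfs-load-key script linked in the references below; the ordering targets are assumptions for a Debian/OMV setup):

```ini
[Unit]
Description=Load ZFS encryption key for %i
DefaultDependencies=no
Before=zfs-mount.service
After=zfs-import.target
Requires=zfs-import.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key %i

[Install]
WantedBy=zfs-mount.service
```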

    4. enable the service for each encrypted ZFS pool created.*

    Code
    systemctl enable zfs-load-key@$dataset_name

    *from time to time, escaped characters end up in the $dataset_name variable; to resolve this...

    Code
    systemctl status zfs-load-key [TAB] [TAB]
    systemctl disable zfs-load-key@$dataset_name
    systemctl enable zfs-load-key@ssd
    systemctl daemon-reload

    References

    - create encrypted zfs pool: https://wiki.archlinux.org/title/ZFS#Native_encryption

    - script zfs-load-key: https://github.com/openzfs/zfs…50#issuecomment-497500144

    and now, the bonus track for those who read until the end ;)

    You may have noticed that since you installed zfs, mdadm tries to assemble RAID arrays without success at every boot; maybe I'm picky, but this was annoying me.


    So I was able to speed up my boot by removing every md* file inside /usr/share/initramfs-tools/

    Code
    # list the md* files shipped by initramfs-tools, remove them, then rebuild
    ls -R /usr/share/initramfs-tools/* | grep ^md
    find /usr/share/initramfs-tools -name 'md*' -delete
    update-initramfs -u

    and now there is no more waiting for mdadm randomly trying to assemble RAIDs


    ps: I also took the time to remove every file which pointed at btrfs, dm, lvm, ntfs, xfs, without any issue


    ref: https://unix.stackexchange.com/a/673315/88344

    DISCLAIMER: k3s doesn't support ZFS at the moment; so if you use ZFS, you will need to create a k3os VM.


    Adopting the future (present)


    More and more, Kubernetes is taking the lead in the world of containerization. The early-bird stage is done, and even solutions for home users are appearing (such as k8s at home).

    Here is a good source of charts: https://artifacthub.io/packages/search



    For a long time, I simply used docker-compose because k8s demanded too much time and effort. Then I recently discovered k3s, which can run as a single node (no cluster needed) and is 100% compatible with k8s.

    From TrueNAS SCALE to OMV

    TrueNAS SCALE integrated it very nicely inside its WebUI and, even if it is still in beta, they already have a community taking over the translation of popular apps into their ecosystem.

    https://github.com/truecharts/apps


    But TrueNAS SCALE, being TrueNAS, even if the SCALE project is based on Debian, simply took the Debian kernel and rebuilt everything around it; as a result, nothing works as expected:

    KVM can't pass PCI and USB through properly, apt-get is unavailable, and so on.


    - So this is why I converted my encrypted zfs pool from TrueNAS to OpenMediaVault.

    - Now my Windows VM runs on ZFS with proper passthrough and all my shares are back; next I tackle the missing k8s part, because docker-compose is sad and so 2010.

    Step1: Installing k3s

    WARNING: k3s doesn't support ZFS (https://github.com/rancher/k3os/issues/331)

    skip this step 1 and create a k3os VM instead (https://github.com/rancher/k3os#quick-start), then go to step 2


    It is as easy as they mention (url):

    Code
    curl -sfL https://get.k3s.io | sudo sh -
    # Check for Ready node, takes maybe 30 seconds
    k3s kubectl get node

    Step2: Helm the magic wand

    ref: https://helm.sh/docs/intro/quickstart/

    Code
    wget https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
    bash get-helm-3

    Step3: the missing dashboard

    I tried a bunch of dashboards; the most natural was kubernetes-dashboard, but again it brought back the complexity of k8s.

    Then I discovered that portainer can manage k8s and charts, which is nice since OMV users are probably already familiar with it.

    ref: https://docs.portainer.io/v/ce…rver/kubernetes/baremetal

    Code
    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    helm repo add portainer https://portainer.github.io/k8s/
    helm repo update
    
    helm install --create-namespace -n portainer portainer portainer/portainer
    
    export NODE_PORT=$(kubectl get --namespace portainer -o jsonpath="{.spec.ports[1].nodePort}" services portainer)
    export NODE_IP=$(kubectl get nodes --namespace portainer -o jsonpath="{.items[0].status.addresses[0].address}")
    echo https://$NODE_IP:$NODE_PORT


    and voilà! You're ready to chart your OMV via Portainer.