Posts by reddy

    Today I received a new update for OMV itself (5.6.2-1), and after installing it I could suddenly see openmediavault-zfs 5.0.6 without enabling any extra repositories. After installing that, everything seems to be back to normal. Magic.
    Thanks for all the help!

    Thanks for the response! I did exactly as you said (though I was a bit hesitant, since the worst problems in this thread seemed to come from the backports): left only backports enabled, ran apt clean, went to Update Management and checked for new updates; only the recent updates to docker-ce, docker-ce-cli and openmediavault-omvextrasorg stayed on the list. Then I went to Plugins and searched for openmediavault-zfs: only 5.0.5 there.
    I also tried from the command line, where I can clearly see the backports, but still no luck installing openmediavault-zfs:

    Then I found an option to enable pre-release updates and community-maintained updates in the Update Manager. I enabled those, updated all the packages and tried openmediavault-zfs: still just 5.0.5. Finally I enabled all repositories under OMVExtras, ran apt update again and tried openmediavault-zfs once more; still the same, it can't install due to missing dependencies and only 5.0.5 is visible... :(
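    For anyone else chasing this: a quick way to see every plugin version apt can currently see, and which repository each one comes from (assuming a standard apt setup), is:

```shell
# list candidate versions of the plugin and the repository each comes from
apt-cache policy openmediavault-zfs
```

    If 5.0.6 doesn't show up there at all, no amount of reinstalling will find it; the repository offering it simply isn't enabled.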

    I'm in the same situation now, with the ZFS pool missing from the UI and a "No filesystem backend exists for 'zfs'." error thrown at me any time I try to access the File Systems menu.

    I'm currently on a Proxmox kernel ("Debian GNU/Linux, with Linux 5.4.101-1-pve"), with the non-Proxmox kernel and headers removed.

    I have never had "Testing repo", "Extras repo" nor "Backports" enabled.

    First I tried enabling "Testing repo" and "Extras repo" (but not "Backports"), running apt update and then following RE: ZFS packages update issue (OMV, but the newest available version of zfsutils-linux seems to be installed already:

    root@debian:/home/reddy# apt -s install zfsutils-linux
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    zfsutils-linux is already the newest version (2.0.3-pve2).
    zfsutils-linux set to manually installed.
    The following package was automatically installed and is no longer required:
    Use 'sudo apt autoremove' to remove it.
    0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.

    and I can't see version 5.0.6 of the openmediavault-zfs plugin anywhere:

    root@debian:/home/reddy# apt list -a openmediavault-zfs
    Listing... Done
    openmediavault-zfs/buster,now 5.0.5 amd64 [residual-config]
    openmediavault-zfs/buster 5.0.4 amd64 [residual-config]
    openmediavault-zfs/buster 5.0.3 amd64 [residual-config]
    openmediavault-zfs/buster 5.0.2 amd64 [residual-config]
    openmediavault-zfs/buster 5.0.1 amd64 [residual-config]
    openmediavault-zfs/buster 5.0 amd64 [residual-config]

    so trying to install it would fail:

    Where can I find 5.0.6 version of openmediavault-zfs plugin? Which repositories do I have to enable?

    So I tried to follow RE: ZFS packages update issue (OMV instead, but apparently the whole apt cache is gone after enabling "Testing repo" and "Extras repo" and running apt update:

    root@debian:/home/reddy# ls -a /var/cache/apt/archives/
    . .. lock partial
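    Even with the local cache empty, it should be possible to re-fetch the .deb for a specific version straight from the repositories (a sketch, assuming that version is still offered by an enabled repo):

```shell
# re-download the .deb for a specific package version into the current directory
apt-get download openmediavault-zfs=5.0.5
```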

    I read the whole thread and I'm still confused. Can someone help me, please?

    Thanks again geaves and ryecoaaron for helping me debug it! It was indeed an AppArmor problem. I disabled it in the way described at and sure enough the Yacht page opens nicely on port 8001. I hope this thread helps anyone facing similar issues in the future.

    I honestly don't know how AppArmor got there in the first place; I tried to install Debian as lean as possible. On the other hand, the Debian wiki discourages disabling it, but I'd bet that in my scenario (an OMV-based NAS on an internal network, with a few extra docker containers) I shouldn't be worried too much.
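    For anyone landing here with the same symptom, disabling AppArmor usually boils down to something like this (a sketch, not necessarily the exact steps from the link above):

```shell
# stop the service and keep it from starting again on boot
systemctl stop apparmor
systemctl disable apparmor
# optionally also switch it off at the kernel level: add "apparmor=0"
# to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
update-grub
reboot
```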

    Huge thanks again geaves! Funnily enough, my logs look the same as yours up until the "Permission denied" line...

    Another thing that came to my mind is that I installed OMV on top of a minimal Debian installation following https://openmediavault.readthe…stallation/on_debian.html, not from the original ISO, as I wanted to set it up on BTRFS. So maybe it has some extra firewall/security settings... I have the same setup on VBox, so not surprisingly I'm hit by the same error there. I could try setting up a VM using the original OMV ISO, but that wouldn't prove much more than what I've already learnt from you.

    For example, apparently I have AppArmor running:
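    A quick way to check whether AppArmor is active on any box (aa-status comes from apparmor-utils, so it may not be installed everywhere):

```shell
# "Y" means AppArmor is enabled in the running kernel
cat /sys/module/apparmor/parameters/enabled 2>/dev/null || echo "AppArmor module not loaded"
# detailed profile list, if apparmor-utils happens to be installed
command -v aa-status >/dev/null && aa-status || true
```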

    Thanks for all your time!
    I actually did not install Portainer at all. I decided to try Yacht first, that's why it's on its default port 8001.

    And I took your advice and tested the same on VBox, leaving all default docker paths unchanged, so /var/lib/docker (but it's still not ext4, just BTRFS, as I wanted to have snapshotting on / as well). And I managed to snatch the beginning of the Yacht logs, clearly stating it's running as a user with UID and GID of 911. Can this be the cause? I don't have such a user on my system; I checked both /etc/passwd and /etc/group...
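    For reference, checking whether UID/GID 911 exists on the host can be done like this (as far as I understand, 911 is just the in-container default user for images like this, so it normally doesn't need to exist on the host):

```shell
# look up UID/GID 911 in the host's user and group databases
getent passwd 911 || echo "no host user with UID 911"
getent group 911  || echo "no host group with GID 911"
```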

    SelfhostedPro - any chance you could also take a look, please?

    Thanks for the link. I might have misunderstood something, but I think my problem is related to network socket permissions, not the filesystem, right? Which is weird, as the docker processes run as root:

    reddy@nas-new:~$ ps aux | grep dock
    root 2219 0.1 0.7 1244528 112760 ? Ssl Feb09 2:21 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    reddy 8178 0.0 0.0 7928 892 pts/0 S+ 22:20 0:00 grep dock
    root 10092 0.0 0.0 548820 3824 ? Sl Feb09 0:00 /usr/bin/docker-proxy -proto tcp -host-ip -host-port 8001 -container-ip -container-port 8000

    Besides, ports 8000 and 8001 should be usable by non-root users anyway...
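    To double-check that: the kernel only restricts binding to ports below a threshold, which can be read via sysctl; by default anything from 1024 up is open to non-root processes:

```shell
# first port a non-root process may bind to; the default is 1024,
# so 8000 and 8001 are indeed available to unprivileged users
cat /proc/sys/net/ipv4/ip_unprivileged_port_start
```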

    Just in case, my current /etc/docker/daemon.json:

    "data-root": "/tank/data/docker"

    and docker info, which confirms I'm using ZFS storage driver already:
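    (A handy one-liner for just that field, in case someone wants it without the full docker info wall of text:)

```shell
# print only the storage driver name (e.g. "zfs")
docker info --format '{{.Driver}}'
```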

    Have you changed the path where docker is installed via the GUI?

    Indeed I did. I put it on the ZFS pool: /tank/data/docker

    And apparently it created the folder with root write permissions only:

    reddy@nas-new:~$ ls -l /tank/data/
    total 790
    drwxrws---+ 4 reddy users 7 Feb 4 08:14 backups
    drwx--x--x 13 root root 13 Feb 9 22:10 docker

    Shall I change that?

    Hello. I'm new to docker on OMV. I've just set up a fresh server with the Proxmox kernel (installed through OMV-Extras), as I have a ZFS RAIDZ2 pool there. I also installed docker and tried to set up Yacht, both through OMV-Extras as well, just by pressing "+ Install" in the UI: first docker (then a server reboot, just in case), then Yacht. Unfortunately, I can't open the Yacht UI; it looks like it can't bind to the port. The container seems to be running:

    root@nas-new:/home/reddy# docker ps
    09c68c7e634d selfhostedpro/yacht "/init" 7 minutes ago Up 7 minutes>8000/tcp yacht

    But I get repeating errors in the logs:
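    For anyone wanting to reproduce, the container's output can be followed with:

```shell
# stream the Yacht container's logs; Ctrl-C to stop
docker logs -f yacht
# or just the most recent lines
docker logs --tail 50 yacht
```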

    What am I doing wrong, and how do I set it up correctly?

    Thank you both!

    One concern I have with attempting to use ZFS for the system drive is that it's not native to the Debian kernel, so if something goes wrong and I end up in the root shell, I won't have all the tools and means I need to fix it, right? It might get really tricky in such a situation. BTRFS, on the other hand, should work out of the box, if I'm not mistaken?

    crashtest - the Proxmox Kernel is a post-install setup, right? So that's the way I could get ZFS for my data drives, but still not for the original system installation of OMV.

    So my current idea remains as this, please correct me if that isn't a sane approach:

    - BTRFS for the system SSD drive (with snapshots sent to an offline backup, if possible)

    - ZFS RAIDZ2 on 5 Seagate IronWolfs for data (likely with snapshots, too, and an offline backup)

    And you made me confused a bit with this:


    Setup a RAIDZ2 pool. (My preference would be for a RAID10 equivalent, but that's me.)

    Isn't RAIDZ2 rather an equivalent of RAID6? Besides, I can't really set up anything RAID10-like with 5 drives. I also don't have a requirement for particularly good read performance; my current RAID5 setup is fast enough for my needs. I care more about the safety of the data and decent utilization of the drive space, so I considered a RAID6-like approach a good compromise. Still, correct me if I'm wrong, please.
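    For what it's worth, the raw capacity math behind that choice (a back-of-the-envelope sketch, before any ZFS metadata overhead):

```shell
# RAIDZ2 keeps (drives - 2) drives' worth of space for data
drives=5; parity=2; size_tb=4
echo "usable ~ $(( (drives - parity) * size_tb )) TB raw"
```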

    Thanks HannesJo! I'll definitely test it first on a VM and likely I'll ask more questions :)

    I can install plain Debian first (especially since my motherboard is UEFI-only), or I can modify the OMV ISO to enable detailed partitioning the same way I did ages ago with my initial encrypted setup.

    About the filesystem for the OMV setup - I guess it's simply much easier with BTRFS than ZFS, as it's available in the distro by default. But why did you choose it over ZFS for the data? I had the impression that ZFS is a bit more reliable than BTRFS, especially for RAID5/RAID6-like configurations (see: Debian wiki).

    Thanks for the hint about the way to handle passphrases!

    BTW, I've just found this nice blog post: Installing Debian 10 Buster with Encrypted LVM and btrfs Subvolumes, looks interesting.


    I've been using OMV for years, configured with full encryption (both the system partitions and the storage, like I wrote with full guidelines on a no-longer-available wiki page), but the time has finally come to refresh the hardware and the whole setup. I used to run an encrypted RAID5 setup with LUKS and mdadm, with ext4 on 3 drives. With the new setup I want better resilience against drive failure (I had to replace disks several times over those years, praying the RAID5 would rebuild properly), so I initially thought of a setup similar to the one I had, just using 5 drives with RAID6. However, now that I've started to read up again, it looks like better options are available, like ZFS and BTRFS for example.

    The hardware I decided to go with is an ASRock J4105B-ITX, 16GB of non-ECC RAM, a Dell H200 SAS card flashed to IT mode, 5 Seagate IronWolf 4TB drives for data and a small 32GB SSD for the system.

    I'll also keep offline backups on separate drives.

    My goals:

    - required: system filesystem snapshotting (so I can roll back in case something goes wrong)

    - required: system and data filesystems encrypted

    - required: resilience to two drive failures

    - nice-to-have: data filesystem snapshotting

    Could you recommend how to set up the system? Can ZFS or BTRFS be used for OMV system partitions, so I can have snapshotting capability? Can they be easily encrypted, too? Shall I go with ZFS RAIDZ2 for the 5 data drives? Any links to the guidelines appreciated, too.
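    In case it helps to comment on something concrete, my current understanding is that the data-pool part would look roughly like this (device names are placeholders; in reality I'd use stable /dev/disk/by-id paths, and the encryption bits assume OpenZFS 0.8+ native encryption):

```shell
# sketch: RAIDZ2 over the 5 data drives, 4K-sector friendly
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
    /dev/disk/by-id/ata-DISK5
# encrypted dataset on top, unlocked with a passphrase at mount time
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/data
# snapshots for the nice-to-have goal
zfs snapshot tank/data@first
```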

    I have some progress. I managed to boot it when I went through the following additional steps:

    • While chroot'ed into the installed system during the installation, just before the reboot, I installed cryptsetup manually:

      /etc/init.d/networking restart
      apt-get update
      apt-get install cryptsetup kbd
      update-initramfs -k all -u

    • After the first reboot the encrypted partition is still not open, so startup just drops to busybox; I opened it manually, scanned for logical volumes and exited busybox to boot into the final system:

      cryptsetup luksOpen /dev/sda5 sda5_crypt
      lvm vgscan
      lvm vgchange -ay

    • When the system finally booted, I installed all updates through the update manager in the OMV UI

    It still shows a "volume group not found" error after reboot, but right after that it asks for the password to the encrypted partition and then boots properly.

    Does anyone have an idea why cryptsetup and its configuration are missing when simply using the installer?
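    For context, my understanding is that the passphrase prompt at boot only appears if the encrypted device is listed in /etc/crypttab when the initramfs is built, i.e. something along these lines (the UUID is a placeholder; blkid /dev/sda5 shows the real one):

```
# /etc/crypttab
sda5_crypt UUID=<uuid-of-/dev/sda5> none luks
```

    followed by update-initramfs -u -k all, which is probably why installing cryptsetup and rebuilding the initramfs in the chroot made it boot.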


    I'm trying to install a fully encrypted system (except /boot) with the Kralizec ISO, similarly to what I did with the old 0.3.x (…2C_feedback.2C_discussion).
    All I changed in the ISO is, once again, just commenting out all lines in the "### Partitioning" section to get full access to the partitioning options. I didn't have to load any additional modules during the installation, as the current installer already includes xts.ko and gf128mul.ko. There's also no need to re-create the encrypted partition, as the installer now uses aes-xts-plain64 by default. So it'd seem it should be a nice and easy thing, with everything set up through the installer, but...
    Everything installs properly, but after the reboot the system doesn't ask for the encryption password for the partition. It fails searching for LVM volumes, which is obvious if the encrypted partition isn't open. I even tried the simplest encryption option from the installer (guided partitioning with everything on one encrypted partition) and still hit the same problem after the first reboot.

    So I tried the same with a clean Debian minimal network ISO, and it nicely asks for the password after the reboot... So it looks like the Kralizec image is somehow faulty, as if the cryptsetup startup scripts were missing after the restart. Any idea how to fix it, or how to update the Kralizec ISO to work the same way as the original Debian one?
    I know I can simply install Debian and put Kralizec on top but I'd prefer to go with the image you created as I believe you've tweaked everything in the best possible way :)

    Thanks in advance for any help. I've spent two full days and countless reinstallations in VBox trying to make it work, with no luck so far :(

    Update: I also tried to chroot into the installed system before the reboot (similarly to what's described in my initial guide linked above) and install cryptsetup manually there (running "update-initramfs -u" after that) - no change, it still doesn't boot. What might be stripped from the Kralizec image such that it lets the installer create all the necessary partitions but won't boot with encryption?


    First of all - apologies, as it seems that the mkisofs command parameters on the wiki got cut. I've fixed that now.

    The reason you get an error about an incorrect CD is most likely an improper (or missing) update of the md5sum of the preseed.cfg file in md5sum.txt.
    Double check that, please.
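    For anyone hitting this, the refresh itself is a one-liner; here's a self-contained demo on a scratch directory (in a real run you'd do the same inside the unpacked ISO tree instead of iso-demo):

```shell
# set up a scratch tree standing in for the unpacked ISO
mkdir -p iso-demo && cd iso-demo
echo 'd-i preseed contents' > preseed.cfg
echo 'deadbeefdeadbeefdeadbeefdeadbeef  ./preseed.cfg' > md5sum.txt  # stale entry

# drop the stale line and append a fresh checksum for the edited file
grep -v 'preseed.cfg' md5sum.txt > md5sum.txt.new
md5sum ./preseed.cfg >> md5sum.txt.new
mv md5sum.txt.new md5sum.txt

md5sum -c md5sum.txt   # should report: ./preseed.cfg: OK
```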

    Just FYI - since I switched to ext4 (more than 6 months ago now) my mirror has been stable, with no surprises like this split. I hope it stays that way :) I have no idea if and how the file system can affect mdadm, so I don't know how to help anybody (or even myself) if it happens again...


    I'm affected by the problems with Intel wlan cards (yes, I know I should connect the NAS with an ethernet cable, but I need to use my NAS over wlan :) ) - which exist even in the 3.2 kernel available in the backports for Squeeze. How do I upgrade to a more recent kernel (like 3.9.x or 3.10.x)? Is e.g. this guide safe: …tu-linux-mint-and-debian/

    BTW, any estimate of when we can expect an OMV release based on Wheezy? Newer kernels are available in the Wheezy backports.