Recommendation for fully encrypted installation with snapshotting?

  • Hello.

    I've been using OMV for years, configured with full encryption (both the system partitions and the storage, as I described in a full guide on the no-longer-available wiki page), but the time has finally come to refresh the hardware and the whole setup. I used to run an encrypted RAID5 setup with LUKS and mdadm, with ext4 on 3 drives. With the new setup I want better safety against drive failure (I had to replace disks several times over those years, praying each time that the RAID5 would rebuild properly), so I initially thought of a setup similar to the one I had, just using 5 drives with RAID6. However, now that I've started reading again, it looks like better options are available, such as ZFS and BTRFS.

    The hardware I've decided to go with is an ASRock J4105B-ITX, 16GB of non-ECC RAM, a Dell H200 SAS card flashed to IT mode, 5 Seagate IronWolf 4TB drives for data, and a small 32GB SSD for the system.

    I'll also keep offline backups on separate drives.


    My goals:

    - required: system filesystem snapshotting (so I can roll back in case something goes wrong)

    - required: system and data filesystems encrypted

    - required: resilience to two drives failure

    - nice-to-have: data filesystem snapshotting


    Could you recommend how to set up the system? Can ZFS or BTRFS be used for the OMV system partitions, so I have snapshotting capability? Can they be easily encrypted, too? Should I go with ZFS RAIDZ2 for the 5 data drives? Any links to guides would be appreciated, too.

  • That is exactly what I'm sitting on. To install OMV on an encrypted BTRFS drive, the easiest (only?) way is to install Debian (without a desktop) on a LUKS-encrypted BTRFS drive and then install OMV on top of it.


    When setting up storage, you can just LUKS-encrypt every single drive and then form a BTRFS RAID directly on the unlocked drives. You should not use mdadm for building the RAID, since you would lose BTRFS's bit-rot protection. Once you're used to LUKS and BTRFS, it is really easy and straightforward. I tested the setup process on a VM first, just to be safe at every step.
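    The per-drive LUKS plus native BTRFS RAID approach can be sketched like this (device names and mapper names are placeholders — adapt to your actual drives, and note that `luksFormat` destroys existing data):

    ```shell
    # Encrypt each raw drive, then unlock it to a /dev/mapper device.
    for dev in sda sdb sdc; do
        cryptsetup luksFormat /dev/$dev            # WARNING: wipes the drive
        cryptsetup open /dev/$dev ${dev}_crypt     # unlock -> /dev/mapper/${dev}_crypt
    done

    # Build the BTRFS RAID directly on the unlocked mapper devices
    # (raid1 profile for data and metadata here; adjust the profiles to taste).
    mkfs.btrfs -m raid1 -d raid1 \
        /dev/mapper/sda_crypt /dev/mapper/sdb_crypt /dev/mapper/sdc_crypt
    ```

    The point is that BTRFS sees the decrypted mapper devices as ordinary block devices and handles the redundancy itself, so its checksumming can still repair bit rot from a good copy.
    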


    There are several ways to auto-unlock the drives at boot. I just used the same passphrase for all of my drives and activated Debian's built-in crypttab keyscript to unlock all the drives with a single passphrase prompt on bootup.
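    For reference, a crypttab sketch of that single-prompt setup (the UUIDs are placeholders; the `decrypt_keyctl` keyscript ships with Debian's cryptsetup packages and caches the passphrase in the kernel keyring, so entries sharing the same key identifier are unlocked with one prompt):

    ```
    # /etc/crypttab -- hypothetical UUIDs, shared passphrase cached via keyctl
    data1_crypt UUID=1111-aaaa cryptkey luks,keyscript=decrypt_keyctl
    data2_crypt UUID=2222-bbbb cryptkey luks,keyscript=decrypt_keyctl
    data3_crypt UUID=3333-cccc cryptkey luks,keyscript=decrypt_keyctl
    ```

    Run `update-initramfs -u` after editing so the change takes effect at boot.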


    (Of course you should ALWAYS HAVE a BACKUP)


    I can give you more detailed explanation if there are particular questions.

  • Thanks HannesJo! I'll definitely test it first on a VM and likely I'll ask more questions :)

    I can install plain Debian first (especially since my motherboard is UEFI-only), or I can modify the OMV ISO to enable detailed partitioning, the same way I did ages ago with my initial encrypted setup.

    About the filesystem for the OMV system setup - I guess it's simply much easier with BTRFS than ZFS, as it's available in the distro by default. But why did you choose it over ZFS for the data? I had the impression that ZFS is a bit more reliable than BTRFS, especially for RAID5/RAID6-like configurations (see: the Debian wiki).

    Thanks for the hint about the way to handle passphrases!


    BTW, I've just found this nice blog post: Installing Debian 10 Buster with Encrypted LVM and btrfs Subvolumes, looks interesting.

  • I am not absolutely certain, but I think installing plain Debian first is actually the only way to change the partitioning and formatting of the system disk. You may find several threads on the forum where votdev answered that question.


    About BTRFS vs. ZFS: my plan was to test both of them, so I started with BTRFS since you don't need any additional packages for it. It is automatically updated and improved with every kernel update, which makes it very attractive for the system drive, as you said. Once I had tested everything, I was convinced by BTRFS, so I had no real need to try ZFS anymore. I think there are several reasons pro ZFS and several reasons pro BTRFS.

    I think the BTRFS RAID5/6 problem can occur in very rare cases on power loss. I did quite intensive testing and was not able to reproduce it. Nevertheless, the devs say it is still not fully fixed, and it is flagged as unstable, so one should listen to that and not use RAID5/6 in a production environment. On the other hand, as far as I know, the problem can only occur on power loss, so you could probably use it in production with a UPS. Otherwise, ZFS might be better suited for you. (I thought you were using RAID10; I misread that, sorry.)

  • Anecdotally, I experienced strange behavior from BTRFS in years past. In my experience, the BTRFS command-line utilities left a bit to be desired, but that might be attributable to its maturity level at the time. Whatever the actual issue was, it seems to have been corrected a couple of years ago, and I've had no problems since. (To be fair, we're talking about a single disk/volume setup, connected by USB to an SBC. There are numerous factors involved in that but, again, whatever it was appears to have been corrected.)

    While BTRFS might be usable for your scenario, keep in mind that its penetration into the user segment of the server market is not very good yet. That means if you do have difficulties with BTRFS, free tech support is pretty thin. Among the various topics on this forum, BTRFS questions about issues/problems may go unanswered.

    On the other hand, if you have a 100% backup (always recommended), that greatly reduces the risk to your data posed by any storage method.
    _________________________________________

    ZFS, on the other hand, is very mature (decades mature) and has excellent support on Linux. I have 2 servers running ZFS with zmirrors (RAID1) with no issues after 4 or 5 years.

    If you're interested in ZFS, setting it up is easy enough.

    - In OMV-Extras, install the Proxmox kernel. (The Proxmox kernel has the ZFS headers preinstalled.)
    - Install the ZFS plugin. (For the typical user, this GUI plugin provides about 90% of the utilities needed to manage a pool.)
    - Set up a RAIDZ2 pool. (My preference would be for a RAID10 equivalent, but that's me.)
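    The last step, done from the command line rather than the plugin's GUI, might look like this (pool name and disk IDs are placeholders; using `/dev/disk/by-id` paths keeps the pool stable if the kernel reorders devices):

    ```shell
    # Hypothetical: a RAIDZ2 pool named "tank" across the five IronWolf drives.
    # ashift=12 aligns the pool to 4K physical sectors.
    zpool create -o ashift=12 tank raidz2 \
        /dev/disk/by-id/ata-ST4000VN008-AAAA \
        /dev/disk/by-id/ata-ST4000VN008-BBBB \
        /dev/disk/by-id/ata-ST4000VN008-CCCC \
        /dev/disk/by-id/ata-ST4000VN008-DDDD \
        /dev/disk/by-id/ata-ST4000VN008-EEEE

    zpool status tank   # verify all five drives show ONLINE
    ```
    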

    I have a doc posted that walks through setting up automated, rotated, self-purging snapshots. It's an older method, but it still works and has no dependencies (something to consider when upgrading).
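    As a rough illustration of that kind of self-purging rotation (my own sketch, not the exact method from the doc; pool name and retention count are assumptions), a daily cron job could be:

    ```shell
    # Take a dated recursive snapshot of the pool, then destroy all but the
    # newest 14 daily snapshots. Requires GNU head for the negative -n count.
    POOL=tank
    KEEP=14
    zfs snapshot -r "$POOL@daily-$(date +%Y%m%d)"
    zfs list -H -t snapshot -o name -s creation -r "$POOL" \
        | grep "@daily-" \
        | head -n -"$KEEP" \
        | xargs -r -n1 zfs destroy
    ```

    The lack of external dependencies (just `zfs`, `date`, and coreutils) is what keeps a scheme like this robust across upgrades.
    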

    If you're interested in setting up ZFS, let me know. I can provide a few pointers to get you started.

  • Thank you both!

    One concern I have with attempting to use ZFS for the system drive is that it's not native to the Debian kernel, so if something goes wrong and I end up in a root shell, I won't have all the tools and means I need to fix it, right? So it might be really tricky in that situation. BTRFS, on the other hand, should work out of the box, if I'm not mistaken?

    crashtest - the Proxmox kernel is a post-install step, right? So that's how I could get ZFS for my data drives, but still not for the original system installation of OMV.

    So my current idea remains as this, please correct me if that isn't a sane approach:

    - BTRFS for the system SSD drive (with snapshots sent to an offline backup, if possible)

    - ZFS RAIDZ2 on 5 Seagate IronWolfs for data (likely with snapshots, too, and an offline backup)
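    For the first point, sending system snapshots to an offline backup could look roughly like this (subvolume paths, dates, and the backup mount point are placeholders; the backup drive must also be BTRFS for send/receive to work):

    ```shell
    # Take a read-only snapshot of the root subvolume (send requires read-only).
    btrfs subvolume snapshot -r / /.snapshots/root-$(date +%Y%m%d)

    # With the backup drive mounted at /mnt/backup, stream the snapshot over:
    btrfs send /.snapshots/root-20240101 | btrfs receive /mnt/backup/

    # Later snapshots can be sent incrementally against a common parent,
    # transferring only the changed blocks:
    btrfs send -p /.snapshots/root-20240101 /.snapshots/root-20240201 \
        | btrfs receive /mnt/backup/
    ```
    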


    And you made me confused a bit with this:

    Quote

    Setup a RAIDZ2 pool. (My preference would be for a RAID10 equivalent, but that's me.)

    Isn't RAIDZ2 rather an equivalent of RAID6? Besides, I can't really set up anything RAID10-like with 5 drives. I also don't have a requirement for particularly good read performance - my current RAID5 setup is fast enough for my needs. I care more about the safety of the data and decent utilization of the drive space, so I considered a RAID6-like approach a good compromise. Still, correct me if I'm wrong, please.

  • One concern I have with attempt to use ZFS for the system drive is the fact it's not native to the Debian kernel

    I think ZFS will eventually be native but, in practical terms, it doesn't matter. The licensing issue is about the differences between the two free licenses that the Linux kernel and ZFS use. That's the only thing preventing ZFS from being integrated into the kernel. What's happening right now is pointless legal wrangling, which I believe will be resolved eventually.

    On the other hand, ZFS on Linux is sponsored by the Lawrence Livermore National Laboratory. They're technically credible and competent, they're backed by the deep pockets of the US Federal Government, and they're steadily developing ZFS. As a consequence, ZFS on Linux will be supported well into the foreseeable future.

    In practical terms, the Proxmox kernel mentioned "is" a Debian kernel with the ZFS headers preinstalled. This kernel is available for installation, in OMV, from the OMV-Extras plugin. In my particular case, I didn't go that route: I had no trouble with the ZFS header installation on the standard Debian kernel, which the ZFS plugin performs if the Proxmox kernel is not installed. (The plugin checks for the ZFS headers.) Either way, ZFS works fine.

    Isn't it so that RAIDZ2 is rather an equivalent of RAID6?

    It is.

    Besides, I can't really set up anything RAID10-like with 5 drives.

    True. You'd have to buy one more drive of the same or very similar size, for 3 mirrored pairs, to set up RAID10. With 6 drives in RAID10, you'd have the same capacity as your 5-drive RAIDZ2 but with better performance and less stressful (for the hard drives) recovery.


    When it comes to RAIDZ2 versus RAID10: as previously noted, I like zmirrors. Here's a more thorough explanation of the -> advantages of zmirrors. Take a look; it's not too long and worth the read.
