Advice on my Home Lab server

  • Hi everyone

    I'm looking for your advice on a server build. First, I'll describe my current setup.


    Server 1

    • HP Microserver Gen8
    • Intel Xeon E3-1240 V2
      • 4C/8T @ 3.4 GHz
    • 16 GB ECC unbuffered DDR3
    • Storage
      • 4x 3.5" bays (3x 2TB drives + 1x empty)
      • 1× 2.5" drive caddy (in the optical/ODD bay)
      • 1 internal USB 2.0 port
    • I'm running an unRaid trial on this machine
    • I use it as a NAS and to run some Docker containers and a couple of VMs

    Server 2

    • Zotac Zbox ID86
    • Intel Atom D2550
      • 2C/4T @ 1.86GHz
    • 4 GB DDR3
    • Storage
      • 1 internal SATA port
      • 1× external HDD enclosure for 2× 2 TB 3.5" drives
    • I'm running Debian 11 with OMV 6 on this machine
    • I use it only to make a local backup of Server 1
    • It was my main NAS before I needed to run the VMs I'm running now


    I'm looking for the best OS for Server 1 (Server 2 is running OMV and I'm more than happy with it).


    My requirements are:

    1. Easy to use and setup
    2. Being able to create multiple users (everyone at my home)
    3. Support for SMB, NFS and rsync (the latter so Server 2 can pull the files for backup on a schedule)
    4. Support for Docker
    5. Support for VMs
      • I'd like this to be QEMU/KVM, for performance and for more advanced features such as hardware passthrough
    6. Support for native apps (e.g. CUPS)
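For requirement 3, the scheduled pull on Server 2 can be a single cron entry; a sketch where the user, host, paths and schedule are all placeholders:

```shell
# /etc/cron.d/pull-server1 -- hypothetical user, host and paths
# m h dom mon dow user command
0 3 *  *  *  root  rsync -a --delete backupuser@server1:/srv/data/ /srv/backup/server1/
```

`-a` preserves permissions and timestamps, and `--delete` makes the backup mirror the source.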


    At the moment I have the following options:

    1. Keep unRaid
      • I have no issue with the unRaid file system, that works for me
      • I don't really like the way Docker containers are managed. I installed Portainer, but when I create a new container through Portainer, even passing the argument restart: unless-stopped, unRaid seems to ignore it in favor of its own UI option AUTOSTART
      • It doesn't support native apps; I have my CUPS server running on Server 2
    2. TrueNAS
      • I aimed for this option because of its native ZFS support, but:
        • TrueNAS Core is based on BSD, so the VMs are not KVM-based
        • TrueNAS Scale refuses to boot from USB when installed on an SSD connected to the internal USB port
          • I want this OS installed on an SSD connected to the USB 2.0 port not for speed, but because SSDs are more resilient to repeated writes than USB flash drives, and I don't want to lose one of the 5 drive slots I have
          • Besides the SSD-on-USB issue, I've read online that TrueNAS Scale's VM support isn't the best yet for more advanced features. I don't have any specific workload at the moment, but I want this build to be somewhat future-proof
    3. Proxmox
      • I considered this option because I could create a ZFS pool and pass a volume through to OMV to use as a drive, this way having my NAS as a VM
      • Proxmox refuses to boot from USB when installed on an SSD connected to the internal USB port
      • Same reasoning as above: I want the OS on an SSD connected to the internal USB 2.0 port, not for speed, but so I don't lose one of the 5 drive slots
    4. Debian (or Ubuntu, RHEL, etc.) + Cockpit + plugins (ZFS, VMs, Docker, SMB, NFS, etc.)
      • TBH I don't have much insight into this one; I've never used Cockpit this way.
    5. OMV 6
      • My backup server is running OMV 6 and I haven't had any problems
      • I have some questions/concerns here:
        • Is it better to use ZFS + the Proxmox kernel, or MergerFS + SnapRaid?
          • If ZFS: I saw here that the OMV 6 plugin has a simple UI, and according to the comments it is read-only, so config changes need to be done via the CLI
          • If MergerFS + SnapRaid: how frequently should I run the SnapRaid parity sync?
        • Is the KVM plugin stable for day-to-day use?
        • I don't have any other box where I can test
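On the Docker point under option 1: a restart policy set in a compose file is recorded by the Docker engine itself, independent of any GUI autostart toggle. A minimal sketch (the service name and image are just examples):

```yaml
services:
  example:
    image: alpine
    command: sleep infinity
    restart: unless-stopped   # Docker-level policy, not a UI setting
```

What the engine actually stored can be checked with `docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' <container>`.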

    I know it's a very long post, but I think it explains my dilemma well and will help you help me with it.



    DISCLAIMER

    With this post I don't intend in any way to criticize the volunteers who help develop OMV. It's amazing software and I like it very much, but at this moment I need to understand whether it's the ideal tool for the job I have for it, since we don't want to drive a nail with a screwdriver.

    • Official Post

    OMV 6 is the perfect option in my opinion. I replaced Proxmox with OMV 6 + the kvm plugin myself. Only you can answer the ZFS or mergerfs question; they serve different use cases. The ZFS plugin will gain features, and the command-line setup is usually a one-time thing. I am constantly improving the kvm plugin since I use it myself and work with virtualization every day. Because it uses libvirt, you can use virt-manager, virsh, and Cockpit along with the kvm plugin. This is more flexibility than Proxmox, and the plugin is simpler than Proxmox.

    omv 7.4.2-2 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.14 | compose 7.2.1 | k8s 7.2.0-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.8


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • ryecoaaron, thanks for your input, and many thanks for your work on all the plugins you've been supporting/developing.

    I'll take your feedback into consideration. TBH, OMV was my first option, but the KVM and storage questions made me go test unRaid.


    I'll look into ZFS on OMV 6 in this forum to get more familiar with the topic. Quick question: once the Proxmox kernel is installed on OMV, do I need to take any special care with updates (e.g. to avoid installing the Debian kernel again)?

    • Official Post

    Quick question: once the Proxmox kernel is installed on OMV, do I need to take any special care with updates (e.g. to avoid installing the Debian kernel again)?

    When you install the Proxmox kernel, it adds the Proxmox repos, so it will update just like other packages. If you remove the Debian kernels, they shouldn't be reinstalled, but it won't hurt anything if they are.
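As a rough sketch of checking and cleaning up kernels afterwards (the exact package names depend on the Debian release, so treat these as examples and verify the match before purging anything):

```shell
# List the installed Debian and Proxmox kernel packages:
#   dpkg -l | grep -E 'linux-image|pve-kernel|proxmox-kernel'
#
# Optionally remove the stock Debian kernels afterwards (destructive,
# so double-check the package names first; these are examples):
#   apt purge linux-image-amd64 linux-image-6.1.0-13-amd64
```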


  • wultyc

    Added the label solved.
  • wultyc

    Added the label OMV 6.x (RC1).
  • I would indeed stick with OMV 6, as it works even on the oldest computers.
    I have TrueNAS running on a second-hand Dell R520 with 80 GB of ECC RAM and 8×4 TB drives.


    ZFS needs roughly 1 GB of RAM per TB of hard disk. ZFS also uses more disk space for itself, but it has snapshots and self-healing.

    For older computers I would suggest OMV 6 with MergerFS and SnapRaid:
    first put a filesystem on the disks, then make a pool/volume in MergerFS, and then set up SnapRaid on the individual drives.
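For the MergerFS + SnapRaid route, SnapRaid itself is driven by a small config file; a minimal sketch with hypothetical mount points (one data line per drive):

```shell
# /etc/snapraid.conf -- paths are hypothetical examples
parity  /srv/dev-disk-parity/snapraid.parity
content /var/snapraid.content
content /srv/dev-disk-a/snapraid.content
data d1 /srv/dev-disk-a/
data d2 /srv/dev-disk-b/
# Then run 'snapraid sync' on a schedule, e.g. nightly from cron.
```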

    • Official Post

    ZFS needs roughly 1 GB of RAM per TB of hard disk

    This is a widely held belief, but it is not true. ZFS will use the available free RAM, and that RAM is freed when the system needs it. You can set up a system with 20 TB of data and 4 GB of RAM with complete peace of mind; you won't have a problem on a home or small-business NAS.
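For anyone who still wants to bound the ARC cache explicitly, OpenZFS exposes the zfs_arc_max module parameter; a sketch (the 2 GiB value is only an example):

```shell
# /etc/modprobe.d/zfs.conf -- cap the ARC at 2 GiB (value in bytes)
# options zfs zfs_arc_max=2147483648
#
# Or change it at runtime without rebooting:
#   echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
```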

    • Official Post

    https://openzfs.github.io/open…tml#hardware-requirements

    Just don't use deduplication. Anyway, I think nobody does that at home; it is very expensive and does not make sense.

  • @chente I stand corrected. One concern still remains, though, and there's a new one.


    ZFS surely has more overhead than regular RAID. With the same hard drives I had less remaining space to use than on my QNAP.


    The wife factor: in case of the death of the home administrator, ZFS is far less well known than even ext4 and can cause problems for the surviving partner trying to recover the data.

    • Official Post

    ZFS surely has more overhead than regular RAID. With the same hard drives I had less remaining space to use than on my QNAP.

    I do not understand the reason for this statement. Are you comparing a Mirror Raid to a JBOD? ...

    The wife factor: in case of the death of the home administrator, ZFS is far less well known than even ext4 and can cause problems for the surviving partner trying to recover the data.

    If the server dies, the ZFS RAID can be recovered on another machine with Linux and ZFS.

    • Official Post

    I started out using UnionFS and SnapRaid on OMV. Over time I realized that maintaining this system requires some dedication and is not very intuitive. In case of problems it does not seem like a simple system.

    Researching ZFS, I found that it is very easy to use and practically maintenance-free. It has some disadvantages: it is less flexible in the initial configuration (disks of the same size, inability to remove disks from or add disks to the pool). Setting that aside, it seems easier to maintain; I don't really have to do anything. Also, if a disk breaks I don't lose the data I'm working with at that moment, since SnapRaid is not instantaneous. SnapRaid doesn't like databases; with ZFS I don't have that problem. ZFS also lets me compress the entire filesystem if I want to.

    I like the simplicity, so I reconfigured my drives: I had 4×4 TB drives, bought another one, and built a 5×4 TB RAID-Z1 which I hope will last me a long time. I set it up with the OMV 5 plugin; it was very straightforward, although from the CLI it should be very simple too. I trust this plugin will be ported to OMV 6 with the same functionality. The rest of the disks I use as backup on another server with MergerFS, without SnapRaid.
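For reference, the CLI equivalent of such a setup is essentially one command; a sketch with placeholder device names (on a real system use stable /dev/disk/by-id/ paths):

```shell
# Create a 5-disk RAID-Z1 pool named 'tank' with lz4 compression.
# ashift=12 assumes 4K-sector drives; device names are placeholders.
#
#   zpool create -o ashift=12 -O compression=lz4 tank \
#       raidz1 sda sdb sdc sdd sde
#
# Check pool health afterwards:
#   zpool status tank
```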

  • I started out using UnionFS and SnapRaid on OMV. Over time I realized that maintaining this system requires some dedication and is not very intuitive.

    The MergerFS plugin now has the same functionality as the UnionFS plugin; otherwise I wouldn't have been able to deploy both on the OMV 6 system I now have running as a backup of the TrueNAS.


    Of course ZFS can be recovered on other machines; whether my wife can and will do it remains to be seen.

    The overhead of ZFS is that a RAID-Z1 of 8×4 TB leaves you with less space for your files than a RAID5 built from the same drives.
    And if you take the 80% fill rule into account, it gets even worse.
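To put rough numbers on that capacity comparison, using the commonly cited 1/64 copy-on-write reservation for ZFS and the 80% fill guideline (TB/TiB rounding and metadata ignored):

```shell
#!/bin/sh
# Rough usable-capacity arithmetic for 8x4TB in RAID-Z1 vs. md RAID5.
# Both lose one drive to parity; ZFS additionally reserves ~1/64 for CoW.
awk 'BEGIN {
  raw  = (8 - 1) * 4          # TB left after one parity drive: 28
  slop = raw / 64             # ZFS internal reservation: 0.4375 TB
  zfs  = raw - slop           # ~27.56 TB
  printf "raid5 usable: %.2f TB\n", raw
  printf "raidz1 usable: %.2f TB\n", zfs
  printf "raidz1 at 80%% fill: %.2f TB\n", zfs * 0.8
}'
```

So the raw 1/64 reservation itself is small; it is the 80% guideline that accounts for most of the perceived loss.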


    Also, imagine you have a TrueNAS with RAID-Z1 and an OMV 6 with MergerFS and SnapRaid (one parity disk and 2 loose disks, synced every month). A disaster occurs and you lose 2 disks on the TrueNAS: everything is gone, and you need to recover from backup. This will take a long time for the whole pool.

    If the same happened to the OMV 6 box, the rebuild time would be shorter.


    Both have their merits. I hope that eventually a wizard makes it into OMV that, on first start, just asks some questions, prepares the disks, and makes the filesystem.


    Have a fine New Year.
    Guy

    • Official Post

    The MergerFS plugin now has the same functionality as the UnionFS plugin; otherwise I wouldn't have been able to deploy both on the OMV 6 system I now have running as a backup of the TrueNAS.

    The difference between UnionFS and MergerFS is that UFS creates a union of disk drives, while MergerFS joins folders. Therefore MergerFS does the same as UFS and has more possibilities. UFS has not been ported to OMV 6 because with MergerFS you can do everything.

    Of course ZFS can be recovered on other machines; whether my wife can and will do it remains to be seen.

    True, this will depend on your wife. I did not understand the meaning of the question in your first post.

    The overhead of ZFS is that a RAID-Z1 of 8×4 TB leaves you with less space for your files than a RAID5 built from the same drives.
    And if you take the 80% fill rule into account, it gets even worse.

    I did not know this; I will investigate. What is the reason for it? In any case, I would not use RAID5 in mdadm; it was strongly discouraged many years ago.

    Also, imagine you have a TrueNAS with RAID-Z1 and an OMV 6 with MergerFS and SnapRaid (one parity disk and 2 loose disks, synced every month). A disaster occurs and you lose 2 disks on the TrueNAS: everything is gone, and you need to recover from backup. This will take a long time for the whole pool.

    If the same happened to the OMV 6 box, the rebuild time would be shorter.

    True, the recovery time is shorter. Still, from my point of view it's an unlikely scenario. In any case, I'm not too worried about being at home for two days without watching movies; personal information takes up very little space.

    I hope that eventually a wizard makes it into OMV

    This is planned to be done.

    Have a fine New Year.

    Likewise, I wish you the same.

    • Official Post

    I did not know this; I will investigate. What is the reason for it? In any case, I would not use RAID5 in mdadm; it was strongly discouraged many years ago.

    OK, I'll answer myself. It seems that ZFS reserves 1/64 of the space for copy-on-write, which is equivalent to about 1.5% of the total. I suppose that is the price to pay for the advantages ZFS has over other systems. From my point of view it is not especially significant; everyone can weigh it for their own case.

    • Official Post

    it's also advised not to use over 80% of the available space

    This recommendation can be applied to all systems for one reason or another.

    • Official Post

    The difference between UnionFS and MergerFS is that UFS creates a union of disk drives, while MergerFS joins folders. Therefore MergerFS does the same as UFS and has more possibilities. UFS has not been ported to OMV 6 because with MergerFS you can do everything.

    The mergerfsfolder plugin allowed you to merge folders.

    The unionfilesystems plugin allowed you to merge filesystems. It was still just merging paths behind the scenes.

    The mergerfs plugin allows you to merge filesystems, shared folders, and folders. So it actually does more than the other two plugins put together.
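Outside the plugin, the same merge can be expressed as a single fstab line; a sketch with hypothetical branch paths and a common policy choice:

```shell
# /etc/fstab -- pool three branches into /srv/pool (paths hypothetical)
# category.create=mfs places new files on the branch with the most free space
/srv/dev-disk-a:/srv/dev-disk-b:/srv/dev-disk-c  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
```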

