• OMV AIO ESXi Build

      As promised, here are my build details:

      I have spent a lot of time testing and am finally happy with my config.
      I can now confidently recommend an ESXi / OMV build, though there are a few recommendations for a hassle-free build:
      1. You need well-tested hardware that works well with ESXi, otherwise it will cause you endless hassle (newest is not always best).
      2. You NEED a good RAID card (or disk controller) that works well with both ESXi and Debian (OMV), and that you can pass through to the OMV VM for direct disk access.
      3. HDDs that are happy to run in a RAID config if you use one (WD Greens are not suitable; when I ran them in software RAID they would drop out of the array due to their power-saving firmware).
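For anyone hitting the same WD Green drop-outs: the usual culprit is the drive's aggressive head-parking / idle timer, which you can check and disable with idle3ctl (from the idle3-tools package) or hdparm. A rough sketch, with example device names (check yours with lsblk first):

```shell
# Read the current idle3 (head-parking) timer on a WD drive
# using idle3ctl from the idle3-tools package:
idle3ctl -g /dev/sda

# Disable the timer entirely so the drive stops parking heads;
# the drive needs a full power cycle for this to take effect:
idle3ctl -d /dev/sda

# Alternatively, disable APM-level power saving with hdparm
# (255 = power management off; not persistent on all drives):
hdparm -B 255 /dev/sda
```

Even with this, I would still not trust Greens in an array; Reds are made for it.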

      I had originally planned to build my rig using ZFS for OMV, but after some reading I decided that until ZFS is supported by default in Linux and well tested, I would stick with software RAID. ZFS is awesome, but when it goes wrong there is nothing you can do. We use it at work for a SAN, and the issues on Linux are very real, so you need to be prepared with an additional backup strategy. I was not prepared for that type of setup, so I left it until I can do some further testing.

      ESXi allowed me to use the extra power of the CPU and run a fully functional home lab. I run around 10-15 servers, a Windows / Linux mix, without any issues, at an average load of about 40% on CPU / RAM.

      So the set up is as follows:
      Motherboard: Supermicro X10SRi-F
      CPU: Intel Xeon E5-1620 v3
      RAM: Samsung 32 GB DDR4 (2 x 16 GB ECC 2133 MHz)
      RAID Card: LSI 9211-8i, firmware version 16 (it's important)
      HDDs: 4 x WD Red 6 TB in software RAID 10 (handled by OMV), an SSD for ESXi cache, and a few smaller disks for junk / temp folders etc.
      ESXi 6 Update 1 running from a SATA DOM
      Case: Supermicro SC733TQ-500B
      ICY DOCK ToughArmor MB992SKR-B 2.5" drive cage
      ICY DOCK FatCage MB153SP-B drive cage
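In case it helps anyone: OMV builds the RAID 10 for you from its web UI, but underneath it is plain mdadm, so the equivalent by hand looks roughly like this (device names are examples for the four Reds as seen inside the OMV VM, not necessarily my layout):

```shell
# Create a 4-disk RAID 10 array from the passed-through drives:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync progress:
cat /proc/mdstat

# Persist the array definition so it assembles on boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

With 4 x 6 TB in RAID 10 you get 12 TB usable and can survive one disk failure per mirror pair.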

      So far I have had no issues running OMV in a VM, except for the inability to take snapshots with ESXi (it's an ESXi limitation due to how disks are handled in passthrough mode). OMV has upgraded successfully through two major releases without any issues.

      All in all I am very pleased with the upgrade from the HP MicroServer. This setup lets me run my home network with minimal maintenance. My current uptime is 230 days for ESXi and 184 days for OMV; all restarts were upgrade or maintenance related.

      On a separate note:
      CPU cooler: at the start I used the Supermicro cooler, and even though it is advertised as quiet, it was very noisy; you could hear it in the other room. The SNK-P0048AP4 has great thermal performance and would work great in a 2U chassis, but for home / office use you will need to look at alternatives.

      I ended up using the Noctua NH-U9DX i4. On the face of it, it was a perfect fit, but I had a major setback. Once the system fully started, all the fans would spin up to 100% speed, then slow down, only to repeat the cycle over and over. After some investigation, reseating the cooler, and checking the CPU and system temps, I noticed an IPMI warning that the CPU fan speed was below the threshold.
      After some google-fu I came across this post:

      All in all, changing the values in IPMI was easy and well documented by Calvin. Now my system is whisper quiet.
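The gist of the fix is lowering the IPMI fan sensor thresholds so the slow-spinning Noctua no longer trips the "below threshold" alarm that causes the 100% spin-up cycle. A rough sketch with ipmitool (sensor name and RPM values are examples; use your own readings):

```shell
# Query current fan readings and their thresholds:
ipmitool sensor | grep -i fan

# Lower the three lower thresholds (non-recoverable, critical,
# non-critical) for FAN1 to values below the Noctua's idle RPM:
ipmitool sensor thresh FAN1 lower 100 200 300
```

Once the lower thresholds sit below the fan's minimum RPM, the BMC stops panicking and the fan curve behaves normally.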

      And now for the pictures:


      I hope this helps someone build their own home lab / AIO server.
    • lepri13 wrote:

      and you need to be prepared with an additional backup strategy.

      You need this no matter what type of filesystem/raid/etc you use. I have seen too many people without a backup. When there is a problem with mdadm or zfs, it can be a pain. There are plenty of rescue/bootable disks (pentoo and sabayon come to mind) that can work with zfs and mdadm.