Posts by lepri13

    Setting all the options first and saving them, then enabling the plugin and saving again, produces a different error:


    OMV Version 3.0.99
    CP config:
    OMV default repo, master branch
    Problem:
    Enabling CP causes an error


    I have reinstalled the plugin without any success.

    When you run apt-get update you get an error. I have checked with a friend and his OMV has the same issue:
    Ign file: Release.gpg
    Ign file: Release
    Ign file: Translation-en_IE
    Ign file: Translation-en
    Hit http://ftp.ie.debian.org wheezy Release.gpg
    Hit http://packages.omv-extras.org stoneburner Release.gpg
    Hit http://packages.openmediavault.org stoneburner Release.gpg
    Hit http://ftp.ie.debian.org wheezy-updates Release.gpg
    Hit http://dh2k.omv-extras.org plex-wheezy-mirror Release.gpg
    Hit http://ppa.launchpad.net precise Release.gpg
    Hit http://ftp.ie.debian.org wheezy Release
    Hit http://security.debian.org wheezy/updates Release.gpg
    Hit http://packages.omv-extras.org stoneburner-testing Release.gpg
    Hit http://download.mono-project.com wheezy Release.gpg
    Hit http://www.greyhole.net stable Release.gpg
    Ign http://dh2k.omv-extras.org stoneburner-miller Release.gpg
    Hit http://ppa.launchpad.net precise Release
    Hit http://ftp.ie.debian.org wheezy-updates Release
    E: Release file for http://ftp.ie.debian.org/debian/dists/wheezy-updates/Release is expired (invalid since 1d 13h 9min 31s). Updates for this repository will not be applied.
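    The expired-Release error above can be bypassed if you trust the mirror. A minimal sketch: the Acquire::Check-Valid-Until option is a real apt setting, but the date stamp below is a made-up example just to show what apt is actually checking.

    ```shell
    #!/bin/sh
    # The "Release file ... is expired" error means the mirror's Release file
    # has passed its Valid-Until stamp. If you trust the mirror, apt can be
    # told to skip the check for one run:
    #
    #   apt-get -o Acquire::Check-Valid-Until=false update
    #
    # apt's check is just a date comparison; reproduced here by hand with an
    # example stamp (not taken from a real Release file):
    valid_until="Mon, 01 Feb 2016 00:00:00 UTC"
    now=$(date +%s)
    stamp=$(date -d "$valid_until" +%s)
    if [ "$stamp" -lt "$now" ]; then
        echo "Release file expired"
    fi
    ```

    A better long-term fix is usually pointing sources.list at a mirror that is still being refreshed, since the expiry check exists to catch stale mirrors.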

    Not saying it doesn't work, just need to set your expectations. I have seen massive problems, and unless you have another server to move your data to while you fix the issue you MAY lose your data. It's more of an FYI, not a run for the hills :)

    After updating SickRage the service fails to start with the error:


    Error #4000:
    exception 'OMVException' with message 'Failed to execute command 'export LANG=C; invoke-rc.d 'sickbeard' start 2>&1': Starting SickBeard
    invoke-rc.d: initscript sickbeard, action "start" failed.' in /usr/share/php/openmediavault/initscript.inc:176
    Stack trace:
    #0 /usr/share/php/openmediavault/initscript.inc(141): OMVSysVInitScript->invoke('start')
    #1 /usr/share/php/openmediavault/initscript.inc(61): OMVSysVInitScript->start()
    #2 /usr/share/openmediavault/engined/module/sickbeard.inc(116): OMVSysVInitScript->exec()
    #3 /usr/share/openmediavault/engined/rpc/config.inc(206): OMVModuleSickbeard->startService()
    #4 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
    #5 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
    #6 /usr/share/php/openmediavault/rpcservice.inc(158): OMVRpcServiceAbstract->callMethod('applyChanges', Array, Array)
    #7 /usr/share/openmediavault/engined/rpc/config.inc(224): OMVRpcServiceAbstract->callMethodBg('applyChanges', Array, Array)
    #8 [internal function]: OMVRpcServiceConfig->applyChangesBg(Array, Array)
    #9 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
    #10 /usr/share/php/openmediavault/rpc.inc(79): OMVRpcServiceAbstract->callMethod('applyChangesBg', Array, Array)
    #11 /usr/sbin/omv-engined(500): OMVRpc::exec('Config', 'applyChangesBg', Array, Array, 1)
    #12 {main}


    Has anyone seen this?
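    A way to dig further, as a sketch: the init script path comes from the trace above, but the log location is an assumption and may differ on your box.

    ```shell
    #!/bin/sh
    # invoke-rc.d reports 'action "start" failed' whenever the init script
    # exits non-zero; running the script by hand usually shows the real error:
    #
    #   /etc/init.d/sickbeard start; echo "exit status: $?"
    #   tail -n 50 /var/log/syslog
    #
    # The failure OMV wraps in Error #4000 is just a non-zero exit status;
    # demonstrated here with a stand-in command that always fails:
    /bin/false
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "start failed with exit status $status"
    fi
    ```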

    As promised, here are my build details:


    I have had a lot of time to test and am finally happy with my config.
    I can now confidently recommend an ESXi / OMV build. There are several recommendations though for a hassle-free build:
    1. You need well-tested hardware that works well with ESXi, otherwise it will cause you endless hassle (new is not always best) :0
    2. You NEED a good RAID card that works well with ESXi and Debian (OMV), or a disk controller that you can pass through to OMV for direct disk access.
    3. HDDs that are happy to run in a RAID config if you use one (WD Greens are not suitable; I have had issues when running software RAID where they would drop out of the array due to power-saving firmware).
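    On point 3, a drive dropping out shows up as a missing slot in /proc/mdstat. A minimal sketch of what to look for; the device names and the sample mdstat line below are made up for illustration.

    ```shell
    #!/bin/sh
    # Commands to inspect a live software RAID array (md0 is an example):
    #
    #   cat /proc/mdstat
    #   mdadm --detail /dev/md0
    #
    # In mdstat output a dropped drive appears as "_" in the status brackets,
    # e.g. [UU_U] means slot 3 of 4 is missing. Demo on a sample line:
    line="md0 : active raid10 sda1[0] sdb1[1] sdd1[3] [4/3] [UU_U]"
    case "$line" in
        *_*) state="degraded" ;;
        *)   state="healthy" ;;
    esac
    echo "array $state"
    ```

    If a WD Green keeps disappearing like this, the usual suspect mentioned above is the drive's aggressive head-parking firmware rather than mdadm itself.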


    I had originally planned to build my rig using ZFS for OMV, but after some reading decided that until ZFS is supported by default in Linux and well tested I would stick with software RAID. ZFS is awesome, but when it goes wrong there is nothing you can do. We use it at work for a SAN, and the issues with Linux are very real: you need to be prepared with an additional backup strategy. I was not prepared for that type of setup, so I left it until I can do some further testing.


    ESXi allowed me to use the extra power of the CPU and have a fully functional home lab. I run around 10-15 servers, a Windows / Linux mix, without any issues, at an average load of about 40% on CPU / RAM.


    So the setup is as follows:
    Motherboard: Supermicro X10SRi-F
    CPU: Intel E5 1620 v3
    RAM: Samsung 32 GB DDR4, 2 x 16 GB ECC 2133 MHz
    RAID card: LSI 9211-8i, firmware ver 16 (it's important)
    HDDs: 4 x WD Red 6 TB in software RAID 10 (handled by OMV), an SSD for ESXi cache, and a few smaller disks for junk / temp folders etc.
    ESXi 6 update 1 running from a SATA DOM
    Case: Supermicro SC733TQ-500B
    ICY Dock ToughArmor MB992SKR-B 2.5" drive cage
    ICY Dock FatCage MB153SP-B drive cage


    So far I have had no issues running OMV in a VM, except for the inability to do snapshots (an ESXi limitation due to how disks are handled in passthrough mode). OMV has successfully upgraded twice between major releases without any issues.


    All in all I am very pleased with the upgrade from the HP MicroServer. This setup allows me to run a successful home network with minimal maintenance. My current uptime is 230 days for ESXi and 184 days for OMV; all restarts are upgrade/maintenance related.


    On a separate note:
    CPU cooler: at the start I used the Supermicro cooler, which even though advertised as quiet was very noisy; you could hear it in the other room. The SNK-P0048AP4 has great thermal performance and would work great for a 2U chassis, but for home / office use you need to look at alternatives.


    I ended up using the Noctua NH-U9DX i4. On the face of it it was a perfect fit, but I had a major setback. Once the system fully started, all the fans would run at 100% speed, then slow down, only to repeat the cycle over and over. After some investigation, reseating the cooler, and looking at the CPU and system temps, I noticed an IPMI warning that the CPU fan speed was below the threshold.
    After some google-fu I came across this post: https://calvin.me/quick-how-to-decrease-ipmi-fan-threshold/


    All in all, changing the values in IPMI was easy and well documented by Calvin. Now my system is whisper quiet.
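    For reference, a sketch of the kind of change Calvin's post describes: the ipmitool commands are real, but the sensor name and threshold values here are examples, not the exact ones for the X10SRi-F.

    ```shell
    #!/bin/sh
    # The IPMI warning fires when fan RPM drops below the board's lower
    # thresholds; the fix is lowering those thresholds so a quiet fan's
    # idle speed no longer trips them:
    #
    #   ipmitool sensor get "FANA"
    #   ipmitool sensor thresh "FANA" lower 100 200 300
    #
    # The warning logic itself is a simple comparison; demo with example
    # numbers (a Noctua idling below a stock lower-critical threshold):
    rpm=400
    lower_critical=600
    if [ "$rpm" -lt "$lower_critical" ]; then
        echo "fan speed below threshold"
    fi
    ```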


    And now for the pictures:



    I hope this helps someone build their home lab / AIO server.


    It doesn't do RAID 5 and has no BBU. I got mine because I needed a lot more ports and wanted the cleaner SAS-to-4-SATA cables. I also thought about using ZFS/btrfs someday.



    mpt2sas0: LSISAS2008: FWVersion(15.00.00.00), ChipRevision(0x03), BiosVersion(07.29.00.00)
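    That line comes from the kernel log (e.g. dmesg | grep -i mpt2sas), and its FWVersion field is the firmware version being discussed. A sketch of pulling it out of a captured line; the sed pattern is mine, not from any vendor tool.

    ```shell
    #!/bin/sh
    # Extract the FWVersion field from an mpt2sas kernel log line. On a live
    # system the line would come from:  dmesg | grep -i mpt2sas
    # (LSI's sas2flash -list utility also reports it, if installed.)
    line="mpt2sas0: LSISAS2008: FWVersion(15.00.00.00), ChipRevision(0x03), BiosVersion(07.29.00.00)"
    fw=$(echo "$line" | sed -n 's/.*FWVersion(\([0-9.]*\)).*/\1/p')
    echo "firmware: $fw"
    ```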


    Cheers man, I am doing bench testing over the weekend; the setup is shaping up to be epic. I will do a small write-up. I love the community here, way better than FreeNAS.


    And another reason for a separate RAID controller: when you pass a device through with ESXi you have to pass the whole controller. So if you have only one controller, all SATA ports will belong to OMV, which defeats the whole idea of an AIO server.

    The card is supported by ESXi so that's not a problem. The reason for a separate RAID controller is that I can pass the RAID card to OMV inside ESXi, and OMV should be happy and have full control over the disks and the controller. I am planning to use the server as an AIO server: storage and VM lab.
    I have read way too much about FreeNAS 8) and decided to stay with OMV.

    Can't answer the ESXi part, but I run my 9211 in IT mode and use mdadm RAID.


    What version of firmware are you running?