Is it possible to install OMV to an M.2 PCIe SSD and run it from this SSD?

  • See my signature.


It is running well on a vmdk virtual disk, with no install issues at all. I can't say the same for OMV 3 or 4 in general; I've spent some solid time ironing out issues, but none of them were related to the NVMe SSD.

    Fractal Design Node 304:
    VMware vSphere 6.5 | OMV 4.x on Debian 9 (Stretch): ASRock H270M-ITX - i3-7100 - 16GB DDR4 - 1x 80GB vmdk on Samsung 960 EVO 250GB PCIe NVMe SSD (system drive) - 1x 120GB vmdk on Toshiba 450GB HDD (Docker data) - 3x Seagate IronWolf 8TB HDD (RDM physical passthrough, RAID-Z1 pool)


  • My primary ESXi datastore is the NVMe SSD. OMV is installed on an 80GB vmdk stored on that datastore.


    Prior to going with the vSphere solution, I had a bare-metal OMV install on that same NVMe SSD with no issues.

    Fractal Design Node 304:
    VMware vSphere 6.5 | OMV 4.x on Debian 9 (Stretch): ASRock H270M-ITX - i3-7100 - 16GB DDR4 - 1x 80GB vmdk on Samsung 960 EVO 250GB PCIe NVMe SSD (system drive) - 1x 120GB vmdk on Toshiba 450GB HDD (Docker data) - 3x Seagate IronWolf 8TB HDD (RDM physical passthrough, RAID-Z1 pool)

  • You can get vSphere for free from the VMware website. The free version has all the core functionality; you fill out a form and in a couple of days they'll issue you a free license. Otherwise, for type 1 hypervisors, you can use XenServer or Hyper-V Server 2016. Since OMV is Debian-based, either should work fine, though I'd keep it in the Linux family and go with XenServer over Hyper-V.

    Fractal Design Node 304:
    VMware vSphere 6.5 | OMV 4.x on Debian 9 (Stretch): ASRock H270M-ITX - i3-7100 - 16GB DDR4 - 1x 80GB vmdk on Samsung 960 EVO 250GB PCIe NVMe SSD (system drive) - 1x 120GB vmdk on Toshiba 450GB HDD (Docker data) - 3x Seagate IronWolf 8TB HDD (RDM physical passthrough, RAID-Z1 pool)

  • So.... I switched to FreeNAS.
    I could install to the PCIe M.2 SSD without any issues.
    The server runs smoothly, there are no thermal issues with the SSD whatsoever, and the GUI is very responsive.


    No matter what people say, I just can't get past the idea of running a server OS from a USB flash drive ;)


    Cheers!

    • Official post

    So.... I switched to FreeNAS.
    I could install to the PCIe M.2 SSD without any issues.
    The server runs smoothly, there are no thermal issues with the SSD whatsoever, and the GUI is very responsive.


    Did you try installing Debian first? As jollyrogr says, OMV does work on M.2, but the OMV ISO doesn't have the necessary components for that (it's missing UEFI support). So installing from the Debian ISO and then running the six lines of code to install OMV on top should be all it takes; a sketch of those lines is below.
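
    For OMV 4 (Arrakis) on a fresh Debian 9, the procedure looks roughly like this -- a sketch only; verify the repository line and package names against the current OMV install documentation before running it as root:

        # Add the OMV 4 (Arrakis) repository -- check the URL in the official docs
        echo "deb https://packages.openmediavault.org/public arrakis main" >> /etc/apt/sources.list
        export LANG=C.UTF-8
        export DEBIAN_FRONTEND=noninteractive
        apt-get update
        # Pull in the keyring and OMV itself
        apt-get --yes --no-install-recommends install openmediavault-keyring openmediavault
        # Initialize the OMV system configuration
        omv-initsystem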


    No matter what people say, I just can't get past the idea of running a server OS from a USB flash drive

    The FreeNAS people will tell you it is just fine, and lots of people run production VMware ESXi servers on SD cards. Just sayin'...

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Initially I was interested in OMV, but after tkaiser's off-putting comments I decided to look for an alternative NAS OS with full support for modern hardware.


    However, I appreciate the effort that others here made to provide me with useful information regarding OMV on M.2.


    The FreeNAS people will tell you it is just fine, and lots of people run production VMware ESXi servers on SD cards. Just sayin'...


    Yes, I know. To each his own, right?

  • Initially I was interested in OMV, but after tkaiser's off-putting comments I decided to look for an alternative NAS OS with full support for modern hardware.


    However, I appreciate the effort that others here made to provide me with useful information regarding OMV on M.2.


    Yes, I know. To each his own, right?

    I hope you are running ECC memory on your motherboard (but I don't think the motherboard mentioned earlier supports it).
    I don't use an ECC motherboard at home - which is why I use OMV.


    If I were running an enterprise-class NAS, then I would use FreeNAS with ZFS.
    You have more potential for data loss with ZFS than with ext4 if you are not using ECC.


    --- edit --- I checked your motherboard; its spec sheet says:
    - Supports DDR4 2400/2133 non-ECC, un-buffered memory*
    - Supports ECC UDIMM memory modules (operate in non-ECC mode)


    You might want to do some more research on how the FreeNAS and OMV file systems differ when used with non-ECC memory, and either revisit your decision or plan a backup strategy that matches the associated risks.

  • People like to rave about FreeNAS, but I won't be using it because I have no interest in running ZFS pools and ECC memory. I'm also more familiar with Linux, and I'm willing to overlook a minor install wrinkle to get a NAS system that does everything I need it to, and does it very well.

    You have more potential for data loss with ZFS than with ext4 if you are not using ECC.

    Nope.


    I have no interest in running ZFS pools and ECC memory

    If anything, ZFS is even more valuable on systems without ECC memory.


    Please read through http://jrs-s.net/2015/02/03/wi…n-ecc-ram-kill-your-data/ and stop spreading such nonsense.

    • It's really that easy: if you love your data and hate silent bit rot, then choose ECC DRAM
    • If you love your data and hate silent bit rot, then choose a filesystem with data and metadata checksumming (e.g. ZFS, btrfs, ReFS, maybe even APFS in the future)

    Both are optional and do not depend on each other. So please stop spreading this 'ZFS is dangerous on non-ECC systems' BS.
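
    To see the checksumming at work, here's a minimal throwaway sketch (file-backed pool, ZFS on Linux assumed installed; run as root in a test VM only):

        # Create a pool on a file-backed vdev and fill it with some data
        truncate -s 512M /tmp/vdev.img
        zpool create demo /tmp/vdev.img
        dd if=/dev/urandom of=/demo/testfile bs=1M count=256
        # Simulate bit rot: export, corrupt bytes past the front labels, re-import
        zpool export demo
        dd if=/dev/urandom of=/tmp/vdev.img bs=1M count=4 seek=128 conv=notrunc
        zpool import -d /tmp demo
        # A scrub now surfaces the damage as checksum errors
        zpool scrub demo
        zpool status -v demo

    With a single vdev there is no redundancy to repair from, but ZFS still detects the corruption and names the affected file -- no ECC involved.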

  • the tech in USB drives

    ...is close to irrelevant compared to the real challenges with any flash-memory-based product: fake/counterfeit flash is the main problem, and understanding write amplification is the second.


    The majority of users are still unaware that counterfeit flash is such a problem, and therefore don't test for fake flash directly after purchase (a quick test recipe is below). That's where the majority of 'flash is unreliable' reports originate. This is not a technical problem but ignorance and misleading 'common knowledge' (just like the 'ZFS without ECC RAM is dangerous' BS).
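
    One common way to test a new drive is the f3 suite (https://github.com/AltraMayor/f3); the mount point and device name below are placeholders for your own setup:

        # Non-destructive: fill the mounted drive with test data, then verify it
        f3write /media/usb
        f3read /media/usb
        # Or destructive but much faster, against the raw device (erases all data!)
        f3probe --destructive --time-ops /dev/sdX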


    With a USB pendrive advertised as 16GB we can assume a TBW rating (terabytes written) of about 8 TBW, assuming the worst technology possible (crappy controller, crappy flash cells). If this 16GB drive is a counterfeit with just 2GB of real capacity, the game is over after writing only 2GB to the disk. The TBW of such a counterfeit drive is then 0.002 instead of 8. That's 4000 times lower, or in other words: this counterfeit flash drive will fail 4000 times earlier than a regular crappy 16GB pendrive.
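
    As a quick sanity check of that ratio:

        awk 'BEGIN {
            genuine_tbw = 8        # worst-case genuine 16GB drive, in TB written
            fake_tbw    = 0.002    # counterfeit with only 2GB of real flash
            printf "fails %.0fx earlier\n", genuine_tbw / fake_tbw
        }'   # prints: fails 4000x earlier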


    This is the real problem, but widespread ignorance prevents things from getting better. Nothing will change until every install tutorial mentions this problem and the tools to check for it.


    As for write amplification, everyone should be aware of how it affects flash memory, but again most people aren't (and compensate with platitudes like 'SSDs are better than USB thumb drives'). If you activate e.g. monitoring on your OMV or FreeNAS box and the round-robin databases are stored on flash media, better check their write pattern immediately; a quick way to do that is sketched below (talking about the flashmemory plugin won't solve this problem -- only understanding will).
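
    For example (the device name sda is a placeholder; iostat comes from the sysstat package):

        # Watch megabytes written per 60-second interval on the system drive
        iostat -dm sda 60
        # Compare the drive's lifetime write counter over time via SMART
        # (attribute name varies by vendor: Total_LBAs_Written,
        # Host_Writes_32MiB, NVMe 'Data Units Written', ...)
        smartctl -A /dev/sda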

  • Quote from tkaiser
    • It's really that easy: if you love your data and hate silent bit rot, then choose ECC DRAM
    • If you love your data and hate silent bit rot, then choose a filesystem with data and metadata checksumming (e.g. ZFS, btrfs, ReFS, maybe even APFS in the future)

    Both are optional and do not depend on each other. So please stop spreading this 'ZFS is dangerous on non-ECC systems' BS.

    Here's what the FreeNAS docs say:


    http://doc.freenas.org/11/intro.html#ram


    "If the hardware supports it, install ECC RAM. While more expensive, ECC RAM is highly recommended as it prevents in-flight corruption of data before the error-correcting properties of ZFS come into play, thus providing consistency for the checksumming and parity calculations performed by ZFS. If your data is important, use ECC RAM. This Case Study describes the risks associated with memory corruption."



    Perhaps it is no more dangerous than any other memory/filesystem configuration, but I really don't care; I still have no plans to use it. Memory type aside, ZFS is not for me.
