Mounting physical drives on ESXi

    • Mounting physical drives on ESXi

      Hi,
      I'm a complete newbie to ESXI.

      I had OMV 2.0 on a bare-metal N54L with 4 drives (under SnapRAID and mergerfs), and I'm planning to upgrade to OMV 3.0 as a VM and install ESXi on the N54L. So far so good.

      Now comes the question of mounting the physical drives in the OMV VM. ESXi displays them but OMV doesn't. I've read about converting the drives to VMDK datastores, but I don't want to lose the data, and I really want to keep them as plain ext4 drives for portability.

      So how do you guys mount physical drives into an OMV VM under ESXi? Am I missing something obvious here?
    • You have to pass the drive through by creating a VMDK file pointing at the physical device. I did this on my ESXi box for a lot of drives. Read this. Alternatively, you can pass the SATA controller through to the virtual machine from the vSphere Client.
      omv 4.1.14 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Thanks, yeah, I found that out. So I have added my 4 drives as passthrough VMDK files in the ESXi datastore.

      Now, when I try to add them to the VM, I can only add two. The others are greyed out in the UI:
      - Hitachi 1TB: no go
      - WD Green 2TB: OK
      - WD Green 3TB: no go
      - Toshiba 3TB: OK

      Could it be that some drives are supported and others not? That would be strange, because the two WD Greens are behaving differently although they have pretty much the same firmware.

      Can you explain how to pass the SATA controller through?
    • You must have created the vmdk incorrectly. It is very picky about the device label you use.

      (Assuming you are using ESXi 6.0) If you go to the Configuration tab for the host and pick Advanced Settings (DirectPath I/O Configuration), you can configure the passthrough. Without being on the machine, I can't really tell you more.
    • ryecoaaron wrote:

      You must have created the vmdk incorrectly. It is very picky about the device label you use.
      The vmdk was properly created. I even checked it with vmkfstools '-q' and '-x check' options. I guess there's something that makes those drives incompatible.

      As for the DirectPath I/O settings, I get "host does not support passthrough configuration". The HP MicroServer N54L has an AMD CPU with AMD-V, which is supposed to be the equivalent of VT-x, but I guess it doesn't support an IOMMU (AMD-Vi, the equivalent of VT-d)... so I guess I'm screwed.

      I'm amazed, because I thought that N54L + ESXi + OMV was a common setup. I guess I'm going back to a bare-metal OMV install...
    • Check into Proxmox. It is what I use at home. Passing through an individual drive is much easier.
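      On Proxmox, passing an individual drive through amounts to one `qm set` call referencing the drive's stable `/dev/disk/by-id/` path. The sketch below is a dry run: the VM ID (100), the SCSI slot, and the by-id path are placeholders, so it only prints the command rather than executing it.

      ```shell
      #!/bin/sh
      # Dry-run sketch of Proxmox single-drive passthrough.
      # VMID and DISK are placeholder values, not a real VM or drive.
      VMID=100
      DISK="/dev/disk/by-id/ata-WDC_WD30EZRX-EXAMPLE"

      # Print the command that would attach the drive to the VM as scsi1:
      echo "qm set $VMID --scsi1 $DISK"
      ```

      On a real host you would first run `ls -l /dev/disk/by-id/` to find the correct identifier, then run the printed `qm set` command as root.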
    • I always use virtio. Never worried about serials or smart passthrough. So, can't help there.
    • You can also use RDM (Raw Device Mapping). That's how I made my drives available to OMV.
      That's the way to keep your files on the corresponding drive.

      It is really simple:

      1. Find the name of the disk:
      ls -l /vmfs/devices/disks

      2. Look for an existing datastore (for example the one you've installed ESXi on)
      ls -l /vmfs/volumes
      You can also look it up with vSphere Client

      3. Add the RDM to an existing datastore (for example the one you've installed ESXi on)
      You can physically map the drive with the following command:
      vmkfstools -z /vmfs/devices/disks/<DiskName> /vmfs/volumes/<Datastore>/<Folder>/<Disk>.vmdk

      Here are two links to some tutorials for additional information:
      RAW Device Mapping
      HowTo create an RDM Mapping File via CLI
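      The three steps above can be sketched as a dry run. The disk identifier and datastore name below are placeholders (a real ESXi disk name usually looks like `t10.ATA_____...`), so each command is only printed, not executed; run them for real in an ESXi shell.

      ```shell
      #!/bin/sh
      # Dry-run sketch of the RDM steps; DISK and DATASTORE are placeholders.
      DISK="t10.ATA_____WDC_WD20EARX_EXAMPLE"
      DATASTORE="datastore1"

      # Step 1: list physical disks to find the device name
      echo "ls -l /vmfs/devices/disks"
      # Step 2: list datastores to pick one for the mapping file
      echo "ls -l /vmfs/volumes"
      # Step 3: create the physical-mode RDM mapping file on the datastore
      echo "vmkfstools -z /vmfs/devices/disks/$DISK /vmfs/volumes/$DATASTORE/rdm/$DISK.vmdk"
      ```

      After creating the mapping file, add it to the OMV VM as an existing hard disk in the vSphere Client; the VM then talks to the physical drive, and the ext4 data stays untouched.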
