OMV6, NFS, ESXi 7 - best practice document?

  • Guys,


    Just upgrading my home infrastructure - I have 3 x ESXi 7 servers in a cluster, each with 10GbE back to a Brocade 6610 switch.


    Jumbo frames are enabled.

    The NFS data is shared over a dedicated VLAN that is locked down on the switch to allow only specific MAC addresses on each port, so I am happy with the security.


    I currently have a CentOS 8 server with an NFS share set up on it, and the ESXi servers are accessing it fine.


    This is what the exports file looks like on the CentOS machine, as a point of comparison:


    /Storage/Virtual-Machines 192.168.202.0/255.255.255.0(no_root_squash,async,rw)

    /Storage/Virtual-Machines 172.16.200.0/255.255.255.0(no_root_squash,async,rw)
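
    For comparison's sake, a quick way to dump the export options actually in effect on either box is exportfs on the server, or showmount from a client on the storage VLAN. This is a generic sketch, nothing OMV-specific:

    # On the NFS server (CentOS or OMV): list each export with its effective options
    exportfs -v

    # From any client on the 200 VLAN: list what the OMV box is exporting
    showmount -e 172.16.200.26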



    I am now attempting to create a new share on a new OMV6 box that will eventually replace the CentOS server - the first step is to create a shared datastore for virtual machines.


    I am using a mixture of genuine Intel 520 and 540 dual-port cards in the systems.


    The VLAN for NFS is VLAN 200.


    OMV IP address is 172.16.200.26

    Host IP address is 172.16.200.9 (I have 3 hosts, with .7, .8 and .9 as the addresses).


    I have created a new RAID1 disk set on OMV, then created an EXT4 filesystem.


    I have then created a shared folder


    And have then assigned the following



    I have then enabled NFS and created an NFS share as such



    I am able to mount this share on the ESXi servers, which have a dedicated 10GbE adapter on VLAN 200 for storage. However, when I try to drill down into the filesystem of the datastore to find stored images etc., I get a variety of errors, all variations of timeouts waiting for the filesystem to respond.
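
    For anyone chasing the same symptom, the ESXi side can also be checked from the host shell; a minimal sketch using standard esxcli commands and log locations, nothing specific to this setup:

    # On the ESXi host: is the NFS datastore still listed as accessible?
    esxcli storage nfs list

    # Watch for NFS timeout / APD messages while browsing the datastore
    tail -f /var/log/vmkernel.log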


    Does anyone have any idea what is going on?


    When I look at the exports file on the OMV box, it appears to be written correctly with the options I have added.


    Craig

    • Official Post

    And have then assigned the following

    Why are you using ACLs? Are you connecting from ESXi via NFSv4?


    However, when I try to drill down into the filesystem of the datastore to find stored images etc., I get a variety of errors, all variations of timeouts waiting for the filesystem to respond.


    Does anyone have any idea what is going on?

    Most likely a permissions issue, but ACLs are not the correct way to fix that. You are going to have to open up permissions on the directory.
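
    For NFSv3 from ESXi, "opening up permissions" usually just means making the shared folder world-writable (or owned by nobody:nogroup). A sketch only - the path below is a placeholder for wherever the OMV shared folder actually lives:

    # On the OMV box (example path - use the real mount point of your shared folder)
    chown -R nobody:nogroup /srv/dev-disk-by-uuid-XXXX/Virtual-Machines
    chmod -R 777 /srv/dev-disk-by-uuid-XXXX/Virtual-Machines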

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Why are you using ACLs? Are you connecting from ESXi via NFSv4?


    I started without ACLs and then gradually attacked each stage of the problem, trying to isolate it.


    I think I solved it last night - but I will work backwards today and confirm.


    With the Brocade/Ruckus switches, enabling jumbo frames for the first time requires a reload. I do not believe this had been done on this switch, so I did it last night and everything started working. However, I have made so many other changes (ACLs, additional load parameters, etc.) that it is hard to say for certain which one fixed it.
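
    For anyone following along, this is roughly what the jumbo-frame change and the end-to-end check look like; the vSwitch and vmkernel names (vSwitch1, vmk1) are placeholders, not taken from this setup:

    ! On the Brocade/Ruckus FastIron switch - jumbo is a global setting and only
    ! takes effect after the switch is reloaded
    configure terminal
    jumbo
    write memory
    reload

    # On each ESXi host - raise the MTU on the storage vSwitch and vmkernel port
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    esxcli network ip interface set -i vmk1 -m 9000

    # Verify end to end with non-fragmenting jumbo pings (8972 bytes payload + headers = 9000)
    vmkping -I vmk1 -d -s 8972 172.16.200.26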


    So I will blow away the shares and their permissions today, try again from the start, and document it for anyone else in the same boat.


    Craig

    • Official Post

    try again from the start and document it for anyone else in the same boat.

    Just skip the ACLs. They aren't needed. I ran OMV as an NFS server for a three-node ESXi cluster and it worked great. I had 10GbE networking but not enough ports to separate traffic; separating it wasn't needed anyway. I didn't run jumbo frames (we don't at work either). With NVMe storage, I could easily break 800 MB/s and never noticed any latency or IOPS issues across 30+ VMs (including a nested ESXi cluster).


  • OK, sounds good - so if I recreate the file-level shares etc. and leave the default options for NFS, this should work out of the box?


    Will test it out before I put it into production so I can add to the collective wisdom pool.


    Thanks for taking the time to answer


    Craig

    • Official Post

    so if I recreate the file-level shares etc. and leave the default options for NFS, this should work out of the box?

    If you are using NFS v3 and create new shares that create new directories (or reset an existing folder with the resetperms plugin, with the ACLs box checked) to allow everyone to write, the default options (changed to read/write) will work. I would restrict the client to something like 192.168.1.1/32,192.168.1.3/32.
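
    On this setup the clients would be the three host addresses on the storage VLAN, so the generated exports entries would end up looking something like the sketch below. The share name and the exact option list are illustrative - OMV fills those in from the NFS share settings:

    # One export line per allowed ESXi host, /32 so nothing else on the VLAN can mount it
    /export/Virtual-Machines 172.16.200.7/32(rw,async,no_subtree_check,insecure)
    /export/Virtual-Machines 172.16.200.8/32(rw,async,no_subtree_check,insecure)
    /export/Virtual-Machines 172.16.200.9/32(rw,async,no_subtree_check,insecure)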


  • If you are using NFS v3 and create new shares that create new directories (or reset an existing folder with the resetperms plugin, with the ACLs box checked) to allow everyone to write, the default options (changed to read/write) will work. I would restrict the client to something like 192.168.1.1/32,192.168.1.3/32.

    Yep, it will be NFS 3 - I do not want to open up the NFSv4 can of worms just yet. Not too worried about security, as it is a port-based tagged VLAN and security is implemented on the switches at the MAC level per port - will update you as I progress.

  • OK, just to document this so it remains fresh in my mind:


    1) Unmounted the NFS datastore on the SSD from my test ESXi 7 host

    2) Deleted the NFS Share on OMV

    3) Deleted the Shared Folder on OMV

    4) Unmounted the Filesystem (EXT4)

    5) Deleted the mdadm array

    6) At the CLI, used hdparm Secure Erase to clean out the drives (CLI sketch after this list)


    7) Created a new mdadm mirror set

    8) Created a new ext4 filesystem and mounted it

    9) Created a new shared folder with the default permissions changed to Everyone read/write



    10) Created an NFS share and changed the defaults as follows - changed the privilege from the default Read-only to Read/Write, and added async to the options



    11) Created a new datastore on ESXi 7 (through vSphere) - specified NFS 3 as the type, changed the default access to Read/Write, and mounted it


    12) Copied files from another datastore to this one using the datastore browser in ESXi - all worked OK!


    13) When I use WinSCP to look at the file permissions within the datastore (by pointing at the ESXi host) I see the following


    14) And then, to close the loop, I look at the filesystem on OMV using WinSCP


    and I can see that the user nobody has ownership
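
    For completeness, a rough sketch of the CLI equivalents of steps 6-8 and 11; the device names, share path and datastore label are placeholders, so adjust them before copying anything:

    # Step 6: hdparm Secure Erase (destroys everything on the disk - check the device name twice)
    hdparm --user-master u --security-set-pass p /dev/sdX
    hdparm --user-master u --security-erase p /dev/sdX

    # Steps 7-8: mdadm mirror and ext4 filesystem (the OMV web UI does the same under the hood)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
    mkfs.ext4 /dev/md0

    # Step 11: the same NFS 3 datastore mount, done from the ESXi shell instead of vSphere
    esxcli storage nfs add --host=172.16.200.26 --share=/export/Virtual-Machines --volume-name=OMV-VMs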



    So it all looks good now - thanks.


    I will put the issues down to needing to reload the Brocade switch after enabling jumbo frames.


    Craig

  • curto

    Added the label "solved".
