Search Results

Search results 1-9 of 9.

  • One last post to close out the thread. If you are thinking of using SnapRAID as a data store for VM storage, don't. SnapRAID is for files that don't change much, and VM files will be changing all the time. I decided to go with ZFS instead. In doing so, I've been able to drop mergerfs, as the zpool is now presented as one filesystem. In this configuration I get speeds of 140-150 MB/s over NFS. It's a much better solution. Cheers.
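
    A rough sketch of the replacement setup described above; the pool name, raidz layout, and dataset are assumptions, not the poster's actual commands:

      # Hypothetical pool layout; substitute real device names
      zpool create tank raidz1 sda sdb sdc sdd
      zfs create tank/vmstore
      # Export the dataset over NFS to the local subnet (exportfs-style options)
      zfs set sharenfs="rw=@192.168.1.0/24" tank/vmstore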

  • Finally! Following up on my last post, I decided to go from 4 vCPUs down to 1 vCPU and BINGO!! I now get a sustained 112 MB/s on NFS writes.

  • I have noticed that mergerfs CPU usage is around 50% during the NFS file copy and only 25% during the SMB file copy. During the SMB copy, smbd is at about 25% as well, so I'm not sure whether these numbers are a result of the CPU being virtualized within ESXi or whether NFS file copies are simply more mergerfs-intensive.
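
    A simple way to compare per-process CPU during the two copies (assumes the sysstat package is installed; the process names are the ones mentioned above):

      pidstat -u -p $(pgrep -d, -x mergerfs) 1   # mergerfs CPU, sampled every second
      pidstat -u -p $(pgrep -d, -x smbd) 1       # smbd CPU during the SMB copy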

  • NFS share access denied by server

    siddhartha - - NFS

    Post

    Mount as NFS3 using: sudo mount 192.168.1.71:/export/YAYAYA /home/pi/yayaya
    Mount as NFS4 using: sudo mount 192.168.1.71:/YAYAYA /home/pi/yayaya
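
    The two paths differ because NFSv4 mounts are relative to the server's pseudo-root. A sketch of an /etc/exports that would yield both mount paths above (the options and subnet are assumptions):

      /export          192.168.1.0/24(ro,fsid=0,root_squash,no_subtree_check)
      /export/YAYAYA   192.168.1.0/24(rw,root_squash,no_subtree_check)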

  • Error Adding Shares

    siddhartha - - NFS

    Post

    Try once more, then immediately SSH into the box and run:
      systemctl status monit.service
      journalctl -xn

  • OMV3 NFS is not running

    siddhartha - - NFS

    Post

    I had this problem as well. This fixed it for me:
    1. Check whether you have the file /sbin/start-stop-daemon.REAL
    2. If you do, run: mv /sbin/start-stop-daemon.REAL /sbin/start-stop-daemon
    Supposedly this should happen after the OS installation, but for some reason it is not triggered.
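
    The two steps above as one guarded command (a minimal sketch; run as root):

      if [ -f /sbin/start-stop-daemon.REAL ]; then
          mv /sbin/start-stop-daemon.REAL /sbin/start-stop-daemon
      fi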

  • I've made some progress based on the high backlog wait times. I found the following tunable: Source Code (1 line) This increases my speed by almost 25%, bringing it up to around 55-60 MB/s. I'm not sure why this works, since according to this article these values (even the udp one?) are supposed to be managed dynamically by the server. mountstats: Source Code (4 lines)
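
    The tunable itself is hidden behind the "Source Code (1 line)" placeholder, so it isn't visible in this excerpt. One knob commonly associated with high RPC backlog wait is the sunrpc slot table size; the hidden line may have been something along these lines, but that is an assumption, not the poster's confirmed setting:

      # Assumed, not confirmed from the collapsed snippet: enlarge the client-side
      # RPC slot table (a udp_slot_table_entries counterpart also exists)
      echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries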

  • I'm working my way through everything I can find on troubleshooting performance, and I think I might have found the culprit. After reading and writing a 1 GB file, the output of mountstats gives me this: Source Code (56 lines) Notice the lines: Source Code (4 lines) A backlog wait of 4610 seems excessively high. I'm looking at ways to lower it, but I'm not finding much. I found this article, but I don't have a subscription.
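
    A rough reconstruction of the test described above (the mount point and file name are placeholders):

      dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 conv=fdatasync   # write a 1 GB file
      dd if=/mnt/nfs/testfile of=/dev/null bs=1M                             # read it back
      mountstats /mnt/nfs | grep -A4 'WRITE:'                                # per-op stats, incl. backlog wait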

  • I'm having a nightmare of a time here. I have an ESXi 6.5 all-in-one with one VM for my media and one VM for my OMV. It's all running on a J3455B-ITX with 16 GB of RAM and four 3 TB WD Reds through an LSI 9211-8i. Since the LSI HBA (flashed to IT mode) is passed through to the OMV VM, I have 4 GB of RAM dedicated to it. Both VMs are set to use 4 vCPUs of the quad core. The MTU is set to 9000 on every NIC in ESXi as well as in the interface settings for each VM OS (both Debian). The drives are s…
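
    With MTU 9000 set everywhere, it is worth confirming that jumbo frames actually pass unfragmented between the two VMs; the target address below is the server IP from the mount example earlier, and the payload size accounts for the 28 bytes of ICMP/IP headers:

      ping -M do -s 8972 -c 4 192.168.1.71   # 8972 + 28 header bytes = 9000-byte packet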