Not able to set up Apple Filing (AFP) on OMV

    • OMV 4.x
    • Not able to set up Apple Filing (AFP) on OMV

      I'm currently running OMV 4.x on my Odroid HC2, using the image I downloaded from SourceForge. Everything works fine and I can use Samba to transfer files. My Odroid HC2 is connected to a Netgear 8-port gigabit switch, and I can transfer large files (e.g. 2 GB) at about 45~50 MB/s from my MacBook Pro 2011 (which also has a Gigabit Ethernet port).

      I have seen YouTube videos and the Odroid/Hardkernel wiki where they transfer at about 80~90 MB/s. Since I'm using a Mac, I thought I should try enabling Apple Filing (AFP) for better transfer speed, which I did in the OMV interface, setting up the user and access rights just like I did for Samba. In my Mac's Finder I can see the network share show up, but when I try to connect it doesn't accept my username/password.

      I'm not sure what I did wrong. Do I need to set anything special on my Mac machine? Note that I have no trouble sending files over SMB. Any hints are very much appreciated.

      Regards,

      Aatush.
    • awsmness wrote:

      Do I need to set anything special on my Mac machine?

      No, works out of the box.

      You should only take care that you never access the same shares with SMB and AFP at the same time. So define AFP shares that use different paths than the SMB share you already use (for the reasons why, search the forum for 'encoding metadata'). A quick sketch of what that can look like on disk follows below.
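
      Just as an illustration (the folder names and disk label below are made up, and in practice you create the shared folders in the OMV web UI): on an OMV 4 install the data disk usually ends up mounted under /srv/dev-disk-by-label-<LABEL>, so two separate shared folders on the same partition could simply look like this:

      Source Code

      # hypothetical layout: one partition, two separate shared folders
      # one folder is exported via SMB only, the other via AFP only
      ls -ld /srv/dev-disk-by-label-data/smb-share
      ls -ld /srv/dev-disk-by-label-data/afp-share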


    • tkaiser wrote:

      never access the same shares with SMB and AFP at the same time. So define AFP shares that use different paths than the SMB share you already use
      Unfortunately no luck; I tried exactly that. I have an entire 4 TB Seagate HDD with only one partition on it. I use SMB on a shared folder named 'Ironwolf', and I tried another shared folder named 'OSx' for AFP. I believe you are not telling me to create two different partitions for SMB and AFP?

      Here are some screenshots of my system.

      Images
      • shared_settings2.jpg
      • users_right.jpg
      • connect_fail.jpg
    • awsmness wrote:

      I believe you are not telling me to create two different partitions for SMB and AFP?

      Nope, a different shared folder is sufficient. But you set the quota to 0, which will prevent any data from ever being written, and I don't know whether the volume password feature is supported by OS X (the last time I was involved in AFP/Netatalk development was over 10 years ago, so I've forgotten a lot of things).

      In other words: do not set a password, leave the quota blank, and try again please.
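
      In case it helps to verify over SSH what OMV actually wrote out: assuming OMV 4's Netatalk 3 layout, the AFP share definitions live in /etc/netatalk/afp.conf, so something like the following (with 'OSx' standing in for your share name) shows the generated volume section:

      Source Code

      # show the volume section OMV generated for the AFP share
      # 'OSx' is just the share name used in this thread -- adjust as needed
      grep -A 10 '^\[OSx\]' /etc/netatalk/afp.conf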
    • tkaiser wrote:

      do not set a password, leave the quota blank, and try again please
      That's it. Setting the volume password was the problem. The disk quota description says keeping the quota value at 0 will allow Time Machine to use the entire disk.

      A quick related question: I see both SMB and AFP transferring at around the same speed, in my case 45~50 MB/s over a gigabit ethernet connection from my system to the OMV NAS. Is this due to the Odroid HC2's performance?

      Thanks again for all your help. :)
    • awsmness wrote:

      in my case 45~50 MB/s over a gigabit ethernet connection from my system to the OMV NAS
      I always got almost twice as much. With an almost full 2.5" HDD, 40-50 MB/s could be a reasonable value due to 'Zone Bit Recording' (ZBR), but with a 3.5" HDD this seems way too low.

      Since you're using an Armbian-based OMV install, you could log in through SSH and test your HDD:

      Source Code

      # change into the directory backing your shared folder ('$your-share' is a placeholder)
      cd /srv/$your-share
      # sequential write and read of a 100 MB file in 1 MB records, using direct I/O
      iozone -e -I -a -s 100M -r 1024k -i 0 -i 1
      This will give you an idea of the HDD's true performance (ZBR included -- empty HDDs are always faster than those with data on them!). If these numbers look sufficient (100+ MB/s in both directions), then check network performance with iperf3 (pre-installed on OMV; on your Mac use Homebrew and then 'brew install iperf3'), and if this also looks OK I would do a benchmark with HELIOS LanTest, since this tool generates test data from memory so the client's storage performance does not influence the numbers.
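
      As a minimal sketch (the hostname is a placeholder for your HC2's address or IP):

      Source Code

      # on the HC2 (via SSH): start the iperf3 server
      iperf3 -s

      # on the Mac: measure throughput in both directions
      iperf3 -c odroid.local          # Mac -> NAS
      iperf3 -c odroid.local -R       # NAS -> Mac (reverse mode)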
    • tkaiser wrote:

      but with 3.5" HDD this seems way too low.
      Yes, I'm using an Odroid HC2 which has a 3.5-inch Seagate NAS drive (IronWolf, 5900 RPM) connected to its SATA port. Both my Odroid NAS and my Mac are connected to an 8-port Netgear gigabit switch. I wonder if my router (which isn't gigabit) is causing any issue?

      However, I went ahead and checked everything you suggested. Test results seem fine as far as iozone and iperf3 show. I also did an rsync transfer, which returned 46 MB/s for a 1.7 GB movie file. HELIOS LanTest suggests ~72 MB/s write speed.

      ** Also note that the CPU and HDD temperatures look fine to me.

      Totally confused by all these varying numbers. :huh:

      Screenshot attached.

      Thanks.
      Images
      • iozone.jpg
      • samba-nas.jpg
      • iperf_rsync.jpg
      • hc2.jpg
    • awsmness wrote:

      Totally confused by all these varying numbers
      Storage performance is fine, network performance is fine. The LanTest numbers are lower than expected due to the 'Gigabit Ethernet' settings (which use a 128 KB block size -- try again with the 10 GbE settings and the numbers will be higher; Finder will normally show even higher numbers again -- for the reasons why see helios.de/web/EN/support/TI/157.html).

      Well, copying between A and B means bottlenecks might exist on both sides so better check your local storage on the Mac too.
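
      One quick way to do that from Terminal (just an illustration: the file name and size are arbitrary, and the read-back may come from RAM cache if the file is too small) would be a simple dd test on the drive you copy from:

      Source Code

      # write a 2 GB test file on the drive you copy from (here: the current directory)
      dd if=/dev/zero of=testfile bs=1m count=2048

      # read it back; may be served from cache if the file fits in RAM
      dd if=testfile of=/dev/null bs=1m

      rm testfile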

      Since you're using Samba: the OMV images for ARM devices contain some tweaked Samba settings to improve performance. On the other hand, the Internet is full of thousands of tutorials with differing settings, so in case you 'tuned' Samba yourself, performance might be inferior. Then again, I usually did all my quick benchmarks using Netatalk/AFP since I'm still more familiar with its inner workings... so maybe there is an issue with the current Samba settings. LanTest numbers with 10 GbE settings against both an SMB and an AFP share would be great.
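
      If you want to compare the shipped settings with whatever the tutorials suggest, you can dump the Samba configuration that is actually in effect over SSH (assuming the standard Samba tools are present, which they are on an OMV install with SMB enabled):

      Source Code

      # dump the parsed Samba configuration (only non-default settings)
      testparm -s
      # same, but including all default values
      testparm -sv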
    • awsmness wrote:

      I also did an rsync transfer

      Rsync in normal mode between two hosts cannot be used to measure NAS performance since, usually,

      • encryption is involved (you could use the 'null' cipher, but that requires building the software stack yourself -- usually the fastest cipher you get is arcfour)
      • on ARM devices and even older x86 gear the rsync performance can be the result of a single CPU core maxing out (single threaded)
      • the strongest cipher both sides support is negotiated, so results are unpredictable anyway (if you test against a 'hardened' Linux distro, your rsync/ssh throughput will be way lower than against distros that do not enforce only strong ciphers). A way to check and influence the negotiated cipher is sketched below.
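
      Just to illustrate (host name, user and file are placeholders, and which ciphers are actually available depends on the OpenSSH builds on both ends): this shows how to see which cipher gets negotiated and how to force a lighter one for a single rsync run:

      Source Code

      # show the cipher OpenSSH negotiates with the NAS (hostname is a placeholder)
      ssh -v root@odroid.local exit 2>&1 | grep -i cipher

      # repeat the rsync copy while forcing a specific, lighter cipher
      rsync -a --progress -e "ssh -c aes128-ctr" bigfile.mkv root@odroid.local:/srv/your-share/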


      Old insights, still valid: linux-sunxi.org/Sunxi_devices_…F_Identifying_bottlenecks


    • @tkaiser Sorry, I went for a break this weekend. Now I have some results after testing.

      Also, I installed a fresh copy of OMV for testing, so there are no tweaks to the Samba settings other than what came straight out of OMV.

      tkaiser wrote:

      copying between A and B means bottlenecks might exist on both sides so better check your local storage on the Mac too
      I've checked that; the findings are attached.


      tkaiser wrote:

      LanTest numbers with 10 GbE settings against both an SMB and an AFP share would be great.
      Again, please check the attachments.

      Interestingly, I did a file transfer test (3 GB) to the SMB share: from my local SSD it took around 29 sec, while the same file from my local HDD took about 1.14 sec.

      I believe my Mac's HDD, which is my storage drive, is causing the slow performance.
      Images
      • 10GEthernet_HDD.jpg
      • 10GEthernet_SSD.jpg
      • AFP Performance.jpg
      • SMB Performance.jpg
    • awsmness wrote:

      from my local SSD it took around 29 sec, while the same file from my local HDD took about 1.14 sec
      This must be some caching effect.

      Anyway, you identified the root cause: it's your local HDD that is the bottleneck. If this is a full 2.5" HDD then these numbers are 'fine' (or 'as expected'), but in case this is a 3.5" HDD or an almost empty 2.5" one, these numbers are alarmingly low and I would prepare for your local HDD dying soon.

      The LanTest numbers show that the OMV performance of your HC2 is excellent.