Hello to all,
I have the following setup:
OMV5 (fully up-to-date) on an ODROID XU4 ARM SoC by Hardkernel.
Original power adapter from Hardkernel that can deliver up to 4A when needed.
One USB 3.0 external HDD case that holds 2x 2.5" HDDs, connected to one of the USB 3.0 ports of the ODROID. The case uses its own power adapter, so the system is not burdened by it or by the disks themselves! The case is this: https://www.raidsonic.de/produ…x_en.php?we_objectID=3239
Samsung Evo Plus microSDXC 64GB U3 for the OS.
2x WD Blue 1TB 2.5" HDDs in a RAID 1 (mirroring) configuration with an EXT4 filesystem (configured through the OMV web portal, as OMV suggests).
I am sure most of you using OMV already know the ODROID, as Hardkernel recommends it as a NAS solution among other uses. They also have a pretty good wiki on making the whole project come alive, from the easy process of building the image up to more advanced stuff...
The overall experience so far (running 24/7 for over a year now!) is very smooth: I have never encountered disconnections, delays, random restarts or freezes, neither in my running services (SMB/DLNA/UrBackup/FTP/fail2ban/downloader) nor in the OMV server itself (no disk failures or unresponsiveness)!
The hardware monitoring shows that even when I stream media and/or read/write from/to the OMV box, there is plenty of CPU and memory headroom!
So the problem I have seems to be located specifically in the SMB protocol.
I have the ODROID connected to an L2 managed gigabit switch, and my Windows 10 PC (gigabit network adapter) is on another port of the same switch. All the cabling is CAT6, and the only issue is the read/write performance over SMB. I get only 40 MB/s, which is well below the actual potential of the hardware and the disks themselves! As you can see here https://wiki.odroid.com/odroid…as/eng/01_beginning#samba, on a gigabit network the performance should max out at about 90-100 MB/s, which is correct and acceptable!
I have already checked the disks' raw read/write speeds and I get something like 125 MB/s. Also, because I have the drives in a RAID 1 configuration over USB 3.0, I thought that maybe that was the problem, so I plugged a WD Green M.2 128GB SSD (NTFS) into the other USB 3.0 port and shared it over SMB as well. The read/write speeds were WORSE THAN THE RAID's: I got only 20 MB/s, which I can understand, as NTFS is not recommended in the first place (in Windows 10 the same SSD does about 500 MB/s read and 350 MB/s write, just to clarify). So I then reformatted it from NTFS to EXT4 (through the OMV web portal) AND THE SAME BAD PERFORMANCE as the RAID's came out (40 MB/s), which just doesn't seem right!
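For reference, the kind of raw sequential test I mean looks like this with plain dd. The file path here is just a placeholder; to test the actual array, point it at the RAID's mount point instead of /tmp:

```shell
# Sequential write: 256 MiB of zeros; conv=fsync forces a flush so the page
# cache doesn't inflate the reported speed
dd if=/dev/zero of=/tmp/omv_speedtest.bin bs=1M count=256 conv=fsync

# Sequential read back; for a true disk read, drop caches first as root:
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/tmp/omv_speedtest.bin of=/dev/null bs=1M

# Clean up the test file
rm /tmp/omv_speedtest.bin
```

dd prints the throughput on its last line, so local disk speed and SMB speed can be compared directly.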
So from all the above, the problem doesn't seem to be the network, the hardware, or RAID 1 performance on the slower HDDs, since the performance is pretty much the same on the much faster SSD. From my limited experience, the problem should be in the SMB protocol settings (maybe some tweaks are needed?).
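In case it helps the discussion: the kind of tweaks I mean would go into the OMV web UI under Services → SMB/CIFS → Advanced settings → "Extra options", which OMV appends to smb.conf. Below is a commonly suggested starting point I have seen around (not a guaranteed fix, and values may need tuning per setup):

```
# Commonly suggested smb.conf tuning options -- starting points only:
socket options = TCP_NODELAY IPTOS_LOWDELAY
use sendfile = yes
aio read size = 16384
aio write size = 16384
min receivefile size = 16384
```

I would be happy to test any combination of these (or others) if someone can confirm which ones actually matter on ARM boards.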
Can anyone please help with this? I have run out of ideas... and I really want to push my transfer speeds to the limits of my gigabit network!