Posts by kavejo

    Hello guys,

    Quick question for you.

I have had OMV running on Microsoft Azure for quite a while (3+ weeks).
I had absolutely no issues with it and switched between different VM sizes without any trouble.
With the network ACLs I can control the endpoints' exposure and allow access just to myself.

To do that, I created the VM in Hyper-V and then uploaded the data drives and the OS drive to an Azure storage account.

Suddenly, last weekend, I lost access to the VM; even after allowing all traffic I still cannot reach it on any port.
I have therefore downloaded the VHDs and created a Hyper-V VM; in that case everything works as it should.
This confirms the system is fine and the issue is specific to this VHD running on Azure.

I suspected a networking issue (such as the VM not getting a private IP from the Azure DHCP), so after downloading the VHD I looked into the /var/log directory hoping for hints.

    Here in the boot log I found:

    Tue Jun 2 09:58:05 2015: Cannot find device "eth0"
    Tue Jun 2 09:58:05 2015: Bind socket to interface: No such device
    Tue Jun 2 09:58:05 2015: Failed to bring up eth0.

    This explains why the public IP was reachable but the requests, once forwarded to the VM, were timing out.
    When running on Hyper-V, instead, eth0 is found and is brought up.

Now, given that in Azure there is no console access, I need to rely solely on logs to troubleshoot this further.

I'm wondering: is there any log that shows which devices are discovered during boot?
I can't rely on lshw, as I have no SSH or console access (given that no IP is assigned to OMV), and must rely on logs alone.
Perhaps the NIC is no longer called eth0 and is being identified as p4s1 or something else; if I manage to find this out, I may just add an entry in /etc/network/interfaces in Hyper-V and then re-upload the VHD.
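One thing I plan to check on the offline VHD: as I understand it, udev caches NIC names against MAC addresses in /etc/udev/rules.d/70-persistent-net.rules, so if Azure handed the VM a different MAC, eth0 may have been renamed. A sketch of what I'd grep for, run here against a simulated log excerpt (the log lines are my assumption of what Debian writes; on the real mounted VHD I'd point grep at its /var/log/kern.log):

```shell
# Simulated /var/log/kern.log excerpt (hypothetical): udev renames the NIC
# when its MAC no longer matches the cached 70-persistent-net.rules entry.
cat > /tmp/kern.log.sample <<'EOF'
Jun  2 09:58:01 omv kernel: hv_netvsc: device eth0 ready
Jun  2 09:58:02 omv kernel: udev: renamed network interface eth0 to eth1
EOF
# Same grep I'd run against the mounted VHD's kern.log:
grep -o 'renamed network interface eth0 to [a-z0-9]*' /tmp/kern.log.sample
```

If a rename like this shows up, deleting 70-persistent-net.rules before re-uploading should let udev regenerate it on the next boot.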

    Any help would be really appreciated.


Tried with 2 different Macs.

Unfortunately both had the same issue with Samba, AFP, SFTP and FTPS.
Both were running Mavericks installed via an in-place upgrade.

I have now re-imaged one of them and have a clean Mavericks installation.
I will probably try to perform the same test tomorrow.


    Hi @subzero79

    I finally managed to get FTP (and SFTP with key auth) working.

Possibly FTP was not working because, when I added the shares, no privileges were set.
This time, after restoring from a clean image, I first added the privileges on the shares, then added the shares to FTP and enabled the plugin.

    I have just re-attempted the operation (saving a 12 GB iPhoto library).
The first 10 GB (~6,000 files) were moved at an average of 35 MB/s (about a quarter of the max speed), and the last 2 GB (~20,000 files) are now being transferred at 200 KB/s.

With another folder containing a different set of smaller files (MP3s and FLACs), the average speed was about 45 MB/s.

    I have performed the same tests via SFTP.
In this case the transfers were ~5 MB/s faster than FTPS, peaking at 50 MB/s.

Repeated the same (MP3 and FLAC) test from Windows, via both SFTP and FTPS.
This again maxed out the Gigabit connection, averaging 120 MB/s and peaking at 125 MB/s.

I think this pretty much rules out protocol-based issues, as well as OMV itself, and leaves me with the sad finding that the issue must be with the Mac.

    I think I will workaround the issue as follows:
- Copy the files to back up onto a USB3/Thunderbolt SSD formatted as FAT32
    - Connect the SSD to the Windows client
    - Sync all the data to OMV from Windows

    Alternatively I think I could sync the drive directly via the USB backup plugin.
The only downside is that if I use that plugin (or if I mount any other drive), the changes I made to the fstab (adding noatime,nodiratime,discard) get lost.
Do you think I can modify /etc/openmediavault/config.xml and add these parameters there, or is there a chance that a future release or update would overwrite the change anyway?
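For reference, from a quick look at my config.xml, the mount options seem to live under fstab/mntent nodes; the element names below are my reading of the file and may not be exact:

```xml
<!-- Sketch of the relevant fragment of /etc/openmediavault/config.xml
     (element names are my assumption; values illustrative) -->
<fstab>
  <mntent>
    <fsname>/dev/disk/by-uuid/...</fsname>
    <dir>/media/...</dir>
    <type>ext4</type>
    <opts>defaults,noatime,nodiratime,discard</opts>
  </mntent>
</fstab>
```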

    Thank you,

I have just performed some tests using the following configurations:

    Transfer from Mac to OMV over 802.11n (450 Mbps):
    Write Speed = 3.2077040 Mbps
    Read Speed = 128.5908640 Mbps

    Transfer from Mac to OMV over LAN (1 Gbps):
    Write Speed = 27.7711200 Mbps
    Read Speed = 527.7187760 Mbps

    Transfer from Mac connected via LAN (1 Gbps) to Windows connected over 802.11ac (1.3 Gbps):
    Write Speed = 26.9346200 Mbps
    Read Speed = 525.8437470 Mbps

As we know, from Windows to OMV the performance is extremely good instead.
The same test was repeated connecting from the Mac to OMV over both Samba and Apple Filing, with the same results.
Of course, when connecting to the Windows client I only tested Samba.

I'm now inclined to think this may be an issue with the Mac's connectivity :-(

    I will try with another Mac soon.


@subzero79, I'll check the ProFTPD config file this evening and will let you know if I can find the shares.

@tom_tav, I will try the software you've suggested; anyway, all the transfers were performed between Samsung 850 Pro SSDs (on the Mac and Windows clients as well as on OMV).

@subzero79, yes, I tried adding permissions (R/W) for the user I logged on as. No ACLs are in place and, at file system level, the files have a mask of 775.
    I will retry this evening, just to double check.

@tom_tav, yes, that was the first thing I tried, as suggested in other threads on this forum.
Unfortunately it didn't make any difference to the transfer speed.

Given that AFP was performing as badly as Samba on this Mac, I will also try this evening with another Mac client, just to rule out issues with this particular one.


Hey @subzero79, I've just tried the 2 paths you gave me but I must be doing something wrong.

I have installed openmediavault-netatalk, gone to Apple Filing, added the shares and attempted to connect via Finder to afp://omv.
This worked fine and the share was mounted on the Mac; unfortunately, though, the speed was the same as achieved via Samba.

I have tried to move exactly the same file to the same destination from Windows, which gets a solid 110-115 MB/s.

I have then enabled FTP. This was pretty straightforward.
I set it to use explicit SSL and attempted to connect using FileZilla; this connected just fine, but in the directory listing I was only able to see / and nothing else. So I haven't managed to test via FTP.

Without taking away any more of your time, may I kindly ask you for directions on how to share the folders via FTP, please?

    Thank you,

    Thanks @subzero79!

Yeah, given that the switch I'm using, the NAS NICs and the laptops' NICs are all Gigabit Ethernet, I will stick to that. It should theoretically be capable of 125 MB/s anyway.
The 2 RAID devices I'm using are made of NAS HDDs and therefore not capable of high speeds anyway, unlike SSDs; as long as the transfers are in the 80-100 MB/s ballpark I'll be happy.

I will try AFP first, as that is probably easier to integrate with Finder.
If the transfer rate is far from the 80-90 MB/s mark, I will give FTP a try.

Either way, I have an image of the OS drive, so I can revert the changes quite quickly once I've made up my mind, and just add the plugins I need :-)

    Thanks again,

    Hi @subzero79,

    Thank you again for the tip, really appreciate that.

I will try to install openmediavault-netatalk so as to use AFP, if that performs better.
In your experience, do you get better performance out of FTP, AFP or NFS when transferring from a Mac to OMV?

A side question: you suggested getting a Thunderbolt-to-Ethernet adapter; would this perform better than the built-in Gigabit NIC?
As far as I remember, even though the Thunderbolt port supports 10 Gbps, the adapter is only 1 Gbps (unless things have changed lately).


    @subzero79, thank you for your reply.

Is FTP then the only way to get fairly good performance on a Mac?

I don't mind disabling Samba and relying on NFS if that provides good performance.
Or even relying on Samba and AFP (if the 2 plug-ins can coexist on the same NAS).

Let's say the main thing is to have something that works out of the box, without needing to add 3rd-party clients (such as FileZilla for FTP) and with the fewest plugins possible.


To add some information, the drives I'm using for the tests (in particular the one named R-1) exhibit the following speeds:

    dd if=/dev/zero of=tempfile bs=1M count=1024
    SSD: 1073741824 bytes (1.1 GB) copied, 0.689193 s, 1.6 GB/s
    R-0: 1073741824 bytes (1.1 GB) copied, 0.679538 s, 1.6 GB/s
    R-1: 1073741824 bytes (1.1 GB) copied, 0.723264 s, 1.5 GB/s

    dd of=/dev/zero if=tempfile bs=1M count=1024
    SSD: 1073741824 bytes (1.1 GB) copied, 0.189237 s, 5.7 GB/s
    R-0: 1073741824 bytes (1.1 GB) copied, 0.189017 s, 5.7 GB/s
    R-1: 1073741824 bytes (1.1 GB) copied, 0.199969 s, 5.4 GB/s
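A caveat on my own numbers: the read figures above are almost certainly inflated by the Linux page cache, since tempfile had just been written. A sketch of a write test that forces the data to disk before reporting (small file size for illustration):

```shell
# conv=fdatasync makes dd flush to disk before reporting, so the figure
# reflects the device rather than the page cache; 64 MB for illustration.
dd if=/dev/zero of=/tmp/tempfile bs=1M count=64 conv=fdatasync
# For a fair read test, the cache would need dropping first (as root):
#   echo 3 > /proc/sys/vm/drop_caches
stat -c %s /tmp/tempfile   # confirms 64 MiB (67108864 bytes) were written
```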

The NIC (single-NIC configuration) shows an average of 960 Mbps during the transfer, and Windows shows an average transfer speed of 120 MB/s.

Unfortunately, it seems I can no longer create a bonded device; I can't understand why.
Anyway, as the speed with the original bond0 was equal (while transferring to/from Windows) to that of either eth0 or eth1, I take it the router does not support bonding.

    Hey guys :-)

    I need again your help.

I'm using OMV to back up some data from my clients (Win and Mac).

I'm currently running OMV on an HP MicroServer Gen8 with the Intel 1610T CPU and 16 GB of RAM.
I have a RAID 1 of two 3 TB WD Red drives which I'm using to store the data; I have tried transferring the data to a single SSD (Samsung 850 Pro) but see no difference in transfer speed. The drives are formatted as Ext4 and are mounted with the noatime and nodiratime options.
The 2 NICs (both Gigabit) are teamed, but I can reproduce the problem even when using a single NIC.

I'm experiencing very slow transfers from the Mac clients to OMV.
The other way around (OMV to Mac) works just fine. Transfers to Windows clients are fine too.
Looking at the NIC stats, I see transfers easily up to 300 Mbps when connected from a Windows client over WiFi, and up to 900 Mbps when using a cable (Cat 7 STP).

Anyway, with transfers from Mac OS X Mavericks to OMV, performance is really bad. It took around 4 hours to transfer 12 GB.

I have tried setting SMB version 2 and restarting the service (and the Mac); no difference.
I have also tried accessing the share via cifs://<omv-ip>/<share-name>; same issue.
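For reference, forcing SMB2 amounts to something like this in the SMB/CIFS "Extra options" field (smb.conf syntax; the tuning lines below are options I've seen suggested elsewhere, not ones verified to help here):

```
max protocol = SMB2
# tuning options sometimes suggested for slow clients (unverified):
use sendfile = yes
aio read size = 16384
aio write size = 16384
```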

This issue only affects the Mac clients when connecting over SMB/CIFS, as far as I can see.
Just to rule out load-related performance issues, I attempted the same transfer while downloading via Sabnzbd (inbound, ~100 Mbps) and streaming a 1080p movie (outbound, ~25 Mbps), and ascertained that the CPU is running at ~3-4%, RAM at ~2-4%, and there is plenty of network bandwidth available. Even under this load, the transfer took around 4 hours for 12 GB, the same as when the NAS was completely idle with all the plugins, except Samba, disabled.

I'm sharing the data over SMB/CIFS and have not installed the AFP plugin (netatalk, if memory serves).
How can I work around the issue? Is there any other protocol I can use (NFS perhaps?), or is there anything else I can try?

    Thank you for any tip you may give me.


Thanks davidh2k & tekkb for your replies.

Yes, you're correct, this server is indeed used in a home scenario and I'm pretty much the only user.
I do anyway like to take advantage of SSL when possible, especially since I'm making my NAS available on the internet,
and since I have a clone of my home NAS running on Azure, to which I indeed connect over untrusted networks.

As the above setup is impractical and would be overwritten at every update, I have just posted another question in a separate thread (about using nginx as a reverse [SSL] proxy).
If I were to use such a workaround, it should theoretically be transparent across upgrades, as the plugin would be independent from the reverse proxy (performed instead by nginx).
So, if I get this right, the plugin will still listen on, let's say, port 9091 after the upgrade, and nginx would proxy incoming connections from HTTPS 19091 to HTTP 9091.
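In nginx terms, what I have in mind is roughly the server block below (the FQDN and certificate paths are hypothetical placeholders):

```
server {
    listen 19091 ssl;
    server_name nas.example.com;                  # hypothetical FQDN
    ssl_certificate     /etc/ssl/certs/nas.crt;   # hypothetical paths
    ssl_certificate_key /etc/ssl/private/nas.key;

    location / {
        # Transmission keeps listening on plain HTTP 9091, localhost only
        proxy_pass http://127.0.0.1:9091;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```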


    Hello guys :-)

Here I am, back with another (maybe silly) question.

I know that the OMV WebUI relies on nginx, while the other WebUIs don't (Sabnzbd, Transmission, etc.).

    I'm wondering, is there an easy way to publish these plugins over HTTPS?

For example, we know Transmission is (usually) published on port 9091, and that is plain HTTP.
If the website requires authentication, the credentials will be sent in clear text and can be sniffed via NetMon, WireShark, netsh, etc.

Is anybody publishing Transmission and the other downloaders (Sabnzbd, SickRage, CouchPotato and Headphones) over SSL? Perhaps I would have nginx listening on port 19091 (opened on the firewall) and proxying the connection to localhost:9091 (while port 9091 is not opened on the firewall and, in the OMV firewall, is only allowed to/from localhost).
I'd like to use a similar approach for the other downloaders: 10000 + the actual port, or something along those lines.

I would like to use the same certificate for all the above, as the FQDN is going to be the same; even if I were to use different FQDNs instead of ports, I would use a single SAN certificate, so as to have a single point where it needs to be updated upon renewal.


Hello everyone :-)

Lately I have been playing around with my OMV NAS (running on an HP MicroServer Gen8).
I have been setting up all the plugins nicely so that SickBeard, CouchPotato and Headphones interact with the folders where Transmission and Sabnzbd store the downloads.

As I'm a big fan of HTTPS and want all connections to be encrypted, even on the home network, I was wondering a few things:
- Is it possible to bind the OMV Web UI to a single IP instead of both IPs (I have got 2 NICs)?
- Assuming the above is possible, would it be possible to bind the plug-ins mentioned above to the 2nd NIC and enforce HTTPS for these as well?

    Ideally what I would like to achieve is:
- NIC 1 is dedicated to the OMV Web UI (port 443), published to the internet using one certificate
- NIC 2 is dedicated to the plug-ins' web interfaces (ports 8080, 5050, 8081, 9091), again over SSL, using a different certificate.
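For the first point, I understand nginx's listen directive can bind to a specific address rather than all interfaces, so in principle the split would look something like this (IPs hypothetical):

```
# NIC 1: OMV Web UI only
server { listen 192.168.1.10:443 ssl; ... }
# NIC 2: proxied plug-in UIs only
server { listen 192.168.1.11:443 ssl; ... }
```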

Has this been attempted before, and if so, are there any directions I can follow?

    Thank you,

    Great stuff, thanks.

I have just commented out the 3 USB lines and will reboot once the RAID sync is done.
It's very suspicious anyway, given that sda1 was the main SD partition where OMV was installed, and sda2 was the extended partition containing sda5, which was the swap partition.

Just to confirm: if I add any USB devices, these will be recognized/mounted via the "File System" tab and would perhaps be listed as sdh/sdj/sdk, correct?


    Thanks @ryecoaaron,

    I have removed the swap partition and the entry in the fstab.
You're right, even with fs2ram enabled the RAM utilization is pretty low (1% of 16 GB).

Not that the server is busy: it's only creating a RAID volume.

    On a side note, very silly question.

I have two 3 TB drives, sda and sdb; these have no partitions, as I'm building the RAID 1 (currently 6% through the process).
In the fstab file I see some entries referring to sda (1, 2 and 5) which in my opinion are incorrect.

    # <file system> <mount point> <type> <options> <dump> <pass>
    UUID=4804f249-fd46-4f71-85c9-62b7b0fa9737 / ext4 noatime,nodiratime,errors=remount-ro 0 1
    /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    /dev/sda1 /media/usb0 auto rw,user,noauto 0 0
    /dev/sda2 /media/usb1 auto rw,user,noauto 0 0
    /dev/sda5 /media/usb2 auto rw,user,noauto 0 0
    tmpfs /tmp tmpfs defaults,noatime,nodiratime 0 0

Shall I remove these, given there are no USB drives connected to the NAS?
I suspect they were added during the setup (which was done via USB key), as there were a few USB thumb drives connected (the target SD and its clone).

On a side note, something I'd like to share, and perhaps worth sharing with anyone who aims to reduce writes to their drives (not only the SD card or USB flash):

I added the "noatime,nodiratime" options to every drive in the fstab (including data drives, SSD, and flash drives such as SD and USB thumb drives).
Additionally, on the SSD entry I added the "discard" option so as to trigger TRIM.

The goal is to reduce the data written, so that potentially useless information (such as the last time a file/directory was accessed) is not committed to disk.
Without noatime, ext4 would log every read performed on a given file; the same applies to directories when the nodiratime option isn't specified.
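For reference, my root entry ends up looking like this (the discard option is only appended on the SSD data drive's entry, which I've omitted here):

```
# /etc/fstab entry for the root (SD) file system, atime updates disabled
UUID=4804f249-fd46-4f71-85c9-62b7b0fa9737 / ext4 noatime,nodiratime,errors=remount-ro 0 1
```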


    Hi @davidh2k,
Yes, that was clear, I only wanted to have the option to store anything non-data-related on the system drive :-)
I will try to mount the OMV_DATA partition to see if I'm allowed to store the plug-in config backups there.
If that's the case, and hence this is down to user error, it'd be marvelous.


Wonderful plugin, ryecoaaron!

I have just been tweaking my OMV installation on the HP MicroServer Gen8.

Before the plugin, with no data drive set up and pretty much nothing being done, it wrote ~450 MB in ~15 minutes.
After the plugin was installed I see 0.5 MB in the same time frame.
I'm actually assuming "awk '/sd/ {print $3"\t"$10 / 2 / 1024}' /proc/diskstats" gives me the output in megabytes.
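That assumption checks out, as far as I can tell: in /proc/diskstats, field 10 is sectors written in 512-byte units, so dividing by 2*1024 gives MiB. A self-contained check against a fabricated sample line (values hypothetical; 921600 sectors happens to be the ~450 MB I observed):

```shell
# Fabricated diskstats line: field 3 = device name, field 10 = sectors written.
cat > /tmp/diskstats.sample <<'EOF'
   8       0 sda 1200 30 96000 500 2100 40 921600 900 0 600 1400
EOF
# 921600 sectors * 512 bytes / 1024 / 1024 = 450 MiB
awk '/sd/ {print $3"\t"$10 / 2 / 1024}' /tmp/diskstats.sample
```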

I'm wondering, given that the swap partition has been commented out, is it advisable to actually remove the partition from the drive and completely remove the entry from the fstab?
This would free some space on the SD card :-)

Additionally, is there anything that can be done to flush the data written to tmpfs every now and then, without needing to reboot?
Just to avoid having too much RAM allocated to logs and files which are not needed.