Posts by namirda

    Just in case anybody else is thinking of using my proposed .automount unit file mentioned in a previous post, please note that while it does work, it also has two unfortunate side-effects:

    1) The 'mountpoint' command, which was pointed out to me by votdev, no longer seems to work - it reports that /sharedfolders/backup is a mountpoint even when the disk is disconnected.

    2) If you try to access the shared folder (using rsync, for example) then it hangs for 90 seconds before returning a "no such device" message. This is a lot better than before, when it simply wrote data to the wrong place, but it would be nice if you could change the timeout interval - I don't believe you can!

    I have yet to find a way to determine whether a shared folder is actually available other than simply trying and then waiting 90 seconds for an answer!


    Hi Volker,

    Thanks for the reply.

    I have been looking into this problem a little already. The issue is that, as well as the systemd mount unit, you also need a systemd automount unit, which should contain something like the following:
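    For reference, a minimal automount unit might look like this. This is a sketch of the idea, not the exact file from my post - the unit name sharedfolders-backup.automount is an assumption and must match the systemd-escaped mount path (and the corresponding .mount unit's Where=):

    ```ini
    # /etc/systemd/system/sharedfolders-backup.automount (assumed name/path)
    [Unit]
    Description=Automount for the backup shared folder

    [Automount]
    # Must match the Where= of the matching sharedfolders-backup.mount unit
    Where=/sharedfolders/backup

    [Install]
    WantedBy=local-fs.target
    ```

    After creating it, enable it with "systemctl enable --now sharedfolders-backup.automount".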

    I have tried this out and it seems to work for me. You need to autogenerate the automount unit file at the same time as you generate the mount unit file.

    The mount service can then be disabled - the automount service handles it all.

    I hope that helps


    Quote from votdev

    and everything should work as expected

    Perhaps I'm missing something here - but it really doesn't work as I expected. Please take a look at the following :

    Initially I look at my backup folder, which is a shared folder created in OMV. In line 12 it is mounted on /dev/sda1 and line 15 confirms it is a mountpoint.

    I then unplug the disk and repeat - as expected, the shared folder is now mounted on / and /sharedfolders/backup is no longer a mountpoint.

    I then reconnect the disk and, as I now expect, your unimplemented feature does not repair the mountpoint on /sharedfolders/backup.

    Finally I do "sudo mount -a" as you suggested - but the mountpoint on /sharedfolders/backup is not repaired, and so everything does not work as expected.

    You can see from the df -h in line 61 that the disk has been remounted, this time as /dev/sdb1, but the mountpoint is not repaired.

    So I don't think that simply doing "mount -a" will make everything work as expected. I think you need to mount all the shares on the disk individually.



    I have a Shared Folder called "backup" on device "disk1" which is mounted on /dev/sda1 - all configured using the OMV GUI.

    The command df shows the following as expected:

    neil@rock64:/sharedfolders$ df /sharedfolders/backup
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda1 95625896 46183644 49425868 49% /sharedfolders/backup

    If for some reason "disk1" goes offline then it shows as "Missing" in the GUI and df now shows

    neil@rock64:/sharedfolders$ df /sharedfolders/backup
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/mmcblk0p7 29861840 4010624 24597708 15% /

    So now /sharedfolders/backup is mounted on /. However, if I look at the share in the GUI it still shows the share as being on /dev/disk/by-label/disk1 - nothing has changed there!

    As a result of this, my daily backups which write to /sharedfolders/backup were written to the root directory rather than /dev/sda1 - you can guess what happened!

    I would have expected/hoped that the backup job would fail because /sharedfolders/backup no longer exists - but instead it created the directory /sharedfolders/backup on / and ran the job as normal.

    If the missing drive is subsequently reconnected, it is detected by OMV and re-mounted but the mount point for the share is not updated. So even after the drive is reconnected, the daily backups will continue to be written to /. I think the only way to correct it is to delete and then re-create the share.

    So 3 questions:

    1) Is it a bug that the mount point /sharedfolders/backup is not removed when disk1 becomes unavailable?
    2) Is it another bug that the mount point is not re-created when the disk is reconnected?
    3) How should scripts check for correct mounting of shared folders before writing to them?
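    On question 3, one approach (my own sketch, not an official OMV mechanism) is to test the path with the util-linux mountpoint tool before writing, so a backup job never silently writes to the root filesystem:

    ```shell
    #!/bin/sh
    # Hedged sketch: check that a shared folder is really an active
    # mountpoint before writing to it.

    is_mounted() {
        # mountpoint -q exits 0 only when the path is an active mountpoint;
        # it is part of util-linux and present on a stock Debian/OMV install
        mountpoint -q "$1"
    }

    # /sharedfolders/backup is the share from this thread - substitute your own.
    # "/" is used below only so the sketch has a path that always exists.
    if is_mounted /; then
        echo "mounted - safe to write"
    else
        echo "not mounted - aborting" >&2
    fi
    ```

    "findmnt --target /sharedfolders/backup" is an alternative that also reports which filesystem actually backs the path - though, as noted above, with an automount unit in place both tools can report the autofs trigger as a mountpoint even when the disk is gone.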



    Quote from ryecoaaron

    Nope. I have lm-sensors installed but I just let the system manage the fan.

    Sorry to keep coming back on this one!

    OMV4 is running fine on my TS-451, except that the fan now runs all the time, whereas it never used to under the QNAP software.

    I have installed lm-sensors and fancontrol but I am concluding from "sensors-detect" and "pwmconfig" that there are no pwm-capable sensor modules installed.

    Does that sound right to you?
    What does the output of "sensors" look like on your machine?
    Does your fan also run 24/7?
    What have you got in your lm-sensors config file?
    In what way does the "system manage the fan"?

    Thanks for your help.


    EDIT - Please ignore these questions. The problem was that SmartFan was enabled in BIOS which somehow hides the fan from lm-sensors. With SmartFan disabled in BIOS all works as expected.

    Quote from tkaiser

    If you want to discard these messages simply add a file below /etc/rsyslog.d/ (see the already existing ones for example syntax)

    Thanks - that hides the problem at least!
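    For anyone else hitting this, a drop-in along these lines discards the messages (the filename and exact filter text are my own guess, not from tkaiser's post):

    ```
    # /etc/rsyslog.d/10-ignore-cp15.conf (assumed filename)
    # Drop the kernel's "deprecated CP15 Barrier instruction" spam
    :msg, contains, "uses deprecated CP15 Barrier instruction" stop
    ```

    followed by "systemctl restart rsyslog".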

    These messages look as if they are highlighting an incompatibility in the image. Is it true that rpi and armhf images should always run on the rock64, or is it more complicated than that?

    I have been using 0.6.28 for the last week or so - all very stable. The only issue I have is that syslog is being spammed with thousands of messages like:

    Apr 10 15:14:56 rock64 kernel: [26477.001283] "python3" (27710) uses deprecated CP15 Barrier instruction at 0xf703a198
    Apr 10 15:14:56 rock64 kernel: [26477.004184] "python3" (27710) uses deprecated CP15 Barrier instruction at 0xf703a198
    Apr 10 15:14:56 rock64 kernel: [26477.044135] "python3" (27636) uses deprecated CP15 Barrier instruction at 0xf703a198
    Apr 10 15:14:56 rock64 kernel: [26477.047536] "python3" (27636) uses deprecated CP15 Barrier instruction at 0xf703a198
    Apr 10 15:14:56 rock64 kernel: [26477.049852] "python3" (27636) uses deprecated CP15 Barrier instruction at 0xf703a198
    Apr 10 15:14:56 rock64 kernel: [26477.052597] "python3" (27636) uses deprecated CP15 Barrier instruction at 0xf703a198
    Apr 10 15:14:56 rock64 kernel: [26477.054978] "python3" (27636) uses deprecated CP15 Barrier instruction at 0xf703a198
    Apr 10 15:14:56 rock64 kernel: [26477.057314] "python3" (27636) uses deprecated CP15 Barrier instruction at 0xf703a198
    Apr 10 15:14:56 rock64 kernel: [26477.238250] "python3" (27636) uses deprecated CP15 Barrier instruction at 0xf70a9f04
    Apr 10 15:14:56 rock64 kernel: [26477.240532] "python3" (27636) uses deprecated CP15 Barrier instruction at 0xf70a9f0c

    The errors are coming from a number of different docker containers running armhf images - the containers run as expected but the log file is suffering!

    What is this about? Is this something that will disappear when a release version of OMV is available? If not, can anybody point me towards a solution - I have no idea!



    Quote from ryecoaaron

    It probably still works on systems that have upgraded from 3.x to 4.x because omv-mkgraph is still around. I guess the plugin needs more changes for OMV 4.x. While I use lm-sensors, I don't use the graphs in OMV and none of my test OMV boxes are physical.

    I understood from a post some time ago that you are running OMV4 on your QNAP TS451. Do you use lm-sensors to manage fan control on that box? How exactly? If not then how is it done?


    Quote from tkaiser

    Or you use 0.6.28 / OMV 4 and don't do any updates until you see a 0.6 release version appearing at the above URL.

    Thanks - I was using the URL at the beginning of this thread and didn't realise there were other releases elsewhere.

    0.6.28 is now up and running. Thanks for your help!


    OMV images can be found here:

    I would prefer the armhf variant (32-bit userland) since it has a lower memory footprint and it's easier to run Plex media server with it. Background info: ayufan and I worked closely together to integrate all the improvements for ARM boards and the images are fully supported (kernel updates as deb packages, so all you need to do is 'apt upgrade' from time to time to keep all components of the image up to date).

    I have just received my rock64 4gb with 16gb emmc.

    The jessie image jessie-openmediavault-rock64-0.5.15-136-armhf.img.xz boots successfully from either the sd card or the emmc and all seems fine.

    However when I try booting the stretch image stretch-openmediavault-rock64-0.6.26-195-armhf.img.xz from either the sd card or the emmc I get what I think is a kernel panic.

    I realise that OMV4 is still in beta but it works so smoothly on x86 and I was hoping to keep my systems with the same version.

    Any ideas?

    Hi Paul - Sorry - can't help you there. It would mean building your own filerun docker image including SSL and I have no idea how to do that!

    SSL must be quite a common requirement though - you might try asking on the filerun site...


    This works perfectly to add pushbullet or some other notification method to OMV.

    However, I would like to suppress the emails and only send the pushbullet notification - how can I stop the emails being sent?

    I could change the transport parameters in /etc/postfix but won't those changes be overwritten by OMV?

    What's the best way of doing this in OMV4 ?

    Indeed, I can always recreate the symlink and/or share the union12 folder directly but it seems kind of messy. Maybe that's just how it is?

    I had expected all files in the union12 share to show up in the disk1 share AS WELL - but it doesn't seem to work that way.


    I have a new and much simpler question about mergerfs which I hope somebody can put me straight on... I think I must be missing something somewhere.

    Let's say I use the mergerfs plugin to create a file system called union12 combining two devices with labels disk1 and disk2.

    I then create a shared folder from union12, share it with samba and copy a few files into it.

    These files now show up in

    /sharedfolders/union12 and also in

    /srv/dev-disk-by-label-disk1/union12

    So union12 is being treated as a shared folder resident on disk1 - all OK so far.

    My question then is, what happens if I now decide to remove the union file system without deleting its contents? How do I then access the files that were uploaded to union12? I understood that this is one of the strengths of mergerfs...

    Deleting the unionfs results in the removal of the folder /sharedfolders/union12 but the files still exist in /srv/dev-disk-by-label-disk1/union12.

    However without some cli fiddling they are no longer accessible with smb. Is that how it is meant to be?

    I can only access them again if I move them like this:

    mv /srv/dev-disk-by-label-disk1/union12 /sharedfolders/disk1

    and then they appear in a subfolder of the shared folder disk1 and all is fine. Am I missing something or is this how it is?



    How about writing a small [how-to]?


    1) As well as the Docker plugin you will need docker-compose, because two images are required. Install the latest version like this:

    # curl -L "https://github.com/docker/compose/releases/download/1.19.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    # chmod +x /usr/local/bin/docker-compose

    Modify the version number in the above curl command to make sure you get the latest version.

    2) Check the docker-compose installation

    # docker-compose --version
    docker-compose version 1.19.0, build 1719ceb

    3) Pull the filerun image

    # docker pull afian/filerun

    4) Create a file docker-compose.yml containing the following
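    The original file wasn't preserved in this post, but a docker-compose.yml along these lines matches the points below - the service names, database credentials and host-side volume paths are illustrative assumptions, not the exact file I used:

    ```yaml
    # Illustrative reconstruction - credentials and host paths are examples only
    version: '2'

    services:
      db:
        image: mariadb:10.1
        restart: unless-stopped
        environment:
          MYSQL_ROOT_PASSWORD: changeme
          MYSQL_USER: filerun
          MYSQL_PASSWORD: changeme
          MYSQL_DATABASE: filerun
        volumes:
          - /srv/filerun/db:/var/lib/mysql

      web:
        image: afian/filerun
        restart: unless-stopped
        environment:
          FR_DB_HOST: db
          FR_DB_PORT: 3306
          FR_DB_NAME: filerun
          FR_DB_USER: filerun
          FR_DB_PASS: changeme
        depends_on:
          - db
        ports:
          - "81:80"                      # host port 81 so filerun does not clash with the OMV GUI
        volumes:
          - /srv/filerun/html:/var/www/html
          - /sharedfolders:/user-files   # filerun looks for its files in /user-files
    ```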

    A few things to notice in the yml file above:

    • a) Two containers are required and both have the restart flag set to "unless-stopped" so that filerun will start automatically on boot.
    • b) I have mapped host port 81 to the container so that filerun does not clash with the OMV GUI
    • c) Filerun looks for its files in the folder /user-files, so this has been mapped to the OMV folder /sharedfolders

    5) Now start filerun

    # docker-compose up -d

    6) The first time you run the above yml it will pull mariadb. You should see two new containers appear in the OMV Docker plugin - one for filerun and the other for mariadb. They should both be 'running'.

    7) You can now access filerun by pointing your browser to http://your-omv-host:81. The default username/password is superuser/superuser.

    Once logged in you should have access to all your OMV shared folders - you may need to fiddle about with permissions to achieve this! I have not been successful in using the Docker plugin to modify the filerun containers, but I haven't really tried too hard. It's just as easy to modify the yml file.

    That's it - good luck


    Have you tried afian/filerun? It looks like the perfect web-based file manager to me. WinSCP, Samba, WebDAV etc. just don't cut it when the files get large!

    I have installed filerun in Docker in OMV4 and it runs perfectly - quick too. And it's free for up to 3 users.

    Anybody else tried it?


    I am having some trouble using the Remotemount plugin to mount my smb shares. I hope somebody can help ...

    After successfully completing the add dialog, I get the standard "Please wait... Applying configuration changes" but it never finishes. Seems to be looping somewhere.

    While waiting for the GUI to complete, I took a look at fstab from a terminal and it now contains

    proc /proc proc defaults 0 0
    /dev/mmcblk0p1 /boot vfat defaults 0 2
    /dev/mmcblk0p2 / ext4 defaults,noatime,nodiratime 0 1
    # >>> [openmediavault]
    UUID=fa36508a-b3c4-4499-b30a-711dd5994225 /media/fa36508a-b3c4-4499-b30a-711dd5994225 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-uuid/F474B7AA74B76DCC /media/F474B7AA74B76DCC ntfs defaults,nofail,noexec,noatime,big_writes 0 2
    // /media/571d7507-e025-4457-9dd9-6864fcb45d14 cifs guest,_netdev,iocharset=utf8 0 0
    # <<< [openmediavault]
    # a swapfile is not a swap partition, no line here
    # use dphys-swapfile swap[on|off] for that

    which looks about right to me - Remotemount has indeed added a line for the share Valgraph on

    I don't know how to get out of the GUI loop other than by closing the browser tab, so I opened another GUI session and tried to delete the problematic remote mount, but hit another loop. I closed the GUI tab again. In yet another tab I tried to delete the plugin, but that also failed.

    So finally I tried to reboot OMV, only to find that it would not boot, leaving no access via SSH. Using a keyboard and a local display I was able to remove the aforementioned line in fstab that was created by RemoteMount; OMV then booted OK and, with some messing about, I was able to delete the plugin.

    So a few questions:

    • In the event that the GUI hangs up somewhere, what is the recommended means of escape?
    • It seems as if RemoteMount is trying to mount the remote share when it is "applying configuration changes" but gets stuck if there is something wrong with the share. Surely there should be a timeout?
    • RemoteMount creates a valid entry in fstab but does not by default add qualifiers like nofail or x-systemd.device-timeout. Therefore, if there is a problem with the share, the next boot fails to complete. Is it the intention that users should add failsafe qualifiers themselves - or should they be added by default?
    • In case the GUI fails in this way, what is the best procedure for removing a plugin from the CLI? How can I recover if I make a mess?
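    On the failsafe qualifiers point, a hardened version of the generated cifs line might look like this - the server and share names are placeholders, since the real entry would name my remote host:

    ```
    # Placeholder host/share - nofail lets the boot continue if the share is down,
    # and x-systemd.device-timeout caps how long systemd waits for it
    //server/share /media/571d7507-e025-4457-9dd9-6864fcb45d14 cifs guest,_netdev,nofail,x-systemd.device-timeout=10,iocharset=utf8 0 0
    ```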

    Thanks for your help.



    A couple more questions, please, about OMV on the TS-451 series...

    1) Do the front panel indicator lights work as normal with OMV - and does the quick copy button still work?

    2) Have you seen the new(ish) TS-451A with the quick-access front USB port? I presume that non-standard hardware like that is not currently supported by OMV? Do you know how it works?