Posts by kavejo

    Hi all,

    I'm rather new to Docker, as I only installed the Docker plugin (running Plex, Emby, Transmission-OpenVPN) in the last month.

    While my containers were running, a new version of Plex was released and I realized that in order to upgrade I had to stop the running container and start a new container based on the new image, pointing it at the existing configuration.
    I soon realized that I had to set all the options again (paths, PUID, PGID, etc.); while this is straightforward for Plex and Emby (5 shares, 3 options), it would be a pain for Transmission-OpenVPN, where I customized 30 or more options.

    So, here comes my question.
    Is there a way to back up/store a configuration file and then start a new container using this file (without having to manually set all the options again)?
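    One generic approach (not OMV-specific; the image name, IDs and paths below are illustrative, not my actual setup) is to keep the full `docker run` invocation in a small script, so upgrading is just pulling the new image and re-running the script:

```shell
# Sketch: persist the container options in a script so a new container can be
# recreated from a newer image without re-entering everything by hand.
# Image name, PUID/PGID values and paths are placeholders.
cat > /tmp/run-transmission.sh <<'EOF'
#!/bin/sh
docker pull haugene/transmission-openvpn:latest
docker rm -f transmission-openvpn 2>/dev/null
docker run -d --name transmission-openvpn \
  --restart unless-stopped \
  -e PUID=1000 -e PGID=100 \
  -v /srv/config/transmission:/config \
  -v /srv/downloads:/data \
  haugene/transmission-openvpn:latest
EOF
chmod +x /tmp/run-transmission.sh
```

    docker-compose achieves the same with a YAML file, which may be cleaner once the option count grows past a handful.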

    Thank you all.

    Best regards,

    Hi all,

    Since one of the last updates I keep getting, from time to time, messages stating "filesystem flags changed to 0x1009" and "filesystem flags changed to 0x1000" on the data drives.
    That happens even if there is no activity on the drives.

    The same type of message is received from both NASes:

    • HP MicroServer with mdadm raid has thrown 2-3 times "filesystem flags changed to 0x1000"
    • HP DL360 with P822 hardware raid has thrown 1-2 times "filesystem flags changed to 0x1009"

    Is that something concerning?
    What do the 0x1000 and 0x1009 flags mean?

    Thank you.

    Just to confirm, this is what I have done and it worked like a charm:
    - Booted GParted and resized the swap partition from 192 GB to 32 GB; the OS partition was ~46 GB so I left it as it was, then shrunk the extended partition and moved it next to the OS partition so as to leave all the empty space at the end of the drive
    - Booted CloneZilla and took an image of the drive
    - Booted into HP ACU, deleted the RAID-0 volume (2 x 128 GB HP SSDs), created a new RAID-1 volume (2 x 128 GB HP SSDs)
    - Booted CloneZilla and restored the image to the newly created volume (I had to use the Expert menu, select the -icds option to ignore the drive size mismatch, and choose to use the partition table from the image)
    - Booted GParted, moved the extended partition and the enclosed 32 GB swap partition to the end of the volume, and resized the OS partition to fill the drive

    Do you really want to sync?
    Even if everything in NAS1 got encrypted by ransomware?
    Or if you accidentally deleted a folder in NAS1 and only noticed one month later, after the sync?

    Hi @henfri,

    At present yes, I want to sync.
    I am in the process of migrating the data from the old NAS (NAS1) to the newer one (NAS2); then I will re-image NAS1, change the RAID configuration and use it as an offsite backup.

    Why use remotemount? Just use rsync. https://forum.openmediavault.o…s/?postID=65611#post65611

    Thank you @ryecoaaron - that's a valuable tip.

    I was trying to follow the guide; however, as soon as I enable "user authentication" I am unable to save the module.
    I have tried setting the user and group to different accounts (root, admin, my own user account): the configuration can be saved unless I select "user authentication", in which case the "save" button does nothing.
    I have tried a couple of browsers (Edge, Chrome) and the issue reproduces across both of them.

    Have you ever faced this?

    Thank you.


    I'm looking to sync the content from my old NAS (soon to be decommissioned) to my new NAS.
    Both are running OMV 4 and present the same shares via SMB (i.e. NAS1 presents NAS1_MyDocuments with the content, NAS2 presents NAS2_MyDocuments, which is empty).
    There are a number of shares (around 9) that need to be synced from NAS1 to NAS2.

    What would be the best way to copy (and keep in sync) the content?
    I was thinking of mounting the shares presented by NAS2 via Remote Mount on NAS1 and then using rsync to copy the content from one to the other.

    Is there any other or better way to achieve the same goal?

    Thank you.

    Best regards,

    I'm probably a little more comfortable just keeping the current install (on 2 x 128 GB SSDs in RAID 0), shrinking the partitions to 60+60 GB, taking an image with CloneZilla, wiping the RAID configuration and re-initializing the drives as RAID 1, then restoring from the image. That's something I'm somewhat familiar with, as on my other OMV NAS I use SD cards for the boot drive and I often back them up or restore them with CloneZilla.

    Thank you.

    Thank you @ryecoaaron.

    Would you suggest that it'd be cleaner to do a Debian NetInst followed by installing OMV, or to install OMV on a larger drive, resize the partitions, then clone it to the smaller one?

    Thank you.

    Hi all,

    I've just tried to install OMV on my server, on a 128 GB SSD, and failed miserably a number of times with the error message "disk too tiny", before realizing that the setup worked just fine on a 1 TB SSD.
    When inspecting the 1 TB SSD to see why the installer had failed on the smaller drive, I realized that setup had allocated a 192 GB swap partition, which in this case is wasted space as I'm never going to run out of memory.

    The server where I'm running OMV has 192 GB of ECC RAM.
    Would it be possible to install OMV without swap, or with just a limited amount of space allocated to it (say, 16 GB)?

    Alternatively, I'm thinking of installing as is (with the 192 GB swap), then running GParted to resize the partitions to 64 GB OS + 32 GB swap, then cloning the 1 TB drive with Clonezilla (96 GB + unallocated space, to be sure it fits) to the 128 GB SSD, and finally using GParted to extend the OS partition.
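    Another possible route, sketched here under the assumption that the installer's swap partition can simply be disabled afterwards: turn the oversized swap off and replace it with a modest swap file. Sizes and paths below are illustrative, the real commands need root, and mkswap is demonstrated against a scratch file:

```shell
# Ensure sbin tools are found when running unprivileged.
PATH="$PATH:/sbin:/usr/sbin"
# Format a small scratch file as swap to illustrate the steps; on the real
# server this would be e.g. a 16G /swapfile, followed by `swapon /swapfile`
# and an fstab entry such as: /swapfile none swap sw 0 0
dd if=/dev/zero of=/tmp/demo.swap bs=1M count=16 status=none
chmod 600 /tmp/demo.swap
mkswap /tmp/demo.swap
```

    This avoids the clone-and-resize dance entirely, at the cost of swap living on the OS filesystem.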

    Would anyone have a suggestion?

    Thank you!

    I got that feeling - I was used to running Plex via the plug-in, but it seems like most of the services are being moved to Docker.
    I'll switch Plex from native plug-in to Docker and run Emby in Docker too.

    Thanks for the confirmation.

    Best regards,


    Is there any plan for the Emby plug-in to be released for OMV 4? I'd love to run Emby alongside Plex so as to have the ability to use one or the other.

    Thank you!

    Hi guys,

    I have just upgraded using plexmediaserver_1.8.1.4139-c789b3fbb_amd64.deb via dpkg -i.
    On previous upgrades this created the file /etc/init.d/plexmediaserver.dpkg-bak.
    This time the file was not created and, as a consequence, Plex cannot start.
    Does anyone have the init script handy so I can restore it?


    Replying to myself in case someone else faces the same issue.

    I think there are 2 factors that have a bearing on this issue.

    /etc/fstab contained literal '\n' sequences instead of having one entry per line.

    The Remote Share plugin was in fact unable to fetch the shares from /etc/fstab, as the entries were separated by a literal '\n' rather than being on separate lines.

    The file looked like:

    Code /media/75ea105e-129b-4406-b371-7ee2ac17d7e9 nfs4 rsize=8192,wsize=8192,timeo=14,intr 0 0\n10.0.0.1:Share2 /media/acce9ca3-32cc-4bfe-ba4c-992bb5fcccc0 nfs4 rsize=8192,wsize=8192,timeo=14,intr 0 0

    I had to manually alter the file so that there was an entry per line, as follows.

    Code
    /media/75ea105e-129b-4406-b371-7ee2ac17d7e9 nfs4 rsize=8192,wsize=8192,timeo=14,intr 0 0
    /media/acce9ca3-32cc-4bfe-ba4c-992bb5fcccc0 nfs4 rsize=8192,wsize=8192,timeo=14,intr 0 0

    This addressed the issue with the Remote Share plugin, which is now able to retrieve both NFS shares.
    Needless to say, if I make any change via the Web UI, that reintroduces the '\n' in /etc/fstab and the issue comes back.
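    For what it's worth, the literal '\n' separators can be turned back into real newlines with sed; sketched here against a scratch copy rather than /etc/fstab itself (the entries in the sample are placeholders, and the replacement-side '\n' relies on GNU sed):

```shell
# Build a sample file with two entries joined by a literal "\n", then split it.
printf 'srv1:Share1 /media/aaa nfs4 rsize=8192,wsize=8192,timeo=14,intr 0 0\\nsrv1:Share2 /media/bbb nfs4 rsize=8192,wsize=8192,timeo=14,intr 0 0\n' > /tmp/fstab.sample
# GNU sed: "\n" in the replacement is a newline, so each "\n" becomes a line break.
sed -i 's/\\n/\n/g' /tmp/fstab.sample
```

    The same one-liner could be pointed at /etc/fstab after the Web UI mangles it, though it would need to be re-run after every configuration change.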

    The shares were not getting mounted on reboot.

    I also noticed that sometimes neither of the 2 shares was getting mounted, due to latency in acquiring an IP via DHCP.

    Adding ',ro,noexec,noauto,noatime,x-systemd.automount' to the default 'rsize=8192,wsize=8192,timeo=14,intr' seems to have addressed this problem.
    Now both shares are mounted reliably after each reboot.

    /etc/fstab now looks like the following.

    Code
    /media/75ea105e-129b-4406-b371-7ee2ac17d7e9 nfs4 rsize=8192,wsize=8192,timeo=14,intr,ro,noexec,noauto,noatime,x-systemd.automount 0 0
    /media/acce9ca3-32cc-4bfe-ba4c-992bb5fcccc0 nfs4 rsize=8192,wsize=8192,timeo=14,intr,ro,noexec,noauto,noatime,x-systemd.automount 0 0

    Hope this can be of help to others as well.


    I have installed OMV on a RPi3 - all went well for some time.

    I then decided to install Plex Media Server and to mount some remote shares (from another NAS running OMV on the same network).
    These shares are exported via SMB/CIFS as read-only and are equally mounted with the 'ro' option.

    Every time I try to browse to Storage > File Systems I get the following error.

    Could not fetch a matching mount point from the provided fsname: '/media/56ce3c2c-9f9a-47c9-94ce-802744ba02b8'.

    The details of the error are the following.

    If I look at /etc/fstab, the entry '/media/56ce3c2c-9f9a-47c9-94ce-802744ba02b8' is there.
    Equally, the mount point exists in the '/media/' directory and is fully accessible (I can browse the content of the share).

    Has anyone got any suggestion on how to overcome this?

    Thank you.

    Best regards,

    Good evening,

    This evening I started to deploy OMV on a RPi3 using the latest image available.
    Once deployed, I added the testing & community repos and updated the system.

    I am facing a very strange behaviour with the on-board network interfaces, though.
    I have enabled both eth0 and wlan0. If I leave the ethernet cable plugged in (and connected) then I can browse to both of the RPi's IPs (the LAN and the WLAN one).
    As soon as I unplug the ethernet cable, OMV becomes unreachable and I cannot browse to its web page or connect via SSH.
    If I plug the ethernet cable back in, everything becomes functional again and I can connect via HTTPS or SSH on both IPs.

    I thought it might be losing the wireless connection; however, checking the router administration page, I can see the RPi connected and transmitting/receiving data.
    Even after a reboot (which disconnects and re-connects it to the wireless LAN), I can see it connects successfully to the router, yet for some reason it remains unreachable via SSH or HTTPS.

    Just to rule out the obvious, I have tried with different clients as well, some wired, others wireless.

    I have tried with both static and dynamic IPs; that made no difference.

    I might plug in another wireless dongle (I have a spare D-Link DWA-121) just to see if it works with that one.
    Nevertheless, I thought I'd post on the board, just to see if anyone else has seen this odd behaviour.
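    If anyone wants to dig in, this is the kind of quick check I'd run on the Pi's own console right after unplugging the cable (assuming the default eth0/wlan0 interface names), mainly to see whether the default route disappears along with eth0:

```shell
# Show link state, assigned addresses and routing. If the default route only
# exists via eth0, hosts on the LAN may lose reachability to wlan0's address
# when the cable is pulled, even though the Pi stays associated with the AP.
ip -br link
ip -br addr
ip route
```
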