Posts by johnvick

    OK have solved this.


    In the Portainer screen above use:


    Name - your choice

    Address - IP of OMV

    Mount point - /name of your share (not /export/name of your share)


    Ensure the user is the same on all devices, with the same UID and GID, and that the shares are owned by that user.


    Adding the share is done through Portainer add volume to container screen.
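
    For anyone who prefers the command line, the same volume can be created with docker volume create - a sketch only, the IP and share name below are just examples to swap for your own:

    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.1.3,rw,nfsvers=4 \
      --opt device=:/Movies \
      movies-nfs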


    If anyone knows how to add the nfs shares to a container with Docker-compose I'd love to know.
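
    My best guess so far, untested, is something like the below - the image, container path, IP and share name are only examples:

    version: "3.7"
    services:
      medusa:
        image: linuxserver/medusa
        volumes:
          - media:/media                       # hypothetical container path
        restart: unless-stopped
    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: addr=192.168.1.3,rw,nfsvers=4     # Address = IP of OMV (example)
          device: ":/Movies"                   # Mount point = /share name, not /export/share name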

    Thanks for your input. I may not have explained myself fully, but I want the OMV NFS shares to be accessible from the second (Ubuntu) device, so I can't use the remotemount plugin. The shares are accessible using the usual methods (fstab, autofs) but not from within Docker apps using the Docker/Portainer NFS mounting methods I have seen in the above links and others - hence the request for a working example. nfs-utils (now called nfs-common) is installed.
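
    For comparison, a typical fstab line on the Ubuntu box looks something like this (IP, share name and mount point are examples; depending on the NFS version the path may need to be /export/Movies):

    192.168.1.3:/Movies  /mnt/Movies  nfs  defaults,nofail  0  0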

    I have several nfs shares for media folders on OMV 5 and it all works fine for streaming to Kodi, Jellyfin etc.


    On a second (Ubuntu) server I want to create Docker NFS volumes to use in Docker apps such as Medusa. I get to this screen and I've tried all the obvious settings, but I can't get things to work - the volumes are created, but when I add them to a Docker app there is nothing in the share folders. Can anyone give me a working example of what to put in the Address and Mount point fields? Thanks.


    My approach to this (it works locally; I haven't tested it remotely) is to install Jellyfin in a docker and use it as a back-end database for Kodi. Jellyfin access over the net can be made more secure with a reverse proxy setup - there's lots on this in the Jellyfin forum. You'll need DuckDNS or similar and Let's Encrypt (or whatever its new name is) to enable secure remote access - a rough compose sketch for this is at the end of this post.


    Then use the Jellyfin for Kodi plugin and point it at your Jellyfin web address. This bit works locally; I haven't tested it remotely, but if it doesn't work your father could still see your files with the Jellyfin web client.
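
    In case it's useful, a rough compose sketch of the reverse proxy side using linuxserver/swag with DuckDNS validation - every value here is an example to replace with your own, and SWAG ships a sample Jellyfin proxy conf you rename to enable:

    swag:
      image: linuxserver/swag
      cap_add:
        - NET_ADMIN
      environment:
        - PUID=1000                       # match your user
        - PGID=1000
        - TZ=Europe/London                # example timezone
        - URL=yourname.duckdns.org        # your DuckDNS domain
        - VALIDATION=duckdns
        - DUCKDNSTOKEN=your-duckdns-token
      volumes:
        - /srv/appdata/swag:/config       # example config path
      ports:
        - 443:443
      restart: unless-stopped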

    Title says it all

    I have an NFS share on Ubuntu server 192.168.1.2 that I know I can remote mount on other devices.


    On OMV (192.168.1.3) I have the plugin installed and the remote share details entered, but when I click the mount button I get:


    Error #0:
    OMV\Rpc\Exception: Invalid RPC response. Please check the syslog for more information. in /usr/share/php/openmediavault/rpc/rpc.inc:187
    Stack trace:
    #0 /usr/share/php/openmediavault/rpc/proxy/json.inc(97): OMV\Rpc\Rpc::call('RemoteMount', 'mount', Array, Array, 3)
    #1 /var/www/openmediavault/rpc.php(45): OMV\Rpc\Proxy\Json->handle()
    #2 {main}


    Any clues?

    To recap, this is how I did it on a similar CPU, both on Ubuntu and on OMV:


    Install the linuxserver/jellyfin docker (it does the HW transcoding config for you) - a compose sketch follows this list.

    Set up your media folders.

    Set up Jellyfin to use VAAPI - disable QuickSync (it's hard to get going, and if VAAPI works there's little need for it).


    Optional step - install vainfo on the base system to confirm the encoding/decoding capabilities of your Intel chip.


    Find a 10-bit HEVC file and play it. Open an htop window to see what's happening - this will tell you if HW transcoding is working (CPU use should stay low if the GPU is doing the work).


    Optional step - install intel_gpu_top and run it - this will also show you whether HW transcoding is working.
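
    To make the first step concrete, a compose sketch for linuxserver/jellyfin with the Intel iGPU passed through for VAAPI - the paths, IDs and timezone are examples only:

    jellyfin:
      image: linuxserver/jellyfin
      environment:
        - PUID=1000                       # match the owner of your media
        - PGID=1000
        - TZ=Europe/London                # example timezone
      volumes:
        - /srv/appdata/jellyfin:/config   # example config path
        - /srv/media:/data/media          # example media path
      devices:
        - /dev/dri:/dev/dri               # Intel iGPU for VAAPI
      ports:
        - 8096:8096
      restart: unless-stopped

    The optional tools are in the vainfo and intel-gpu-tools packages on Debian/Ubuntu if I remember right.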

    It's stopped happening now.


    root@omv:~# blkid

    /dev/nvme0n1p1: UUID="393f61cd-21b1-4adc-b552-b8ae580edd22" TYPE="ext4" PARTUUID="61bfbf69-01"

    /dev/nvme0n1p3: LABEL="NVME" UUID="e8dff64b-7555-43e0-a84b-7efc9f954276" TYPE="ext4" PARTUUID="61bfbf69-03"

    /dev/nvme0n1p5: UUID="33b39126-8ec0-4a9d-91bb-9f26bb8395c9" TYPE="swap" PARTUUID="61bfbf69-05"

    /dev/sdc1: LABEL="Disk3" UUID="214fb3f7-7f1d-4ade-af12-a4444f84e402" TYPE="ext4" PARTUUID="66af03cc-281d-4f33-9861-aada72b3a3b6"

    /dev/sda1: LABEL="Disk1" UUID="7e0f12ca-1f8b-4415-bd27-eee48075ed0d" TYPE="ext4" PARTUUID="5ec281fb-c830-4db1-85ec-362db1aec3df"

    /dev/sdb1: LABEL="Disk2" UUID="086a3ffe-b293-4e3e-91e1-a6652e796de4" TYPE="ext4" PARTUUID="ed26f723-c22e-4b5a-bfc9-be453a0f8c6f"

    /dev/sdd: LABEL="Disk4" UUID="8eea6c95-b331-49fb-91fa-cd6fc59d0adb" TYPE="ext4"

    /dev/sde: LABEL="Parity" UUID="c05d236d-cbe9-4115-b935-cf92ddd4fc24" TYPE="ext4"

    /dev/sdh1: LABEL="1TBHD2" UUID="5f3a7897-0c90-40f7-a35d-95504be848d1" TYPE="ext4" PARTUUID="280bb5d4-c5f2-49a9-aec6-a58ed1faf454"

    /dev/sdg1: LABEL="1TBHD1" UUID="03a8835a-fc59-47a8-a2cf-c21ebd754b75" TYPE="ext4" PARTUUID="df5bf7b4-ae83-4c2f-bea4-55b1b37fde07"

    /dev/sdf1: LABEL="300GBHD" UUID="d3320905-e848-4000-b012-b4cca2d13889" TYPE="ext4" PARTUUID="ae01cc05-8e8c-4f40-96a6-1a491412b8d3"

    /dev/nvme0n1: PTUUID="61bfbf69" PTTYPE="dos"


    # /etc/fstab: static file system information.

    #

    # Use 'blkid' to print the universally unique identifier for a

    # device; this may be used with UUID= as a more robust way to name devices

    # that works even if disks are added and removed. See fstab(5).

    #

    # <file system> <mount point> <type> <options> <dump> <pass>

    # / was on /dev/nvme0n1p1 during installation

    UUID=393f61cd-21b1-4adc-b552-b8ae580edd22 / ext4 errors=remount-ro 0 1

    # swap was on /dev/nvme0n1p5 during installation

    UUID=33b39126-8ec0-4a9d-91bb-9f26bb8395c9 none swap sw 0 0

    # >>> [openmediavault]

    /dev/disk/by-label/300GBHD /srv/dev-disk-by-label-300GBHD ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-label/NVME /srv/dev-disk-by-label-NVME ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-label/Parity /srv/dev-disk-by-label-Parity ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-label/Disk1 /srv/dev-disk-by-label-Disk1 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-label/Disk2 /srv/dev-disk-by-label-Disk2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-label/Disk3 /srv/dev-disk-by-label-Disk3 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-label/Disk4 /srv/dev-disk-by-label-Disk4 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-label/1TBHD1 /srv/dev-disk-by-label-1TBHD1 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /dev/disk/by-label/1TBHD2 /srv/dev-disk-by-label-1TBHD2 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    /srv/dev-disk-by-label-Disk3:/srv/dev-disk-by-label-Disk1:/srv/dev-disk-by-label-Disk4:/srv/dev-disk-by-label-Disk2 /srv/144ab994-0e0f-4a42-a06b-f37e84454803 fuse.mergerfs defaults,allow_other,cache.files=off,use_ino,category.create=epmfs,minfreespace=4G,fsname=MergerFS:144ab994-0e0f-4a42-a06b-f37e84454803,x-systemd.requires=/srv/dev-disk-by-label-Disk3,x-systemd.requires=/srv/dev-disk-by-label-Disk1,x-systemd.requires=/srv/dev-disk-by-label-Disk4,x-systemd.requires=/srv/dev-disk-by-label-Disk2 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/American/ /export/American none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/British/ /export/British none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Cartoons/ /export/Cartoons none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Documentaries/ /export/Documentaries none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/European/ /export/European none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Extras/ /export/Extras none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/J45/ /export/J45 none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Magic/ /export/Magic none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Movies/ /export/Movies none bind,nofail 0 0

    /srv/dev-disk-by-label-NVME/Music/ /export/Music none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Music-videos/ /export/Music-videos none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Photos/ /export/Photos none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Reality/ /export/Reality none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Triathlon/ /export/Triathlon none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/World/ /export/World none bind,nofail 0 0

    /srv/dev-disk-by-label-300GBHD/ /export/300GBHD none bind,nofail 0 0

    /srv/144ab994-0e0f-4a42-a06b-f37e84454803/Downloads/ /export/Downloads none bind,nofail 0 0

    # <<< [openmediavault]

    root@omv:~#

    I'm getting email error messages like the following on boot, one for each drive. Everything works fine - just reporting in case it is a bug.


    Host: \omv.workgroup

    Date: Fri, 11 Dec 2020 01:00:13

    Service: filesystem_srv_dev-disk-by-label-NVME

    Event: Does not exist

    Description: unable to read filesystem '/srv/dev-disk-by-label-NVME' state

    This triggered the monitoring system to: restart