Problem with metadata files being remotely deleted on MergerFS / UnionFS drive pool.

  • I run Medusa (a TV downloader) on a Chinese NUC clone running Ubuntu Server; it copies new media files over to pooled NFS shares on my NAS (specs below) and generates metadata files on the fly. It's worked this way for years, first on QNAP, later on OMV 4 and now on OMV 5.


    Recently Medusa started deleting a whole folder's metadata files while leaving the media files intact. No big deal - they can be recovered by SnapRAID or recreated with Tiny Media Manager. I can't say with certainty, but I think this started when I moved to OMV 5 - does this version handle NFS shares and/or MergerFS drive pool mounts any differently?


    I posted an issue on the Medusa GitHub tracker: https://github.com/pymedusa/Medusa/issues/8022.


    The advice there: if Medusa does not see the media files, it deletes the related metadata files.


    NFS shares were thought to be the culprit, and I was referred to this article: https://stackoverflow.com/ques…irectory-file-that-exists


    Naturally I'd like to prevent this - any ideas?


    John

    Inwin MS04 case with 315 W PSU

    ASUS Prime H310i-Plus R2.0 board

    Four port PCI-E SATA card

    8GB Kingston DDR4

    Intel Pentium Coffee Lake G5400 CPU

    Samsung Evo M.2 256GB OS drive (28 GB partitioned for OS)

    4x4TB WD Red NAS drives - UnionFS pool

    Seagate 5TB USB drive - SnapRAID parity

    1x1TB Seagate HD

    1x300GB Toshiba HD

    Seagate 2TB USB drive


  • I doubt it is NFS.


    More likely you have permission issues and/or problems with the mergerfs pool.


    I recently had some strange issues with some drives with existing data that I pooled, also accessed over NFS: files not found, or duplicated, or locked.


    I fixed it by:


    1. Running resetperms on each shared folder that was part of the pool.

    2. Running resetperms on the pool.

    3. Running mergerfs.fsck -f newest on the pool.


    https://github.com/trapexit/mergerfs-tools


    After that everything worked fine.
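    As a rough sketch, the "resetperms" part of those steps looks something like this when done by hand. OMV's resetperms plugin restores the usual root:users ownership with 775 directories and 664 files; the snippet below demonstrates that on a scratch directory so it is safe to run, and the pool path in the mergerfs.fsck comment is an assumption - substitute your own mount point:

    ```shell
    #!/bin/sh
    # Steps 1+2: reset permissions, demonstrated on a scratch directory.
    share=$(mktemp -d)
    touch "$share/example.nfo"

    # Directories: rwxrwsr-x (the setgid bit keeps the group inherited).
    chmod 2775 "$share"
    # Files: rw-rw-r--
    find "$share" -type f -exec chmod 664 {} +

    # Step 3: de-duplicate files across pool branches, keeping the newest
    # copy. Needs mergerfs-tools installed; /srv/mergerfs/pool is an
    # assumed mount point, so it is shown as a comment here:
    #   mergerfs.fsck -v -f newest /srv/mergerfs/pool

    stat -c '%a' "$share/example.nfo"   # prints 664
    ```

    On a real share you would also chown everything to root:users (which needs root), but the mode bits above are the part that usually trips up NFS clients.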

    Be smart - be lazy. Clone your rootfs.
    OMV 5: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4

  • Thanks for the input - the shares in question need to be read/write for the device running Medusa. I have them at 777, as this was the only way I could get it working.


    I'll try your suggestions and report back.


  • Make sure to run Medusa as a user that also exists on the OMV server. The UID (and GID) must match. Make that user a member of the group users on the OMV server.


    I always create a new user account for this purpose as the first account directly after installation of a Linux box. That way the user has the same UID and GID on all boxes, and they can all talk NFS to each other freely. You can change the UID and GID later, but it is kludgy to say the least...


    I had to do this on an RPi4, because the first user created there was the useless pi user. I added another user, a member of sudo and ssh, then I could delete the pi user and create another user with the right UID and GID to talk NFS with all the other boxes.
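    A quick way to verify the advice above is to compare `id` output on each box. A minimal sketch - the user name john and the host omv-nas are assumptions, shown here with root (which is 0:0 on any Linux system) so it runs anywhere:

    ```shell
    #!/bin/sh
    # Print "uid:gid" for a given user name.
    uidgid() {
        printf '%s:%s' "$(id -u "$1")" "$(id -g "$1")"
    }

    # Demonstrated with root; on your boxes you would check your own user.
    local_ids=$(uidgid root)

    # The same check on a remote box would run over SSH, e.g.:
    #   ssh john@omv-nas 'id -u john; id -g john'

    if [ "$local_ids" = "0:0" ]; then
        echo "ids match"
    else
        echo "ids differ"
    fi
    ```

    If the pairs differ between client and server, files written over NFS end up owned by the "wrong" numeric IDs, which matches the permission symptoms described earlier in the thread.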


  • OK, that's all taken care of by virtue of the fact that the first user on each device was john.


    Testing so far looks OK but I'll need to wait a day or two until Medusa has done its background stuff to see if the problem is fixed.


    Thanks again for your help.


  • Three days later and no recurrence of the problem, so it looks like your tips were correct, Adoby - thanks again.


    John

