Installation of Emby on a new and pretty clean OMV system fails

  • I'm new to OMV and Docker, but I have successfully installed both: OMV starting from a Debian netinst installation, then Docker and a few Docker images, which all run perfectly fine.
    Then I tried to install the latest emby/embyserver Docker image on the system. I followed the instructions as well as the video tutorial by TechnoDadLife, but the container ends up in an endless restart loop.


    Here is the log file: https://pastebin.com/tj7qD0Ms


    I suspected permission issues and tried different UID, GID and GIDLIST combinations, from regular users to root, without success. Do you have any idea how to fix this? Or is the image broken?
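
    For reference, this is roughly how I have been starting the container (a sketch only; the UID/GID/GIDLIST values and host paths below are placeholders, based on the environment variables documented for the emby/embyserver image):

      # placeholders: adjust UID/GID/GIDLIST and the host paths to your own setup
      docker run -d \
        --name embyserver \
        --env UID=1000 --env GID=100 --env GIDLIST=100 \
        --volume /srv/emby/config:/config \
        --volume /srv/media:/mnt/media \
        --publish 8096:8096 \
        --restart unless-stopped \
        emby/embyserver:latest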

  • @geaves: Thanks for looking into this.


    I'm running it on a newly built DIY NAS:
    - Mainboard: Gigabyte C246-WU4
    - CPU: Intel Core i3-8100
    - RAM: 2x 16 GB Crucial DDR4 ECC
    - Case: 2U SilverStone SST-RM208 rackmount, backplane with 2x miniSAS connectors attached to the onboard SATA ports


    And I use Snapraid with UnionFS.

    • Official Post

    with UnionFS

    That's the problem. Two choices: what I did was to add a small laptop drive for the Docker configs so they're independent of the UnionFS, or this, with further information here.
    Removing direct_io as described in the two links is another way to resolve it; I preferred the easier option :)
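
    If you go the separate-drive route, the change is basically just pointing Docker's data directory at the dedicated disk and restarting the daemon. A minimal sketch, assuming the drive is mounted at /srv/dockerdisk (a made-up path; if I remember right, the OMV Docker plugin's storage-path setting in the GUI does the same thing):

      # /etc/docker/daemon.json (sketch)
      {
        "data-root": "/srv/dockerdisk/docker"
      }

      # then restart the daemon so it picks up the new location
      systemctl restart docker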

  • @geaves: Followed your hint and rechecked the Snapraid and UnionFS setup. Realized I had UnionFS on eMFS. Switched to MFS, rebooted, and now the Emby container runs. This does not make any sense to me, and I'd be more than happy to learn why that is. So Emby runs now, but please do let me know why it works. In any case, many thanks for your help!
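
    In case it helps anyone searching later: the OMV UnionFS plugin is mergerfs underneath, and (if I read its docs right) eMFS/MFS correspond to the create policies epmfs ("existing path, most free space") and mfs ("most free space"). Roughly what the resulting mount line looks like, with placeholder branches and mount point:

      # /etc/fstab (sketch) -- pool three data disks onto /srv/pool with the mfs create policy
      /srv/disk1:/srv/disk2:/srv/disk3  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0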

  • Just cross-posted. OK, my Debian/OMV installation sits on an NVMe SSD. I should be able to store the Docker configs there as well. Just need to back them up separately. Gee, did not see that one coming. Again, many thanks!
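
    For the "back them up separately" part I don't expect to need anything more exotic than something like this (the container name and paths are placeholders for wherever the configs end up):

      # stop the container first so its databases are in a consistent state
      docker stop embyserver
      tar -czf /srv/pool/backups/emby-config-$(date +%F).tar.gz -C /srv/emby config
      docker start embyserver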

    • Official Post

    So, Emby runs now,

    Well, at least you have it running :thumbup: If you look at the first link but go to the start of the thread, the OP actually resolves this himself in the last post (which is the link): he found that mmap will not work with the direct_io option, but removing it causes double caching, so he replaced it with dropcacheonclose=true. You need to read the tips and tricks on GitHub for it to make sense.
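
    In mount-option terms the change boils down to something like this (a sketch based on the mergerfs documentation, not a copy of that user's exact line, and with placeholder branches):

      # before: direct_io breaks mmap, which Emby's database apparently relies on
      /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  defaults,allow_other,direct_io  0 0

      # after: direct_io removed; dropcacheonclose=true avoids the double caching
      /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  defaults,allow_other,dropcacheonclose=true  0 0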

  • Yeah, read it. No way I would have suspected UnionFS to be the problem. I'm still just testing my NAS so can afford to start from scratch. I think I'll go the old-school way, get rid of Snapraid / UnionFS and revert to good old SW-Raid5... :rolleyes:


    Cheers, mate. This drove me crazy the last two days.

    • Official Post

    I think I'll go the old-school way, get rid of Snapraid / UnionFS and revert to good old SW-Raid5

    :D Well, I've just moved from the old-school way to UnionFS + Snapraid; the standalone drive just for Docker configs was simply easier, plus I had a spare drive anyway.

  • :D Brave @geaves...


    I've got a 250 GB NVMe SSD just to run Debian netinst + OMV, which is already overkill. My storage comprises 3x 6TB WD Red drives, soon 4, as I'll be running out of space in a couple of months (not by much, but still). I do have free HDD slots as well as old, small hard drives left over, but I feel uneasy about installing an HDD just for Docker config files. And I feel uneasy trusting Snapraid / UnionFS when I ran into these really, really weird problems within the very first days of trying. Initializing RAID5 as I write. Old school is OK for me. I'm pretty old already... :rolleyes:
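
    (For the curious: what the OMV GUI does underneath when creating the array is roughly the following, with made-up device names.)

      # rough equivalent of creating and formatting the array by hand (placeholder devices)
      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
      mkfs.ext4 /dev/md0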

    • Official Post

    And I feel uneasy trusting Snapraid / UnionFS, when I ran into these really, really weird problems within the very first days of trying

    So did I, but I had to replace 2 drives, primarily due to age, and that would have meant getting 2 drives of identical size to replace them in the RAID. By getting 2 larger drives anyway and moving to UnionFS and Snapraid, I could keep using all my current drives, 4 in total.


    You need @crashtest to explain the benefits over RAID; the Docker configs are the only issue I have come across, and that can be overcome.

    • Official Post

    The problem with Docker containers running from a UnionFS drive is that both use versions of overlayFS, which may result in very long and complex file paths. For the same reason, I won't store UrBackup client backups on a UnionFS drive or even on a ZFS array (ZFS snapshots may create a similar problem). These issues are easy to head off with a dedicated drive.
    ___________________________________________________________


    The benefits of SNAPRAID + UnionFS (when using the default storage policy) are:


    - Users can back out of either one, at any time.
    - Hard drives of any size can be used (provided the parity drive is the same size or larger).
    - Files, folders, or even entire hard drives can be restored (to their state as of the last SYNC).


    And last but not least - Bit-Rot protection.
    Any CoW filesystem implementation that protects a full hard drive from Bit-Rot requires 2 times the storage space to implement, or the rough equivalent of RAID1. That's 50%, or 1/2, of storage space lost. SNAPRAID provides similar protection with a 33% loss (if protecting 2 hard drives), or even less if protecting more.
    While they're anecdotal tests, I've found that SNAPRAID's Bit-Rot protection works. And in ongoing real-world testing, I've intentionally used SNAPRAID with older drives. As these drives age, SNAPRAID has scrubbed a couple of silent errors, here and there, on more than one occasion.
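
    To put numbers on that: with two 6TB data drives plus one 6TB parity drive, 6TB of the 18TB total goes to parity (33%); mirroring the same 12TB of data would cost another 12TB (50%). Add a third data drive under the same single parity drive and the overhead drops to 25%.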


    Lastly, in recovery, resilvering a new drive added to a RAID5 array beats up the rest of the (older) drives hard. When recovering a drive, SNAPRAID simply rebuilds the old drive's contents onto a new one, so there's less stress on the older drives in the pool.
    ____________________________________________________________


    You won't get anywhere near the above list of benefits from SW RAID5. Your call.

  • Thanks, guys. I understand. I did not have Snapraid on my radar at all until someone at kodinerds mentioned it favorably. I got the idea immediately, which is why I tried it. I may revisit it, but probably not for years: I have 8 hot-swap bays and four left to fill up with (more) 6TB HDDs. I use my NAS as a home server and as a build machine from time to time. My sensitive information barely changes over time. I do regular backups. In server terms my needs are VERY basic. I run RAID5 to reduce potential hassle if a drive fails (it happened once before), nothing more, nothing less. RAID5 will do for me at this stage.


    But again, many thanks for your support and your explanations. I'm sure future readers will appreciate them as much as I did. Would still be walking in the dark without you guys. Much, much appreciated!
