Posts by half_man_half_cat

    For those having the same issue: I found out it was caused by my boot drive being plugged into SATA 5 instead of SATA 0. The drive expansion cards were initializing before it and breaking the boot order, it seemed. Plugging the boot drive into SATA 0 made the problem go away instantly.

    Hey all,


    I’ve been running an OMV server for quite a few years and have somehow been in the dark about mergerfs + SnapRAID.


    I’d like to set both up on my OMV server, but have a couple of questions.


    Currently, my setup looks like:


    Data drives, all ext4:

    Fry, 4 TB

    Leela, 2 TB

    Bender, 10 TB


    Backup drive, ext4:

    Nibbler, 16 TB


    Currently, for backups I run selective rsync jobs to Nibbler at varying frequencies; in some sense, a less optimal version of SnapRAID.
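    Roughly, each of those jobs is something like the following (the paths and flags here are illustrative, not my exact layout):

    ```shell
    # Illustrative only: mirror one share from a data drive to the backup drive,
    # removing files on the destination that no longer exist on the source.
    rsync -aH --delete \
        /srv/dev-disk-by-label-Fry/media/ \
        /srv/dev-disk-by-label-Nibbler/backup/fry/media/
    ```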


    I’m wondering the following:


    For mergerfs, I plan to pool the three data drives together. Ideally, I’d like to preserve the top-level drive names as root folders, e.g.:

    /mnt/pool/fry

    /mnt/pool/leela


    I assume this should be possible; would epmfs be the correct create policy to maintain this?
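    For reference, a pool along those lines might be defined in /etc/fstab roughly as below (mount paths are assumptions based on my drive labels, and OMV's mergerfs plugin would normally generate its own line). One thing to note: mergerfs merges the directory trees of its branches rather than exposing drive names, so /mnt/pool/fry only appears if a fry directory exists at the root of a branch; epmfs (existing path, most free space) then keeps new files on the branch where that path already exists.

    ```
    # /etc/fstab — illustrative mergerfs entry, paths assumed
    /srv/dev-disk-by-label-Fry:/srv/dev-disk-by-label-Leela:/srv/dev-disk-by-label-Bender /mnt/pool fuse.mergerfs defaults,allow_other,category.create=epmfs,minfreespace=20G,fsname=pool 0 0
    ```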


    Also, I assume the correct course of action afterwards would be to migrate my container configurations to use the new pool mount point?


    Finally, I believe this should not impact the existing data on the drives in any way; I just wanted to double-check before rolling this out. :)


    Would the correct configuration look something like this?


    Then, I plan to use SnapRAID to replace the incremental rsyncs to Nibbler. Nibbler is 16 TB in total, so it is the biggest drive and larger than any of the data drives in the pool that require parity.

    I’d simply mark all data drives as data and content, and Nibbler as parity.
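    As a sketch, the snapraid.conf for that layout might look roughly like this (mount paths are assumptions; SnapRAID recommends keeping content files on several drives):

    ```
    # Illustrative snapraid.conf — paths assumed
    parity /srv/dev-disk-by-label-Nibbler/snapraid.parity

    content /var/snapraid/snapraid.content
    content /srv/dev-disk-by-label-Fry/snapraid.content
    content /srv/dev-disk-by-label-Leela/snapraid.content

    data fry /srv/dev-disk-by-label-Fry/
    data leela /srv/dev-disk-by-label-Leela/
    data bender /srv/dev-disk-by-label-Bender/

    exclude *.tmp
    exclude /lost+found/
    ```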


    I was wondering how the parity drive specifically works in this case. In my current setup, if I suffer a drive failure, I simply rsync the data back onto a new drive. Secondly, if I run into user error, e.g. accidental file deletion, I have a few hours to go to the backup drive and scoop the deleted file back up (thanks to the delayed rsync).


    I believe that if I have a drive failure with SnapRAID, I’d use the relevant commands to swap the drives over and then rebuild from parity. But if I accidentally deleted a file and needed to recover it, I don’t believe that would be particularly easy?
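    For what it's worth, my understanding is that SnapRAID can also undelete a file from parity, as long as a sync has not run since the deletion. Something like the following (file path and drive name are illustrative):

    ```shell
    # Restore a single accidentally deleted file from parity.
    # Only works if 'snapraid sync' has NOT run since the deletion.
    snapraid fix -f media/photos/holiday.jpg

    # Rebuild a failed, replaced drive onto its new disk, after pointing
    # the drive's entry in snapraid.conf at the new mount.
    snapraid fix -d fry
    ```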


    Therefore, should I retain my existing incremental rsyncs to the backup drive and add another drive for SnapRAID parity?


    And for general recovery, I assume the following is the go to: https://github.com/trapexit/ba…_(mergerfs%2Csnapraid).md


    Many thanks.

    Hey guys,


    I just resolved an issue on OMV4 where an rsync job was causing reboots. The solution was to delete the faulty rsync job and recreate it in the OMV UI.


    For reference, here was the faulty command:



    After recreating, this was the working command:



    I'm not sure whether there were further changes elsewhere, but I'm posting here in case anyone finds this useful.

    As per the title, I'm not sure if this is possible.


    I'm currently running a Let's Encrypt Docker container which is correctly routing to several other services I'm using.


    I'm wondering if I could also leverage this for the OMV web GUI; however, I'm not sure whether it's possible.


    I believe that to do this, I would need to run the Let's Encrypt container in host mode rather than bridge mode?
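    From what I can tell, host mode may not actually be required: the container could proxy to the OMV web GUI via the host's LAN IP. A sketch of an nginx site config inside the container, roughly following the linuxserver.io layout (the domain, host IP, and port here are placeholders, not my real values):

    ```
    # omv.subdomain.conf — illustrative; replace server_name and the host IP/port
    server {
        listen 443 ssl;
        server_name omv.example.com;

        include /config/nginx/ssl.conf;

        location / {
            # OMV web UI on the Docker host (LAN IP and port are assumptions)
            proxy_pass http://192.168.1.10:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```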


    Has anyone here done this successfully, or does anyone have a better recommendation for how to approach it?


    Thanks!

    Hey all,



    Trying to set up a Let's Encrypt reverse proxy with Docker, basically following https://www.youtube.com/watch?v=TkjAcp8q0W0. The issue is that whenever I enter `--network my-net` into the extra arguments for the Docker container, I get the following error message:


    Code
    cannot attach both user-defined and non-user-defined network-modes
    
    
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; docker run -d --restart=always -v /etc/localtime:/etc/localtime:ro --net=none -e LANGUAGE="en_US.UTF-8" -e TERM="xterm" -e AIRSONIC_HOME="/app/airsonic" -e PGID="100" -e PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" -e HOME="/root" -e LANG="C.UTF-8" -e AIRSONIC_SETTINGS="/config" -e PUID="1000" -e TZ="redacted" -v "/home/docker/airsonic":"/config":rw -v "/srv/dev-disk-by-label-Media/Music":"/music":rw -v "/podcasts" -v "/media" -v "/playlists" --name="Airsonic" --label omv_docker_extra_args="--network my-net" --network my-net "linuxserver/airsonic:latest" 2>&1' with exit code '125': docker: conflicting options: cannot attach both user-defined and non-user-defined network-modes. See 'docker run --help'.


    I have even tried disabling the network option on the container entirely, but the error still persists. I have changed and tested different configurations in nginx. Subdomain configurations for the services were set up in nginx too, e.g. for Airsonic, but only the default start page showed, probably because the services could not be discovered since the --network argument wasn't applied. Even though the services show on the same bridge in the Networks tab, I don't think that is enough.
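    One possible workaround, given the plugin appears to add its own --net flag: create the container without the extra argument, then attach it to the user-defined network afterwards with standard Docker commands (container and network names here match the ones above):

    ```shell
    # Attach the already-created container to the user-defined bridge network
    docker network connect my-net Airsonic

    # Verify which networks the container is now attached to
    docker inspect -f '{{json .NetworkSettings.Networks}}' Airsonic
    ```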


    Any idea what I'm doing wrong?



    Thanks!