Posts by mervincm

Everything has been working really well, and I have not logged in to do any updates in a while, but now something is amiss.



When I try to update via the GUI, it errors out with:

    Upgrade system

    ** CONNECTION LOST **


    Close

    500 - Internal Server Error
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LC_ALL=C.UTF-8; export LANGUAGE=; export DEBIAN_FRONTEND=noninteractive; apt-get --yes --allow-downgrades --allow-change-held-packages --fix-broken --fix-missing --auto-remove --allow-unauthenticated --show-upgraded --option DPkg::Options::="--force-confold" dist-upgrade 2>&1' with exit code '100': Reading package lists... E: No priority (or zero) specified for pin



If I SSH to it as root and try just the OMV upgrade:


root@omvdirect:~# sudo omv-upgrade
Get:1 file:/var/cache/openmediavault/archives InRelease
Ign:1 file:/var/cache/openmediavault/archives InRelease
Get:2 file:/var/cache/openmediavault/archives Release
Ign:2 file:/var/cache/openmediavault/archives Release
Get:3 file:/var/cache/openmediavault/archives Packages
Ign:3 file:/var/cache/openmediavault/archives Packages
Get:4 file:/var/cache/openmediavault/archives Translation-en
Ign:4 file:/var/cache/openmediavault/archives Translation-en
Get:3 file:/var/cache/openmediavault/archives Packages
Ign:3 file:/var/cache/openmediavault/archives Packages
Get:4 file:/var/cache/openmediavault/archives Translation-en
Ign:4 file:/var/cache/openmediavault/archives Translation-en
Get:3 file:/var/cache/openmediavault/archives Packages
Ign:3 file:/var/cache/openmediavault/archives Packages
Get:4 file:/var/cache/openmediavault/archives Translation-en
Ign:4 file:/var/cache/openmediavault/archives Translation-en
Get:3 file:/var/cache/openmediavault/archives Packages
Ign:3 file:/var/cache/openmediavault/archives Packages
Get:4 file:/var/cache/openmediavault/archives Translation-en
Ign:4 file:/var/cache/openmediavault/archives Translation-en
Get:3 file:/var/cache/openmediavault/archives Packages
Ign:3 file:/var/cache/openmediavault/archives Packages
Get:4 file:/var/cache/openmediavault/archives Translation-en
Ign:4 file:/var/cache/openmediavault/archives Translation-en
Get:3 file:/var/cache/openmediavault/archives Packages
Ign:3 file:/var/cache/openmediavault/archives Packages
Get:4 file:/var/cache/openmediavault/archives Translation-en
Ign:4 file:/var/cache/openmediavault/archives Translation-en
Get:3 file:/var/cache/openmediavault/archives Packages
Get:4 file:/var/cache/openmediavault/archives Translation-en
Ign:4 file:/var/cache/openmediavault/archives Translation-en
Hit:5 http://deb.debian.org/debian bookworm InRelease
Hit:6 http://deb.debian.org/debian bookworm-updates InRelease
Hit:7 http://repository.netdata.cloud/repos/edge/debian bullseye/ InRelease
Hit:8 http://security.debian.org/debian-security bookworm-security InRelease
Hit:9 http://packages.openmediavault.org/public sandworm InRelease
Hit:10 https://openmediavault.github.io/packages sandworm InRelease
Hit:11 http://httpredir.debian.org/debian bookworm-backports InRelease
Hit:12 https://openmediavault-plugin-…github.io/packages/debian sandworm InRelease
Hit:13 https://download.docker.com/linux/debian bookworm InRelease
Get:14 https://openmediavault.github.io/packages sandworm-proposed InRelease [5451 B]
Hit:15 http://repository.netdata.cloud/repos/repoconfig/debian bullseye/ InRelease
Hit:16 http://packages.openmediavault.org/public sandworm-proposed InRelease
Fetched 5451 B in 1s (5462 B/s)
Traceback (most recent call last):
  File "/usr/sbin/omv-mkaptidx", line 236, in <module>
    sys.exit(main())
    ^^^^^^
  File "/usr/lib/python3/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
    ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/sbin/omv-mkaptidx", line 128, in main
    cache = apt.cache.Cache()
    ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/apt/cache.py", line 170, in __init__
    self.open(progress)
  File "/usr/lib/python3/dist-packages/apt/cache.py", line 232, in open
    self._cache = apt_pkg.Cache(progress)
    ^^^^^^^^^^^^^^^^^^^^^^^
apt_pkg.Error: E:No priority (or zero) specified for pin
Reading package lists... Done
Reading package lists... Done
E: No priority (or zero) specified for pin


It seems that there was an issue with netdata; I found a solution on Stack Exchange:

apt suddenly broke with error "No priority (or zero) specified for pin" (askubuntu.com)


Edit /etc/apt/preferences.d/80netdata and change the line:

    Code
    Priority: 1000

    to:

    Code
    Pin-Priority: 1000



That worked for me.
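
To double-check that the fix took, the standard apt commands can be re-run afterwards (nothing OMV-specific here):

Code
# Re-read the package lists; the pin error should no longer appear.
apt-get update

# Optionally list the pins apt now applies; the netdata entry should show priority 1000.
apt-cache policy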

Indeed, that is the image I am using.


    I guess this is the basis of my confusion.

I get that the app in the container normally uses 7878.

I changed the app setting for it to use 7879. (I have another instance of the same container on 7878: one for kids' movies, one for regular movies.)

I thought that would mean that, inside the running container, the app would then be listening only on 7879.

I thought that my docker compose file, with a port setting of 7879:7879, would mean that only connections to omv_ip:7879 get forwarded, to the container's docker-range ip:7879.


And thus I don't understand why 7878 is exposed on this image even though (I think!) there is no server inside that will listen on it.


If I understand it now, the folks at LinuxServer put something in their Dockerfile to expose 7878 specifically, and nothing that I have done here changes that fact. Looking at the radarr version of the Dockerfile they made, I see this:

    # ports and volumes

    EXPOSE 7878


So I think you answered my question...

    THIS is where the 7878 comes from ...


    Thank you so much!


Now I am going to research whether there is a way to override that Dockerfile-based EXPOSE with something in my docker compose, to clean it up a bit.
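
For reference, my current understanding, sketched as a minimal compose fragment (the service name and image tag are illustrative of my kids-movies instance, not copied from my actual file): the Dockerfile EXPOSE is only metadata, so as far as I can tell it cannot be removed from the compose side, and only the ports: mapping publishes anything on the host.

Code
services:
  radarr-kids:
    image: lscr.io/linuxserver/radarr:latest
    network_mode: bridge
    ports:
      # Publishes host port 7879 -> container port 7879.
      # The image's "EXPOSE 7878" remains listed as metadata but is not published.
      - "${radarrk_ui_port}:7879"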

    I have done that. I only have the IP for the new DNS listed there, yet OMV continues to connect to the old one.

The old DNS is at 10.0.0.42; the new DNS is at 10.0.0.254.



And strangely, the only domain it is querying is google.com, and it does so every 5 minutes... like some sort of connectivity test.


I thought perhaps the GUI just wasn't synced to the Debian system underneath, so I checked /etc/resolv.conf, and it had a 127.0.0.53 entry that didn't make sense to me. I searched, and that seems to mean it is using systemd-resolved as the name resolver service, so I checked around /etc/systemd/ for something with the old IP, but no luck.
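
For anyone else chasing the same thing: 127.0.0.53 is the systemd-resolved stub listener, and the upstream servers it actually forwards to can be checked like this (standard systemd commands, nothing OMV-specific):

Code
# Show the global and per-link DNS servers systemd-resolved is using.
resolvectl status

# The real upstream servers are also written to this file by systemd-resolved.
cat /run/systemd/resolve/resolv.conf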

Looking at the compose file in Services > Compose > Files, I have a single port open via this statement:


network_mode: bridge
ports:
  - ${radarrk_ui_port}:7879


The first part is a variable defined in the global environment variables:

radarrk_ui_port=7879

I also have the app internally set to port 7879.

So this should be a simple omv_ipaddress:7879 to get to the app, and indeed that works.


The piece I find "off" is under Services > Compose > Services:

forum.openmediavault.org/wsc/index.php?attachment/35917/

Where is 7878 coming from?
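
One way I could check this from the shell (the container name is just a placeholder for my instance) is to compare what the image itself declares as exposed against what compose actually publishes:

Code
# Ports baked into the image via the Dockerfile's EXPOSE (metadata only).
docker inspect --format '{{ json .Config.ExposedPorts }}' radarr-kids

# Ports actually published on the host through the compose "ports:" mapping.
docker port radarr-kids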

I am moving DNS from a container to a dedicated hardware appliance (a Pi with AdGuard Home).

I managed to track down and change all my DHCP scopes and static configs... except for my OMV system. It is the only client the DNS server at the old IP address still receives queries from.

I confirmed Network > Interfaces > Devices had been changed to the new IP address, and restarted the OMV system.


    Any thoughts on what else in OMV might need to be updated?


I have quite a few docker apps hosted, but I didn't see any of them with a locally configured DNS... they all just use the host's network stack.

Any ideas would be appreciated.

    I use OMV to host applications and TrueNAS to host storage.

I know this is not as simple as would be ideal, but I love the way the OMV docker compose plugin works. Similarly, I love the maturity of the ZFS storage options in TrueNAS. Apps need access to the storage, and NFS is usually the way to go for remote storage between Linux systems, so I am trying to remote mount TrueNAS NFS shares in OMV via the remote mount plugin.


On the TrueNAS side, the folder structure looks like this (stolen from best-practices guides):

/mnt/hddpool/data/ is the root, and I have three folders inside it:

    /mnt/hddpool/data/usenet

    /mnt/hddpool/data/torrents

    /mnt/hddpool/data/media

Inside each of these folders is a series of other folders, such as:

    /mnt/hddpool/data/media/tv

    /mnt/hddpool/data/media/music


The important folders here are:

    /mnt/hddpool/data is media:media drwxrwxr-x

    /mnt/hddpool/data/torrents is qbittorrent:media drwxrwxr-x

    /mnt/hddpool/data/usenet is sabnzbd:media drwxrwxr-x

    /mnt/hddpool/data/media is media:media drwxrwxr-x

    /mnt/hddpool/data/media/books is media:media drwxrwxr-x

    /mnt/hddpool/data/media/tv is sonarr:media drwxrwxr-x

/mnt/hddpool/data/media/music is lidarr:media drwxrwxr-x


I have NFS shares on TrueNAS created for ALL of these folders and, as far as I can see, created identically. I did this for troubleshooting and to get things working; in the end I want to claw it back to only what is actually required.



From the OMV side:

If I remote mount /mnt/hddpool/data, all I see is three empty folders (media, torrents, usenet).

If I remote mount /mnt/hddpool/data/torrents, I see the full folder structure, with all files and folders that exist there.

If I remote mount /mnt/hddpool/data/usenet, I see the full folder structure, with all files and folders that exist there.

If I remote mount /mnt/hddpool/data/media/books, I see the full folder structure, with all files and folders that exist there.

If I remote mount /mnt/hddpool/data/media/tv, I see the full folder structure, with all files and folders that exist there.

If I remote mount anything other than the /data root, I have access.


Since I use the same ID from the OMV side, I don't understand how this can be explained by permissions, to be honest.
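
For reference, reproducing the same behaviour manually from a shell looks roughly like this (the TrueNAS address and the temporary mount points are placeholders; the remote mount plugin does the equivalent under /srv/remotemount):

Code
# Placeholder TrueNAS IP; substitute the real one.
sudo mkdir -p /mnt/test-data /mnt/test-tv

# Mounting the root export: the subfolders appear, but they show up empty for me.
sudo mount -t nfs 192.168.1.100:/mnt/hddpool/data /mnt/test-data

# Mounting a deeper export directly: everything is visible as expected.
sudo mount -t nfs 192.168.1.100:/mnt/hddpool/data/media/tv /mnt/test-tv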

Indeed. Rebooting without any autostarting docker containers, the folder was not auto-recreated. I found a docker container that was referencing the remotely mounted data folder; removing that volume mapping cleared up the auto-creation.


Unfortunately, my hope was not realized; this didn't seem to be related to my problem. When I remote mount data it is a "partial mount": /srv/remotemount/data is created, and it is live, in that I see the folders there, and if I add a folder to it from a third system, it shows up in my remote mount's view. But that's it: I see folders in data, but nothing inside them. I thought it was permissions-related, but that doesn't make sense because, as I showed earlier, I can NFS mount those subfolders directly, and then they are populated. It is only when I remote mount the "data" folder that I see this issue.



In any case, this mystery is solved, and I think the ongoing "partial mount" issue should be a separate thread.


TL;DR: I had a data folder in the /srv/remotemount folder that was being automatically created, and it turned out that a docker container referenced it in a volume mount.
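
In case it helps anyone hunting for the same thing, this is roughly how the offending container can be tracked down (the compose file path is a placeholder for wherever yours are stored):

Code
# List every container's name plus its mount sources, then filter for the path.
docker ps -aq | xargs docker inspect --format \
  '{{ .Name }}{{ range .Mounts }} {{ .Source }}{{ end }}' | grep remotemount

# Or simply search the compose files themselves for the volume mapping.
grep -rn "remotemount/data" /path/to/compose/files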

root@omvdirect:~# ls -al /etc/systemd/system/srv-remotemount*
-rw-r--r-- 1 root root 347 Apr 14 18:51 /etc/systemd/system/srv-remotemount-Media_Backups.mount
-rw-r--r-- 1 root root 329 Apr 14 18:51 /etc/systemd/system/srv-remotemount-MoviesK.mount
-rw-r--r-- 1 root root 326 Apr 14 18:51 /etc/systemd/system/srv-remotemount-Movies.mount
-rw-r--r-- 1 root root 323 Apr 14 18:51 /etc/systemd/system/srv-remotemount-Music.mount
-rw-r--r-- 1 root root 326 Apr 14 18:51 /etc/systemd/system/srv-remotemount-Photos.mount
-rw-r--r-- 1 root root 328 Apr 14 18:52 /etc/systemd/system/srv-remotemount-tdarrcache.mount
-rw-r--r-- 1 root root 338 Apr 14 18:52 /etc/systemd/system/srv-remotemount-Test_Media.mount
-rw-r--r-- 1 root root 326 Apr 14 18:51 /etc/systemd/system/srv-remotemount-torrents.mount
-rw-r--r-- 1 root root 314 Apr 14 18:51 /etc/systemd/system/srv-remotemount-TV.mount
-rw-r--r-- 1 root root 332 Apr 14 18:51 /etc/systemd/system/srv-remotemount-TV_Shows.mount
-rw-r--r-- 1 root root 320 Apr 14 18:51 /etc/systemd/system/srv-remotemount-usenet.mount
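
Those are just the unit files on disk; whether they are actually loaded and mounted can be checked with standard systemd tooling:

Code
# Show the remote mount units systemd knows about and their current state.
systemctl list-units --all 'srv-remotemount-*.mount'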

I continue to troubleshoot some remote mount issues, but I noticed something that I think must be related.

Despite the fact that I do not have a remote mount with the name "data", I have a persistent /srv/remotemount/data folder that is somehow recreated on reboot.


What can I check to clean up / determine how and why this is happening?

OMV and its plugins are all updated to the latest versions.


I have a workaround: I created additional NFS shares (deeper into the folder structure) and have those correctly remote mounted in OMV, but I really would like to get to the bottom of this one and greatly simplify the configuration.

    Thanks for the ideas.


It occurred to me that Cloudflare might also allow for redirected email, and indeed it does on the free DNS account I use. I created an email address (on my domain) and forwarded it to my personal Gmail, and bingo, I had a "work" email address to use. With that I set up an account at smtp2go, authorized my full domain as a sender, and now I have an SMTP relay I can use in my homelab... tested via openmediavault!


    Thanks again.

    I appreciate another option!


Interesting... this requires a work email to get started, and I want to keep this outside my employment (ignoring the fact that I am a contractor and only work for myself at the moment :)).

I own my own domain, but I currently don't pay for email on it; I've been making do with Gmail for personal email so far. I previously had email redirection via my name registrar, but that stopped working when I moved from their DNS to Cloudflare DNS to support Let's Encrypt certificates.

Looking for suggestions / advice on how to handle notification email for my homelab. It seems most systems, appliances, devices, etc. support email notification, and I would like something a bit better than using my personal Gmail account. I read about using Postfix through SendGrid/Twilio, and that looks promising, but when I tried to spin up Postfix via the docker compose plugin on my OMV, I hit a port conflict. This led me to learn that OMV actually already has Postfix installed.


I guess I have three options: 1) configure the installed Postfix to relay through my SendGrid account via API and point my homelab stuff at OMV for SMTP; 2) host the dockerized Postfix on non-standard ports and "hope" all my stuff supports non-standard SMTP ports; 3) host the dockerized Postfix on non-standard ports, configure OMV's Postfix to use that instance, and point my stuff at the OMV instance.
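
For option 1, here is a rough sketch of the SMTP-relay variant (rather than the API). The hostname, port, and credentials below are placeholders, and since OMV normally manages Postfix through its Notification settings, hand-made changes like these may get overwritten, so the GUI is probably the better place to enter the relay details.

Code
# Point Postfix at the provider's relay (placeholder host/port).
postconf -e 'relayhost = [smtp.example-provider.com]:587'
postconf -e 'smtp_sasl_auth_enable = yes'
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf -e 'smtp_sasl_security_options = noanonymous'
postconf -e 'smtp_tls_security_level = encrypt'

# Store the relay credentials, hash them, and reload Postfix.
echo '[smtp.example-provider.com]:587 user:password' > /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd
chmod 600 /etc/postfix/sasl_passwd*
systemctl reload postfix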


    Any better ideas or advice is appreciated.

PC I am assuming that /srv/remotemount is the local mount location of that external NFS share. I had no intention of re-sharing it locally or over the network. It seems to me that this is the cleanest way to give OMV-hosted apps access to my media files stored elsewhere.