Posts by erbsenzaehler

    My guess, based on the compose settings and a quick look at the source code (here), is that what you see is the result of caching. It makes browsing smoother but obviously can have side effects. You could try reducing the caching time in the settings to 0 and trying again. The default is only 60 seconds, so you must have been very quick with what you did.

    Probably #3: Log in as the correct user in the web interface.

    It is now trying to access the home directory because you ran onedrive from the command line without all the extra configuration that OMV adds via command-line arguments and a dedicated configuration file. That configuration file resides in /var/cache/onedrive/config, is managed by OMV, and includes all the options you set. If you test onedrive manually, pass --confdir=/var/cache/onedrive/ and run it as the same user the service uses; otherwise it falls back to the default config directory in your home.


    The template (sort of) for the systemd service looks like this. Do not simply copy and paste something from it and execute it on the command line - the {{ ... }} placeholders only get filled in when the plugin renders the file.

    Code
    [Unit]
    RequiresMountsFor="{{ sf_mnt_path }}"
    
    [Service]
    ExecStart=
    ExecStart=/usr/bin/onedrive {{ systemd_execstart_args }} --monitor --confdir=/var/cache/onedrive/
    User={{ config.username }}

    You can use the scripts plugin or a scheduled task in OMV to execute a script once per day that does the deletion. The script you have to write yourself; ChatGPT, Gemini etc. can show you what to do. There is documentation for the scripts plugin that you should read carefully (here). Test whatever you write carefully so you won't lose data you still want.


    Prompt with something like this:

    "Create a Python script that deletes files from directory A that are older than 8 days. Include a command line argument for a dry run."


    I hope this helps - if not, then we can quickly create a script for you.

    Despite the file numbering, your code runs first and is then overwritten by the plugin's code. I don't know why saltstack executes them in what seems like reverse order.

    I took a quick look at the saltstack files and compared them to what is in OMV - a sort is missing in the compose plugin's loop. Might that be the reason? (A possible fix is sketched after the two blocks below.)


    Code: from default.sls in compose plugin
    {% set dirpath = '/srv/salt' | path_join(tpldir) %}
    
    include:
    {% for file in salt['file.readdir'](dirpath) %}
    {% if file not in ('.', '..', 'init.sls', 'default.sls') %}
    {% if file.endswith('.sls') %}
      - .{{ file | replace('.sls', '') }}
    {% endif %}
    {% endif %}
    {% endfor %}


    Code: from omv cron default.sls
    {% set dirpath = '/srv/salt' | path_join(tpldir) %}
    
    include:
    {% for file in salt['file.readdir'](dirpath) | sort %}
    {% if file | regex_match('^(\d+.+).sls$', ignorecase=True) %}
      - .{{ file | replace('.sls', '') }}
    {% endif %}
    {% endfor %}
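
    If the missing sort really is the cause, the fix would presumably be the same filter the cron state uses - an untested guess on my part:

    Code
    {# untested guess: identical to the compose plugin's default.sls,
       with only the | sort filter added to the loop #}
    {% set dirpath = '/srv/salt' | path_join(tpldir) %}

    include:
    {% for file in salt['file.readdir'](dirpath) | sort %}
    {% if file not in ('.', '..', 'init.sls', 'default.sls') %}
    {% if file.endswith('.sls') %}
      - .{{ file | replace('.sls', '') }}
    {% endif %}
    {% endif %}
    {% endfor %}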

    You could write a Python or bash script to copy the content from dir a to dir b and from dir b to dir c based on creation date thresholds (15 and 30 days). Using an LLM (ChatGPT, Gemini, ...) is a good starting point. Then you can use either the scripts plugin or the default scheduled task in OMV to have the script run once or multiple times per day.


    No guarantee that the following code will work - it's the kind of thing Gemini gives you. The paths in it are placeholders, and it goes by each file's modification time, since Linux doesn't reliably store a creation date. Best would be to write your own prompt and alter the script the way you need.
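
    Code
    #!/usr/bin/env python3
    # Untested sketch - the three paths are placeholders, adjust them.
    # Moves files from DIR_A to DIR_B after 15 days and from DIR_B to
    # DIR_C after 30 days, based on modification time (Linux does not
    # reliably expose a creation date).
    import shutil
    import time
    from pathlib import Path

    DIR_A = Path("/srv/path/to/dir_a")
    DIR_B = Path("/srv/path/to/dir_b")
    DIR_C = Path("/srv/path/to/dir_c")

    def move_older_than(src, dst, days):
        cutoff = time.time() - days * 86400
        dst.mkdir(parents=True, exist_ok=True)
        for entry in src.iterdir():
            if entry.is_file() and entry.stat().st_mtime < cutoff:
                print(f"moving {entry} -> {dst}")
                shutil.move(str(entry), str(dst / entry.name))

    if __name__ == "__main__":
        # b -> c first, so files that have just arrived in dir b
        # don't get pushed straight on to dir c in the same run
        move_older_than(DIR_B, DIR_C, 30)
        move_older_than(DIR_A, DIR_B, 15)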

    It is highly likely that the next time you add another script, the files you changed outside of the plugin (and therefore outside its database) will get overwritten by the content that is in the database!

    If you want to make changes outside of the OMV UI, then maybe take a look at the scheduled tasks section of base OMV.


    I haven't done much with these parts of OMV but I guess that would be the way to go (correct me if I am wrong):

    manage script in OMV UI -> OMV Scripts Plugin

    manage script outside of OMV UI -> OMV Scheduled Task

    A bug was fixed, and that's why it is happening now - it's the behaviour that should have been there all along. There was a thread a few days ago that mentioned the bug, and it was fixed shortly after. If you show your compose file (without any secrets), people can tell you whether the bug fix is the cause of what you're seeing.


    I don't think the read-only flag has anything to do with the logic of the compose backup.

    In the section "Customize the compose file" the /config directory gets put below the appdata path.


    Code
        volumes:
          - ${PATH_TO_APPDATA}/jellyfin/config:/config   # See Comment 4
          - CHANGE_TO_COMPOSE_DATA_PATH/media:/media   # See Comment 4
    Quote

    In the first line we are mapping the /config folder of the jellyfin container to a folder on our system. The /config folder is the one that contains the jellyfin configuration files, the database, users and passwords, plugins, etc. We want this folder to be located on a drive with fast access speed, so we map it to our /appdata folder that we have configured on a fast disk, ideal for managing a large database. A fast disk will allow Jellyfin to quickly serve movie covers etc. to the television, and everything will be better.

    This might be the same problem I had! I had the volume mapped straight to the service's base appdata directory in the docker compose file - not a good idea.


    E.g. for heimdall:

    Do:
        volumes:
          - ${PATH_TO_APPDATA}/heimdall/config:/config

    Don't:
        volumes:
          - ${PATH_TO_APPDATA}/heimdall:/config



    -> the "Don't" mapping was my problem recently



    It is intended that the root directory for a new service is owned by root (or the user defined in the plugin). Whenever a new file/compose/service is added in the compose files section, saltstack runs and manages the directories again, meaning it creates them if missing and resets their ownership to root. After adding a new file, the apply changes button appears a bit late; if you skip it and click edit again, somehow the apply changes prompt doesn't come back. That's what I did, on top of the wrong volume mapping.


    2nd edit: I am 99.99% sure! The steps that worked:
    1. Down the service.
    2. Create the subdirectory.
    3. Move the files into it (but don't move compose.override.yml, service-name.yml, service-name.env).
    4. Edit the compose volume path to point at the subdirectory.
    5. Restart.

    It seems to be working. Thanks everyone for the input and help!


    So for anyone having the same problem: don't map heimdall's config volume directly to heimdall's base appdata directory. Do it like the following and it will work.


    Code
        volumes:
          - ${PATH_TO_APPDATA}/heimdall/config:/config

    PUID and PGID are for my user appuser and have the correct settings (as far as I know).

    Code: from global env
    APPUSER_PUID=1001
    APPUSER_PGID=100

    The logrotate.status file gets created at some point after the container is running, probably when the service hits the error. I had reset and deleted everything beforehand and did not change any file permissions.


    I will try having a subfolder /config in the appdata/heimdall directory; maybe that helps in this case. I will report back in a few days if it worked. I experimented a bit with a second heimdall container and forgot to accept the changes in the yellow bar. The heimdall directory then was owned not by root but by appuser. I did not see the yellow bar and could not apply the changes afterwards. It might be that a subfolder like /config is required for the heimdall image.

    Well, to top it off, I just had my brother send me screenshots of his OMV install and the heimdall container, which I configured for him. Exactly the same compose file, all folders with exactly the same permissions - and he has no error...


    I used the calibre-web example from the plugin, where it is defined similarly.

    Is your heimdall directory in appdata? I mean the directory where the compose file is located.


    It should actually be placed in the data directory.

    Yes. It's in appdata, as seen in the tree command output. I created it the same way as described in the omv-extras example on how to configure jellyfin; there it's also put in the appdata folder. Both my calibre-web and the jellyfin config are set up the same way and they work without a problem.


    I don't get it...


    I am not sure that is the correct answer. I actually did that some time ago and it seemed to work at first. BUT that folder is the root of the heimdall appdata, and it looks the same as for calibre-web and jellyfin - and those work fine. Additionally, it's the permission that the compose plugin sets by default, and it gets reset whenever the saltstack files are run again (at least that's what I understand from here).


    Code
    appdata# ll
    total 16
    drwx------ 2 root root 4096 Mar 30 12:08 calibre-web
    -rw------- 1 root root 1145 Apr  2 22:24 global.env
    drwx------ 7 root root 4096 Apr  3 22:51 heimdall
    drwx------ 3 root root 4096 Mar 11 11:04 jellyfin