Posts by auanasgheps

Hi, I am hijacking this thread just to make ryecoaaron aware that I've found another issue related to the flashmemory plugin:


It's just an alert; the service doesn't get stuck like postfix does, but I wanted to make you aware.

I meant the docker storage field on the Settings tab in the Compose plugin. If you leave it blank, the plugin won't change daemon.json. Then you can just manually edit daemon.json instead of trying to add SaltStack code.

Alright, I can go with this approach!
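For anyone following along, the manual edit ends up looking something like this. The `tls*`, `hosts` and `data-root` keys are documented dockerd options, but the cert paths and directories below are placeholders, not my real config:

```json
{
  "data-root": "/srv/nvme/docker",
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem",
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"]
}
```

One caveat: on Debian the packaged docker.service already passes `-H fd://` on the command line, so adding `hosts` to daemon.json conflicts with it unless you also override the systemd unit.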


The salt code itself is working, but I'm sure I'm doing something wrong, since the Compose plugin isn't applying it (although no errors are displayed right now).

    Unless you leave the docker storage blank. Then the plugin/salt doesn't touch the daemon.json file.


    If your code posted is the code you are using, the formatting is causing the problem.

And yes, the docker storage has been added to the Compose config, since I'm using an NVMe for Docker but the system runs from a slow thumb drive.


The code I posted is exactly the one I'm using, except for the real paths to the cert files.

    I've tested it locally and it works if I execute


    Code
    salt-call --local state.sls omv.deploy.compose.60docker-tls
    systemctl restart docker.service


    So what formatting should I check?

EDIT: whoops, the copy-paste didn't work correctly, let me fix that. The code is now updated with the actual one I'm using.

    Hi,

    I'd like to add TLS config to the Docker daemon, because I have a Docker container that uses such API to get some data.


The Compose plugin doesn't have this option, and it overwrites any configuration in /etc/docker/daemon.json, as expected.


    I'm aware it's an advanced config so I'm not asking to expose it in the GUI.


    I'm trying to add it via a Salt config, but I'm doing something wrong.


    I created the file /srv/salt/omv/deploy/compose/60docker-tls.sls

    With this content:


The file is picked up by the Compose plugin during the save process: earlier on I was using wrong syntax and it threw an error.
Now it doesn't error out, but it simply doesn't apply the configuration.

    Can you help on this?
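For comparison, here is a generic sketch (not the actual file I posted above) of the kind of state that merges TLS keys into daemon.json and restarts docker when the file changes. The state names, `merge_if_exists` approach, and cert paths are my assumptions, not confirmed OMV conventions:

```yaml
# Generic sketch, not the file from this thread: merge TLS keys into
# /etc/docker/daemon.json and restart docker when the file changes.
docker_tls_config:
  file.serialize:
    - name: /etc/docker/daemon.json
    - formatter: json
    - merge_if_exists: True
    - dataset:
        tlsverify: true
        tlscacert: /etc/docker/certs/ca.pem
        tlscert: /etc/docker/certs/server-cert.pem
        tlskey: /etc/docker/certs/server-key.pem

docker_restart_on_tls_change:
  service.running:
    - name: docker
    - watch:
      - file: docker_tls_config
```

Note the ordering matters here: since the plugin writes daemon.json itself, a state like this only sticks if it runs after the plugin's own states (hence the `60` prefix in the filename).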

    Hi,

A few weeks ago I did a fresh install of OMV 7 on my main system. Since then I have been receiving nginx alerts at every startup.

There are two of them, always delivered together:


    So it seems that it recovers, but I'd like to fix the issue.


    This issue is very similar to this thread, but it's not the same.


I am running the flashmemory plugin, which is causing issues with postfix (apparently fixed with a delayed restart of the service).


    OMV version is: 7.7.7-1


    Here's my systemd-analyze blame output:


Sorry for the spam, but the solution unfortunately is not reliable: I can't get postfix to work consistently.
I've done another reboot and it's stuck:


    Hi, I made the changes you recommended.

I rebooted and double-checked that they have been applied, but my Postfix is still a mess:




EDIT: I've run systemd-analyze blame and these two services take more than 30 seconds to start up:


    33.489s openmediavault-issue.service

    33.217s folder2ram_startup.service


I have no idea what's going on. Should I open a separate discussion?
I'll try increasing your recommendation to 40 seconds.

EDIT2: 40 seconds is enough for my system!
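For future readers: a startup delay like this can be expressed as a systemd drop-in. This is only a sketch of one way to do it, not necessarily the exact change recommended earlier in the thread; the unit name and path may differ on your install, and 40 is just the value that worked for my hardware:

```ini
# /etc/systemd/system/postfix@.service.d/delay.conf (path is an example)
[Service]
ExecStartPre=/bin/sleep 40
```

Then run `systemctl daemon-reload` and reboot to verify.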

    Hi, I just upgraded (reinstalled) to OMV7 and I'm seeing this issue for the first time ever.

    Using flashmemory plugin.


This is what I am getting in the logs:

    Code
    May 16 12:45:15 nas postfix/smtpd[4601]: warning: connect #3 to subsystem private/proxymap: Connection refused
    May 16 12:45:25 nas postfix/smtpd[4601]: warning: connect #4 to subsystem private/proxymap: Connection refused
    May 16 12:45:35 nas postfix/smtpd[4601]: warning: connect #5 to subsystem private/proxymap: Connection refused
    May 16 12:45:45 nas postfix/smtpd[4601]: warning: connect #6 to subsystem private/proxymap: Connection refused
    May 16 12:45:55 nas postfix/smtpd[4601]: warning: connect #7 to subsystem private/proxymap: Connection refused



Emails are mostly not being delivered, though sometimes they get through.


    What's the status of this issue?

For the record: the feature in my script is called "delayed scrub" but does exactly the same thing, because it is supposed to run daily. I know it's lazy, but it works.


I wanted to help get my script integrated into OMV, but I can barely find the time to maintain my own project. It's a busy year at work. Maybe towards the end of the year we could start working on this.


    I'll open an issue/discussion on GitHub to properly track it when I'm ready.

With a USB stick you should disable the swap partition and, if possible, move it to a "swap file" on an SSD if you have one. That's what I've done.
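For reference, creating a swap file on the SSD goes roughly like this; the size and path below are placeholders, and everything needs root:

```shell
# Sketch with placeholder size/path: create and enable a 2G swap file on the SSD.
fallocate -l 2G /srv/ssd/swapfile    # on some filesystems (e.g. btrfs) use dd instead
chmod 600 /srv/ssd/swapfile          # swap files must not be world-readable
mkswap /srv/ssd/swapfile             # format it as swap
swapon /srv/ssd/swapfile             # enable it now
echo '/srv/ssd/swapfile none swap sw 0 0' >> /etc/fstab   # enable it at boot
```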

    "Failed to suspend system. System resumed again: No space left on device" comes from, it refers to the missing swap partition or the space on the boot partition itself?

    It could be either a missing swap partition or a small swap partition.

I don't use hibernation, just standby (suspend to RAM).

    Hi all,


From time to time I need to execute long jobs, and I need to disable the shutdown tasks that I configured in OMV via Power Management > Scheduled Tasks.

    Currently this step is manual and I want to automate it.


    I want to know if there's a proper way to interact with the OMV configuration.

My goal is to know how to properly disable (and later re-enable) the jobs and save the configuration, all with shell commands.


I've found the command to apply the changes here, but I don't know how to correctly make them.
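What I've pieced together so far, heavily hedged: OMV ships `omv-confdbadm` for reading and writing the config database. The datamodel id below is my guess for the Power Management scheduled jobs (it resembles the cron file name, but verify it on your system before relying on it):

```shell
# Guessed datamodel id -- verify it exists on your install first.
omv-confdbadm read conf.system.powermngmt.scheduledjob
```

In principle you would then flip the `enable` flag of the returned job with `omv-confdbadm update` and re-deploy, but I haven't confirmed the exact invocation.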

    These commands do not return anything except the last one:

    Code
    root@nas:~# sudo grep '0 0' /etc/crontab /etc/cron.d/* /var/spool/cron/crontabs/*
    /etc/cron.d/openmediavault-powermngmt:40 0 * * 1,2,3,4,5 root systemctl poweroff >/dev/null 2>&1

But that's the job that turns off my NAS at 00:40 every working day.
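(For clarity: in a cron line the first five fields are minute, hour, day-of-month, month, and day-of-week, so that entry means 00:40 on Monday through Friday. Recreating the matched line locally to decode it:)

```shell
# Decode the first cron fields of the matched line (recreated for illustration).
line='40 0 * * 1,2,3,4,5 root systemctl poweroff'
echo "$line" | awk '{print "minute=" $1, "hour=" $2, "dow=" $5}'
# -> minute=40 hour=0 dow=1,2,3,4,5  (00:40, Monday-Friday)
```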

    Do you have any rsync jobs or scheduled jobs?

The disk that is being woken up at midnight is my SnapRAID parity disk, which is ONLY USED by SnapRAID, and the SnapRAID job starts during the day, not at midnight. Nothing except SnapRAID and OMV (which mounts the drive's partition) is aware of or uses that disk.

    Hi all,


    I noticed that every day at midnight my HDDs are being woken up, and this is being done by OMV itself.


    I'm 100% sure there's no Docker App or other service which is waking up drives.


    My current setup is:

    - USB Thumb drive where OMV6 is installed

    - SSD NVMe for Docker Apps

    - 1 HDD for Data, 1 HDD for Parity (SnapRAID)


My Data HDD is always running; my Parity drive is spun down because it is only used by SnapRAID once a day. It's the drive being woken up at 00:00 every day.


I know OMV performs some cleanup activities at midnight, like logrotate or SMB recycle bin cleanup, but these activities do not involve my Parity drive.


Can somebody shed some light on this behaviour? In my opinion it needs to be addressed.
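One way to see what touches the drive at midnight is fatrace, which logs file accesses per process. A sketch (the mount point is a placeholder; needs root, run shortly before 00:00):

```shell
# Sketch: log all file accesses on the parity drive's mount for 10 minutes
# around midnight. The mount point below is a placeholder.
cd /srv/dev-disk-by-uuid-parity        # fatrace -c limits output to the current mount
fatrace -c -t -s 600 -o /root/midnight-access.log
```

The resulting log should name the process that touches the disk at 00:00.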