    Posts by raws99

    I needed the same. I am running signal-cli-rest-api within node-red, so I can simply use an HTTP endpoint on node-red like this:


    Bash
    #!/bin/bash
    # Build the message body from OMV's notification variables
    content=$(echo -e "${OMV_NOTIFICATION_SUBJECT} on the ${OMV_NOTIFICATION_DATE}\n\n$(cat "${OMV_NOTIFICATION_MESSAGE_FILE}")")
    # POST it to the node-red HTTP-in endpoint
    curl -X POST -H "Content-Type: text/plain" --data "${content}" http://node-red.hostname.url/notify/omv
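
    To test it outside of OMV's notification chain, the environment variables can be faked (all values below are made up, and the script name is just a placeholder):

    Bash
    # Fake the variables OMV would normally set, then run the script
    export OMV_NOTIFICATION_SUBJECT="Test notification"
    export OMV_NOTIFICATION_DATE="$(date -R)"
    export OMV_NOTIFICATION_MESSAGE_FILE="/tmp/omv-test-message"
    echo "hello from OMV" > "${OMV_NOTIFICATION_MESSAGE_FILE}"
    ./notify-node-red.sh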



    Maybe this helps someone :)

    Got it!


    In /var/log/fail2ban.log I found that the date pattern wasn't recognized correctly, so the match was not counted.


    Code
    2021-03-19 23:15:16,514 fail2ban.filter         [24451]: WARNING Found a match for 'host [19/Mär/2021:23:15:16 +0000] "PASS (hidden)" [1.1.1.1] 530' but no valid date/time found for 'host [19/Mär/2021:23:15:16 +0000] "PASS (hidden)" [1.1.1.1] 530'. Please try setting a custom date pattern (see man page jail.conf(5)). If format is complex, please file a detailed issue on https://github.com/fail2ban/fail2ban/issues in order to get support for this format.
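
    The root cause seems to be the localized month name: "Mär" comes from the German locale, while fail2ban's date patterns expect English month abbreviations. Setting a custom date pattern in the filter would look roughly like this (a sketch only; %b still wouldn't match "Mär", which is why I switched the log source below):

    Code
    # inside the filter's [Definition] section; % has to be doubled
    # in fail2ban .conf files
    datepattern = %%d/%%b/%%Y:%%H:%%M:%%S %%z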


    I now use SystemLog (but had to define the logfile in OMV..)


    OMV Extended Options for FTP:

    Code
    # Record all logins
    UseReverseDNS off
    SystemLog          /var/log/proftpd/proftpd.log


    Filter


    Jail

    Code
    [proftd]
    enabled  = true
    port     = ftp,ftp-data,ftps,ftps-data
    filter   = proftpdneu
    logpath  = /var/log/proftpd/proftpd.log
    maxretry = 3
    action = iptables-allports[actname=proftpd,name=proftpd,protocol=all]
             iptables-allports[actname=proftpd-docker,name=proftpd-docker,protocol=all,chain=DOCKER]



    This finally gives me

    Code
    2021-03-20 12:43:31,590 fail2ban.filter         [11215]: INFO    [proftd] Found 1.1.1.1 - 2021-03-20 12:41:16
    2021-03-20 12:43:31,593 fail2ban.filter         [11215]: INFO    [proftd] Found 1.1.1.1  - 2021-03-20 12:42:23
    2021-03-20 12:43:31,595 fail2ban.filter         [11215]: INFO    [proftd] Found 1.1.1.1  - 2021-03-20 12:42:32
    2021-03-20 12:43:31,597 fail2ban.filter         [11215]: INFO    [proftd] Found 1.1.1.1  - 2021-03-20 12:42:37
    2021-03-20 12:43:31,599 fail2ban.filter         [11215]: INFO    [proftd] Found 1.1.1.1  - 2021-03-20 12:42:40
    2021-03-20 12:43:31,751 fail2ban.actions        [11215]: NOTICE  [proftd] Ban 1.1.1.1 

    dleidert thanks for the quick reply.


    How did you match it? If I try to match it, I don't get a result:


    My fail2ban version:

    Code
    root@aries:/etc/fail2ban# fail2ban-client --version
    Fail2Ban v0.10.2


    My filter looks simple


    I am more than happy to use the default logfile, but in standard OMV5 it's empty (/var/log/proftpd/proftpd.log); that's why I opted for the custom logfile.

    I am trying to get fail2ban to work with proftpd. I used the default config without success. I then tried this hint: https://stackoverflow.com/ques…/regexp-proftpd-auth-logs which SHOULD work. Testing it with https://regex101.com worked (replacing "<HOST>" with a catch-all pattern, and using "%a" instead of "%h" to get the IP address).


    log file

    Code
    hostname [19/Mär/2021:22:43:19 +0000] "PASS (hidden)" [111.111.111.111] 530

    filter

    Code
    failregex = \[<HOST>\]\s+530$


    Trying the filter with fail2ban-regex always returns 0 matches.
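
    For reference, this is the kind of invocation I'm testing with (a single sample line against the regex):

    Code
    fail2ban-regex 'hostname [19/Mär/2021:22:43:19 +0000] "PASS (hidden)" [111.111.111.111] 530' '\[<HOST>\]\s+530$'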


    I looked around the forum using fail2ban + proftpd as the search terms but wasn't very successful. Maybe others have faced the same issue?

    The latest version of OMV should have fixed this issue completely.

    I'm not understanding where or why you're trying to add this "with ssl" part...

    Sorry for the confusion, I am not adding anything directly to the file. I simply implemented the pull request manually (https://github.com/openmediava…f934f24e199eebe98b381ae53), which, if you read carefully, adds "with ssl" ("protocol ftp with ssl", to be 100% correct) to the monit config if TLS is activated.


    It wasn't in there, even though I am running on 5.6.1..


    Disabled monit watching for ftp for now.. Will check back with the next update.

    cubemin thanks for pointing this out, but this is still not successful on my system.


    Side note: I am on OMV 5.6.1-1

    Code
    user@aries:~$ sudo apt list openmediavault
    openmediavault/usul,usul,now 5.6.1-1 all  [installed]


    I edited the files, did a deploy, and can see that the monit config was adapted correctly and the "with ssl" statement was added.

    Monit is still complaining about that ssl error.

    Code
    Mar  2 12:39:36 aries monit[12266]: 'proftpd' failed protocol test [FTP] at [localhost]:2221 [TCP/IP TLS] -- SSL connection error: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
    Mar  2 12:39:36 aries monit[12266]: 'proftpd' trying to restart
    Mar  2 12:39:36 aries monit[12266]: 'proftpd' stop: '/etc/init.d/proftpd stop'
    Mar  2 12:39:36 aries systemd[1]: Stopping LSB: Starts ProFTPD daemon...
    Mar  2 12:39:36 aries proftpd[13294]: aries.local - ProFTPD killed (signal 15)
    Mar  2 12:39:36 aries proftpd[13294]: aries.local - ProFTPD 1.3.6 standalone mode SHUTDOWN
    Mar  2 12:39:36 aries proftpd[13529]: Stopping ftp server: proftpd.
    Mar  2 12:39:36 aries systemd[1]: proftpd.service: Succeeded.


    monit config

    Code
    check process proftpd with pidfile /run/proftpd.pid
        start program = "/etc/init.d/proftpd restart"
        stop program  = "/etc/init.d/proftpd stop"
        mode active
        # Do not specify a protocol here, so Monit will use a default connection test
        # where we do not need to take care about whether SSL/TLS is enabled or not.
        # BACKUP: if failed port 2221 for 3 cycles then restart
        if failed port 2221 protocol ftp with ssl for 3 cycles then restart
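
    From what I've read, the "wrong version number" error typically shows up when a client opens a raw TLS connection to a port that expects explicit FTPS (plain FTP first, then an upgrade via AUTH TLS). That can be checked by hand (a sketch, assuming explicit FTPS on port 2221 as in the config above):

    Bash
    # Explicit FTPS handshake: connect in plain text, then upgrade
    # via AUTH TLS -- this should succeed where a raw TLS connect
    # produces the "wrong version number" error from the monit log
    openssl s_client -connect localhost:2221 -starttls ftp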


    I tried to look for a more generic solution with proftpd, but could only find posts dating back to 2006 that suggest forcing a specific TLS version.


    Has anyone fixed it just by adding the above (with ssl) to monit? Anyone else having the same issue?

    Thank you votdev!


    I had to make a minor correction; after that it was accepted by omv-salt. I will see tomorrow if the emails stop being sent.


    The above code runs into an error on my version of SaltStack:

    Code
    salt.exceptions.SaltInvocationError: 'contents' is an invalid keyword argument for 'file.append'


    Changing it to the following fixed it:


    Code
    custom_postfix_smtputf8_enable:
      file.append:
        - name: "/etc/postfix/main.cf"
        - text: smtputf8_enable = no
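
    Afterwards I re-deployed the postfix config so the appended line actually ends up in /etc/postfix/main.cf (assuming the postfix deploy module is the right one here):

    Code
    sudo omv-salt deploy run postfix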

    Hi everyone,


    I get the following mail every day:


    I searched for this error and found a fix: https://unix.stackexchange.com…bounce-of-smtputf8-emails


    Because openmediavault is managing the `main.cf` of postfix, I cannot apply the fix directly in the config.


    I found the following thread touching on the same topic, but it seems to still be based on the old mkconfig mechanism:

    custom postfix configuration for local email



    What's the best way of applying custom config entries for postfix with OMV 5?

    Duplicati performs de-duplication. It is available as a plugin and as a Docker image. You can also install it from the CLI.

    I used Duplicati before switching to restic. I found Duplicati to be very unstable with large amounts of data; maybe the backend I used was the unstable part.. I also like the way restic builds its snapshots: it splits the encrypted data into smaller chunks and checks each chunk against what has already been uploaded, so when I only have minor changes in large files, it will not upload the whole file again. Duplicati (or duplicity) will upload the whole file when only a few KB changed. This saves a lot of bandwidth and, more importantly, traffic on B2 =)


    I will stick with restic for now, but will check if it supports hardlinks, so I can back up via rsnapshot instead of rsync, as I now prefer my local backup to be done by rsnapshot (really like it..) :)

    Looks like a good solution. I am wondering if this is the way to go for remote (cloud) backups as well? Hardlinks aren't supported by most cloud drives, so it would use up quite a lot of space, I guess?


    Currently I am using restic to send my rsync backup over to the cloud. The rsync backup runs once a week from my main system to the backup system and is transferred from there to the cloud.
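
    For illustration, the cloud leg boils down to something like this (a sketch; bucket name, paths and password file are made up):

    Bash
    # B2 credentials and repository -- all values are placeholders
    export B2_ACCOUNT_ID="000xxxxxxxxxxxx"
    export B2_ACCOUNT_KEY="K000xxxxxxxxxxxxxxxxxxx"
    export RESTIC_REPOSITORY="b2:my-bucket:omv"
    export RESTIC_PASSWORD_FILE="/root/.restic-password"
    # restic only uploads chunks it hasn't seen yet, so small changes
    # in big files don't re-upload the whole file
    restic backup /srv/backup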

    I found the issue. As @tkaiser pointed out, md RAIDs aren't the best. My RAID configuration was the bottleneck, resulting in high I/O whenever I turned my TV on or my smart home pushed multiple sensor readings.


    I switched to an additional SSD for all the Docker stuff, so this is separated from my data. Data is now on one 6 TB disk, formatted with EXT4. For the second disk I was thinking of using the rsync method to have the data duplicated on both disks.


    Any hints / ideas on how to implement the rsync method? Is this a good approach?
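
    Something like this is what I have in mind, run as a scheduled job (a sketch; the mount points are made up):

    Bash
    # Mirror the data disk onto the second disk; -aH keeps
    # permissions and hardlinks, --delete makes it an exact mirror
    rsync -aH --delete /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-mirror/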


    @m4dm4x Hope you've fixed your issues, too.

    My /var/log/messages has a lot of those:



    and those:

    Great to see you have similar issues. Not great, but good to know there's someone else ;) Since my system is currently clogged, I started investigating again..


    No high CPU usage, no processes hogging RAM or CPU. BUT the CPU wait (iowait) is high, at around 24.
    Now, after 10 minutes of being clogged (the docker containers are not processing), I noticed my light going off (which is controlled by my smart home..). So the system is "free" again and the load drops immediately. I've done nothing but open top or iotop..


    top (clogged)


    Healthy top:


    How can the waiting be investigated further? I checked iotop (nothing special, not much writing..). I'll let iostat 120 run overnight to see if there is something useful in it..



    UPDATE:


    I ran the following overnight (no clogging this night..):

    Code
    while true; do date; ps auxf | awk '{if($8=="D") print $0;}'; sleep 30; done

    It caught some processes blocked in uninterruptible sleep (the rsync backup job) for about 15 minutes, but there was no message about high load this time, so that seems uncritical.


    iostat 120 was running and shows output like this: https://pastebin.com/g85TLKgj

    This is correct; iostat 120 gives me the output posted (added a new iostat run, too).


    This is the output of iotop -oPa -d 2 (running for 30 min or so)



    It shows me a lot of writing coming from just the journaling service.. Could this be the reason for the constant traffic?



    iostat 120 from the last hour or so: https://pastebin.com/g885FkAc



    UPDATE:


    Tracking down the journaling service's high I/O showed mysql to be the cause. If I follow the guide here: https://medium.com/@n3d4ti/i-o…l-import-data-a06d017a2ba I get it down to 0.X% and almost no traffic. But as pointed out in the post, this is not always a good setting for production. So what do you think? I'll leave it off for 1-2 days to see if it stops the spikes.
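
    For reference, the setting in question is InnoDB's flush behaviour; roughly like this (a sketch; the exact file and value may differ from the guide):

    Code
    # hypothetical drop-in file, e.g. /etc/mysql/conf.d/low-io.cnf
    [mysqld]
    # flush the InnoDB log once per second instead of on every
    # commit; trades up to ~1s of transactions on a crash for far
    # less I/O (0 turns flushing down even further)
    innodb_flush_log_at_trx_commit = 2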


    Also, I was thinking about my docker containers having services that log into SQLite databases stored on my RAID. Is it good practice to add another SSD to the system for the appdata folder? Since my RAID is constantly being written to, it never sleeps..


    UPDATE 2:
    I've found that using the Nextcloud app on iOS to scroll through a bunch of images causes very high load on my system. Mostly the apache process spikes. I run the official Docker image and will try to investigate further.

    Okay, today I got something new. I tried to access my Nextcloud remotely and got the following logs, plus a lot of emails telling me nginx (which runs the OMV GUI) crashed, etc.



    And after that I get around 400 lines of this


    And after that mysqld complains


    This is my `iostat -x` now with everything running:


    I now suspect Docker is the bottleneck. Since I have all containers running on the default bridge, could that cause delays? Any recommended write-ups I can check for such errors?
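
    Next I'll try to catch a per-container snapshot while it happens, e.g.:

    Code
    # one-shot listing of CPU, memory, network and block I/O per
    # container, to narrow down which one causes the spikes
    docker stats --no-stream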