Posts by raws99

    Thank you, votdev!


    I had to make a minor correction; after that it was accepted by omv-salt. I will see tomorrow whether the email stops being sent.


    The above code runs into an error on my version of SaltStack:

    Code
    salt.exceptions.SaltInvocationError: 'contents' is an invalid keyword argument for 'file.append'


    Changing it to the following fixed it:


    Code
    custom_postfix_smtputf8_enable:
      file.append:
        - name: "/etc/postfix/main.cf"
        - text: smtputf8_enable = no

    Hi everyone,


    I get the following mail every day:


    I searched for this error and found a fix: https://unix.stackexchange.com…bounce-of-smtputf8-emails


    Because openmediavault manages postfix's `main.cf`, I cannot apply the fix directly in the config.


    I found the following thread touching on the same topic, but it seems it still describes the old mkconfig approach:

    custom postfix configuration for local email



    What's the best way of applying custom config entries for postfix with OMV 5?
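    For concreteness, the kind of thing I'm imagining (a sketch only; the deploy path, the filename, and the omv-salt invocation are assumptions on my part, not a documented procedure):

    Code
    # drop a custom state next to the stock postfix states and redeploy
    # (path and filename are assumptions)
    cat > /srv/salt/omv/deploy/postfix/99customsettings.sls <<'EOF'
    custom_postfix_smtputf8_enable:
      file.append:
        - name: "/etc/postfix/main.cf"
        - text: smtputf8_enable = no
    EOF
    omv-salt deploy run postfix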

    Duplicati performs de-duplication. It is available as a plugin and as a docker image. You can also install it from the CLI.

    I used duplicati before switching to restic. I found duplicati to be very unstable with large amounts of data; maybe the backend I used was unstable. I also like the snapshot feature restic uses: it splits the encrypted file into smaller chunks and checks each chunk against the already uploaded version, so when I only have minor changes in large files, it will not upload the whole file. Duplicati (or duplicity) will upload the whole file when only a few KB have changed. This saves a lot of bandwidth and, more importantly, traffic on B2 =)
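    For illustration, the incremental behavior looks roughly like this on the command line (a sketch; the bucket name "my-bucket" and the /data path are assumptions):

    Code
    # one-time repository setup
    restic -r b2:my-bucket:/backups init
    # the first run uploads all chunks of /data
    restic -r b2:my-bucket:/backups backup /data
    # later runs only upload chunks that changed since the last
    # snapshot, even inside large files
    restic -r b2:my-bucket:/backups backup /data
    restic -r b2:my-bucket:/backups snapshots   # list the snapshots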


    I will stick to restic for now, but will check if it supports hardlinking, so I can back up via rsnapshot and not rsync, as I prefer my local backup to be done by rsnapshot now (really like it..) :-)
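    For context, the local rsnapshot side would look roughly like this (a sketch; paths and retention values are assumptions):

    Code
    # excerpt from /etc/rsnapshot.conf (values are assumptions;
    # fields must be separated by tabs, not spaces)
    snapshot_root   /srv/backup/
    retain          daily   7
    retain          weekly  4
    backup          /srv/data/      localhost/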

    Looks like a good solution. I am wondering if this is the way to go for remote (cloud) backups as well? Hardlinking isn't supported by most cloud drives, so it would add up quite a lot of space, I guess?


    Currently I am using restic to send my rsync backup to the cloud. The rsync backup runs once a week from my main system to the backup system and is transferred from there to the cloud.
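    In outline, the weekly pipeline looks like this (a sketch; the hostname, paths, and repository name are assumptions):

    Code
    # on the main system, once a week:
    rsync -aH --delete /srv/data/ backuphost:/srv/backup/data/
    # then on the backup system, push the result to the cloud:
    restic -r b2:my-bucket:/backups backup /srv/backup/data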

    I found the issue. As @tkaiser pointed out, md RAIDs aren't the best. My RAID configuration was the bottleneck, resulting in high I/O whenever I turned my TV on or my smart home pushed multiple sensor readings.


    I switched to an additional SSD for all the docker stuff, so it is separated from my data. The data is now on one 6 TB disk formatted with EXT4. For the second disk I was thinking of using the rsync method to have the data duplicated on both disks.


    Any hints / ideas on how to implement the rsync method? Is this a good approach?
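    For concreteness, the kind of command I have in mind (a sketch; the disk labels are assumptions):

    Code
    # mirror disk1 onto disk2, e.g. from a nightly scheduled job
    # (mount points under /srv/dev-disk-by-label-* are assumptions)
    rsync -aHAX --delete \
        /srv/dev-disk-by-label-data1/ \
        /srv/dev-disk-by-label-data2/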


    @m4dm4x Hope you've fixed your issues, too.

    My /var/log/messages has a lot of those:



    and those:

    Great to see you have similar issues. Not great, but good to know there's someone else ;-) Since my system is currently clogged, I started investigating again..


    No high CPU usage, no processes eating RAM or CPU. BUT CPU waiting (iowait) is high, at around 24.
    Now, after 10 min of being clogged (docker containers are not processing), I noticed my light going off (which is controlled by my smart home..), so the system is "free" again; load drops immediately. I've done nothing but open top and iotop..


    top (clogged)


    Healthy top:


    How can the waiting be investigated further? I checked iotop (nothing special, not much writing..). I'll let iostat 120 run overnight to see if there is something useful in it..
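    For anyone following along, these are the kinds of checks I'm running to narrow down the iowait (a sketch; pidstat comes from the sysstat package):

    Code
    iostat -x 120            # per-device utilization and await times
    pidstat -d 5             # per-process disk I/O
    ps auxf | awk '$8=="D"'  # processes stuck in uninterruptible sleep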



    UPDATE:


    I ran the following overnight (no clogging this night..):

    Code
    while true; do date; ps auxf | awk '{if($8=="D") print $0;}'; sleep 30; done

    It caught some blocking processes (the rsync backup job in uninterruptible sleep) for about 15 min, but there was no message about high load this time, so that seems uncritical.


    iostat 120 was running and is showing some output like this: https://pastebin.com/g85TLKgj

    This is correct; iostat 120 gives me the output posted (added a new iostat, too).


    This is the output of iotop -oPa -d 2 (running for 30 min or so):



    It shows me a lot of writing from just the journaling service.. Could this be the reason for the constant traffic?



    iostat 120 from the last hour or so: https://pastebin.com/g885FkAc



    UPDATE:


    Following the high journaling I/O led me to mysql as the cause. If I follow the guide here: https://medium.com/@n3d4ti/i-o…l-import-data-a06d017a2ba I get it down to 0.X% and almost no traffic. But as pointed out in the post, this is not always a good setting for production. So what do you think? I'll leave it like this for 1-2 days to see if it stops the spikes.
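    For reference, the change is along these lines (a sketch; since the link above is truncated, the exact setting and config path are assumptions on my part):

    Code
    # append to the MySQL config (file path is an assumption)
    cat >> /etc/mysql/conf.d/low-io.cnf <<'EOF'
    [mysqld]
    # flush the InnoDB log ~once per second instead of on every commit;
    # trades up to ~1 s of transactions on a crash for far less write I/O
    innodb_flush_log_at_trx_commit = 0
    EOF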


    Also, I was thinking about my docker containers: some services log into SQLite databases which are stored on my RAID. Is it good practice to add another SSD to the system for the appdata folder? Since my RAID is constantly written to, it never sleeps..


    UPDATE 2:
    I've found that using the Nextcloud app on iOS to scroll through a bunch of images causes very high load on my system. Mostly the apache process spikes. I run the official docker image and will try to investigate further.

    Okay, today I got something new. I tried to access my Nextcloud remotely and got the following logs, plus a lot of emails telling me nginx (which runs the OMV GUI) crashed, etc.



    And after that I get around 400 lines of this


    And after that mysqld complains


    This is my `iostat -x` now with everything running:


    I now suspect docker is the bottleneck. Since I have all containers running on the default bridge, could that cause delays? Any recommended write-up I can check for such errors?
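    One thing I might try is moving the containers off the default bridge onto a user-defined network (a sketch; the network and container names are assumptions):

    Code
    docker network create appnet   # "appnet" is an assumed name
    # user-defined bridges also give containers DNS by name:
    docker run -d --network appnet --name db mariadb
    docker run -d --network appnet --name nextcloud nextcloud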

    Thanks for the hint. I think I understand the concept: I wrote a little check script that runs whenever high load is detected and gives me the iostat output, free memory, and process list of my system.
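    Roughly like this (a sketch; the threshold and log path are assumptions):

    Code
    #!/bin/sh
    # dump diagnostics when the 1-minute load average exceeds 8
    # (threshold and log path are assumptions); run from cron
    if awk '{exit !($1 > 8)}' /proc/loadavg; then
        { date; iostat -x; free -m; ps auxf; } >> /var/log/highload.log
    fi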


    I am confused by the spike of load > 100, which is why I mentioned it.


    Both disks are brand new, the SSD is also new; nothing is older than one month. But I will investigate whether the disks are somehow causing errors.

    Yeah, none of the docker services are available when the system "locks", but SSH is accessible (with the issues described above), and the WebUI seems to clear the lock from the system.


    The file system is ext4 on all disks, storage and system.
    The machine is running 24/7, no power saving on the machine itself.
    The disks have the following modes:
    Storage RAID1 Disk1: spindown
    Storage RAID1 Disk2: disabled (don't know why, maybe set this to spindown, too?)
    System Disk: disabled (SSD)


    Drives seem fine, what do you describe as "weird"? Temperature is okay, SMART is okay..


    EDIT: Added list of docker containers..

    Files

    • docker.jpg (66.75 kB)

    Hi,


    I have been reading here for a while; now I've run into a (for me) unsolvable problem..


    My system randomly hangs and becomes unresponsive (all docker containers / services are unreachable). SSH login works, but if I, for example, run "top", it hangs/freezes as well.



    I think (I haven't verified that this is always the case) I can make the system responsive again if I log in to the web interface (OMV GUI). Once logged in, the system returns to an idle state and remains at low load..


    So far I couldn't find a single cause for this; I checked the process logs, iotop, etc.


    Is it possible that the GUI will "lock" the system somehow? How can I get logs if the system hangs?


    I currently run an OMV backup with fsarchiver, which utilizes the CPU quite a bit but runs smoothly at a load of 2.2-2.4. I don't know if this is the cause of the initial high load and the freezing of other processes..


    My system:


    8GB RAM
    Intel Celeron 1.6Ghz Quad-Core
    2x6TB Data (SATA3)
    1x128GB SSD System (SATA3)


    OS


    No LSB modules are available.
    Distributor ID: Debian
    Description: Debian GNU/Linux 9.6 (stretch)
    Release: 9.6
    Codename: stretch


    OMV


    Release: 4.1.17-1
    Codename: Arrakis


    System


    Linux aries 4.18.0-0.bpo.1-amd64 #1 SMP Debian 4.18.6-1~bpo9+1 (2018-09-13) x86_64 GNU/Linux


    ps auxf (with normal load): https://pastebin.com/raw/PvWR8v80

    Files

    • img.jpg (52.54 kB)