Posts by ChrisBuzz

    When you go into Configure DNS there is an option "Search domains". I have this set to my OMV domain (System -> Domain name), which is also the same in SMB/CIFS.

    Configure DNS in the Pi-hole settings? Sorry, I still don't know how to change that option.

    Did you do the dig tests at the end of Unbound's configuration? One should have failed, the other should have worked, i.e.:

    Code
    dig sigfail.verteiltesysteme.net @127.0.0.1 -p 53
    dig sigok.verteiltesysteme.net @127.0.0.1 -p 53

    In the above, the first command fails. The second produces an IP address. This confirms that Unbound is working.
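A scripted version of those two checks can make the result easier to read. This is a minimal sketch assuming Unbound listens on 127.0.0.1:53 (adjust `-p` if you followed a guide variant that uses another port); the `dig_status` helper name is mine, and the demo runs on a canned dig header line so it works even without a resolver:

```shell
# Pull the "status: XXX" field out of a dig response header.
dig_status() {
  awk -F'status: ' '/status:/ { split($2, a, ","); print a[1] }'
}

# Real checks (run these against your Unbound instance):
#   dig sigfail.verteiltesysteme.net @127.0.0.1 -p 53 | dig_status   # expect SERVFAIL
#   dig sigok.verteiltesysteme.net   @127.0.0.1 -p 53 | dig_status   # expect NOERROR

# Runnable demo on a canned dig header line:
printf ';; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 123\n' | dig_status   # prints SERVFAIL
```

SERVFAIL on the sigfail name is the expected outcome: it means DNSSEC validation rejected the deliberately broken signature.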

    Yes, I have done that, and it seemed to me that Unbound works in the background.
    But please have a look at the result yourself:


    When I look at my Pi-hole dashboard, I notice that part of the queries are already answered through Unbound (192.168.178.82). But some still go through other DNS servers:
    Bildschirmfoto 2019-11-03 um 14.19.13.png

    Unfortunately I still don't get Unbound to work after changing the DNS on my iOS devices to the Pi-hole address.


    I configured Unbound & Pi-hole as described in the following guide from crashtest:
    [How To] Install Pi-Hole in Docker: Update 02/25/19 - Adding Unbound, a Recursive DNS Server


    My current pi-hole DNS setup:


    If I activate OpenDNS and Cloudflare as upstream DNS servers, Pi-hole works without any problems.


    My unbound config file looks like the following:


    What I really don't understand is that the clients have internet access, but I can't load any page in Safari. Telegram, for example, is still able to send out text messages.


    It would be really cool to get this to work. I think some minor change is still missing, but I am really at the point where I have tried everything and don't know what else to do. :sleeping:


    I tried it with several other machines with the same result. Then I went to my router and decided to try some single-switch setting changes. The first modification fixed everything; see the screenshot below. I checked "Use DHCP". So now I have two fully functional Pi-holes. Should I just shut down one and keep it ready as a backup? Thanks for your suggestion to just statically change one client without going through the router, @flmaxey.
    Screen Shot 2019-03-12 at 11.33.44 AM.png


    I configured Pi-Hole & Unbound the same way you did and am facing the same issues with my WiFi clients (MacBook and iPhones).


    Have you only changed your WiFi settings to use your Pi-hole address and 0.0.0.0 as DNS servers, or have you changed anything else, such as the IPv6 settings?


    Thanks; I just want to learn from your long journey to a successful setup.

    It doesn't matter whether logrotate is working, or whether you have Plex or not (I don't think anyone suggested you did)...


    Those log entries suggest there's an issue with your SD card...


    There is a lot of fake storage media on the market. New SD cards, USB sticks, etc. should always be checked with H2testw, Etcher, or similar tools before use.

    Thanks, guys!
    So since yesterday there are no more logs like the ones before. I really don't know what is different now, but hopefully this topic is solved and won't come back in the future. :thumbup:
    I am using a brand new SanDisk Extreme 32GB ordered from Amazon. Is there really a chance of getting a fake one? 8|

    I am not using Plex, so it seems that can't be the reason for filling up my syslog and debug log files.


    Still don't get it! After I opened both files yesterday, no additional space was used. So it seems that logrotate is working correctly now?!


    Can someone explain to me how to check whether there is an issue with logrotate?


    Found something in another thread saying that a workaround is to use the following command in a cron job:

    Code
    find /var/log/ -type f -mtime 90 -exec rm -rf {} \';


    If I put this command in the shell I get the following, so it seems it's not working right:


    Code
    root@Netzwerkspeicher:~# find /var/log/ -type f -mtime 90 -exec rm -rf {} \';
    find: missing argument to `-exec'
    Try 'find --help' for more information.

    Is it only possible to execute it via cron?
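No, it runs the same way in the shell as in cron. The quoted command has two problems: the trailing `\';` should be `\;` (the stray quote is exactly why find reports "missing argument to -exec"), and `-mtime 90` matches only files modified exactly 90 days ago, while `+90` means "older than 90 days". Against the real logs the corrected command would be `find /var/log/ -type f -mtime +90 -exec rm -f {} \;`; here is a sketch demonstrated on a throwaway directory so nothing real gets deleted:

```shell
# Demo on a throwaway directory; the real target would be /var/log/.
demo=$(mktemp -d)
touch -d '100 days ago' "$demo/old.log"   # pretend this is a stale log
touch "$demo/new.log"                     # and this a fresh one

# +90 = strictly older than 90 days; \; correctly terminates -exec.
find "$demo" -type f -mtime +90 -exec rm -f {} \;

ls "$demo"    # only new.log remains
```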


    Hope it's not necessary to use a new SD card and start from scratch. But I agree with Adoby: I deactivated all the physical disk properties and everything is working fine so far. ;)

    I installed the image from sourceforge.net, and the flash memory plugin was already installed. My understanding was that I don't have to change anything there?!
    Isn't that correct? Do I have to change anything via fstab?


    fstab has the following content:

    Code
    UUID=d0da7bbe-e3af-4588-8715-aa5c4478eb88 / btrfs defaults,noatime,nodiratime,commit=600,compress=lzo 0 1
    UUID=5bab0a55-56f1-4443-8cac-297e1181425c /boot ext4 defaults,commit=600,errors=remount-ro 0 2
    tmpfs /tmp tmpfs defaults,nosuid 0 0
    # >>> [openmediavault]
    /dev/disk/by-label/Backupintern /srv/dev-disk-by-label-Backupintern xfs defaults,nofail,noexec,usrquota,grpquota 0 2
    /srv/dev-disk-by-label-Backupintern/Backupintern/Eigene\040Dateien/Christoph/30\040Back\040Ups/20\040ioBroker /export/BackUpioBroker none bind,nofail 0 0
    # <<< [openmediavault]

    How about taking a look inside the logs to see what is filling them? And then doing something about that.


    If you can figure out what is writing to the log files, and fix that, I suspect that you will find that Bob is your uncle. Or at least a very close friend.

    Done!


    The debug log has 24MB and contains a lot of entries like the following:

    Code
    Oct 24 22:52:00 Netzwerkspeicher kernel: [100557.897144] database(12390): WRITE block 3306944 on mmcblk1p2 (32 sectors)


    syslog.1 has 22MB and shows the same message:



    Unfortunately even uncle Bob can't help me with that.


    Why are there so many entries in the log files?
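Lines in that `name(pid): WRITE block ...` format typically come from the kernel's block-dump facility (the `vm.block_dump` sysctl on kernels of that era, which Armbian's monitoring can switch on); that attribution is an assumption you can verify with `sysctl vm.block_dump`. Either way, a quick pipeline shows which process names dominate the log. The `summarize` helper name is mine; the demo runs on canned lines in the same format as the excerpt above:

```shell
# Count log lines per writing process, assuming the
# "name(pid): WRITE block" format from the syslog excerpt.
summarize() {
  grep -oE '[[:alnum:]_.-]+\([0-9]+\): (WRITE|READ) block' |
    cut -d'(' -f1 | sort | uniq -c | sort -rn
}

# Demo on canned lines (real use: summarize < /var/log/syslog):
printf '%s\n' \
  'Oct 24 22:52:00 Netzwerkspeicher kernel: [100557.897144] database(12390): WRITE block 3306944 on mmcblk1p2 (32 sectors)' \
  'Oct 24 22:52:01 Netzwerkspeicher kernel: [100557.912001] database(12390): WRITE block 3306976 on mmcblk1p2 (32 sectors)' \
  'Oct 24 22:52:02 Netzwerkspeicher kernel: [100558.100001] monit(800): WRITE block 4000 on mmcblk1p2 (8 sectors)' \
  | summarize
```

The top entry of the output is the process to investigate; "database" here is most likely rrdcached/collectd statistics traffic rather than anything broken.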

    Hello,


    I tried to get an answer in an existing thread. Unfortunately the thread was already tagged as solved, so it seems that no one was reading it. I hope a new thread won't be seen as spam on my part.


    On my OMV 4.1.26 installation, /var/log is running out of space after a while. Available space is only 50M. When the space runs out, I receive error messages when I want to do a USB backup.


    So here is what I have found out so far after some debugging via the shell:


    Code
    root@Netzwerkspeicher:~# armbianmonitor -u
    System diagnosis information will now be uploaded to http://ix.io/1ZVr
    Please post the URL in the forum where you've been asked for.

    Disabling the armbian-ram-logging service in /etc/default/armbian-ramlog changes it to folder2ram. Will there be any issues in the future from disabling the log service?
    Is there any other workaround for this topic, i.e. how to switch from log2ram to folder2ram to avoid running out of space for /var/log?


    I temporarily solved the issue by increasing the space from 50M to 100M, but that doesn't solve the cause of it.


    It would be great if someone could help me with that! :thumbsup:
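For reference, the size increase mentioned above maps to a variable in /etc/default/armbian-ramlog. This is a sketch with variable names per Armbian's stock file; verify against your own copy before editing:

```shell
# /etc/default/armbian-ramlog (sketch)
ENABLED=true
SIZE=100M    # was 50M; the RAM allocation backing /var/log
```

After changing it, a reboot (or restarting the armbian-ramlog service) applies the new size.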

    Same issue over here.


    So far I have figured out that log2ram is running out of space for /var/log.


    After a reboot, df -h gives:

    folder2ram -status:

    Code
    /var/log is mounted
    /var/tmp is mounted
    /var/lib/openmediavault/rrd is mounted
    /var/spool is mounted
    /var/lib/rrdcached is mounted
    /var/lib/monit is mounted
    /var/lib/php is mounted
    /var/lib/netatalk/CNID is mounted
    /var/cache/samba is mounted
    Code
    root@Netzwerkspeicher:~# armbianmonitor -u
    System diagnosis information will now be uploaded to http://ix.io/1ZCQ
    Please post the URL in the forum where you've been asked for.

    Why is /var/log using log2ram instead of folder2ram, as shown in the folder2ram status?


    Disabling the armbian-ram-logging service in /etc/default/armbian-ramlog changes it to folder2ram. Will there be any issues in the future from disabling the log service?
    Is there any other workaround for this topic, i.e. how to switch from log2ram to folder2ram to avoid running out of space for /var/log?


    Sorry, it seems I am still a Linux noob. :(
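To see which mechanism actually backs /var/log at runtime, checking the mount itself is more direct than the service status. A small sketch; note that findmnt prints nothing if the path is not its own mountpoint:

```shell
# How full is the filesystem that holds /var/log?
df -h /var/log

# Show the backing source/type if /var/log is a separate mount
# (empty output means it is just part of the root filesystem).
findmnt -n -o SOURCE,FSTYPE,SIZE /var/log || true
```

On a log2ram/folder2ram setup the SOURCE column shows a tmpfs or zram device instead of the SD card partition.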