Posts by thesorcerer

    No, that is not correct. That file stores environment variables and their settings.

    But there is no mention of btrfs scrub in that file. I need to disable it, since it interferes with my current setup.


    Where are:


    Code
    OMV_BTRFS_SCRUB_ENABLED 
    OMV_BTRFS_SCRUB_PRIORITY
    OMV_BTRFS_SCRUB_READONLY

    I am looking at openmediavault and openmediavault.dpkg-dist in:


    Code
    /etc/default

    Update:


    Was looking in the wrong place. Found this:


    Code
    /etc/cron.weekly/openmediavault-scrub_btrfs

    In it, the following is mentioned:

    Code
    OMV_BTRFS_SCRUB_ENABLED=${OMV_BTRFS_SCRUB_ENABLED:-"yes"}
    OMV_BTRFS_SCRUB_PRIORITY=${OMV_BTRFS_SCRUB_PRIORITY:-"-c 2 -n 4"}
    OMV_BTRFS_SCRUB_READONLY=${OMV_BTRFS_SCRUB_READONLY:-"no"}

    I will attempt to change the ''yes'' into ''no''.
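    The ${VAR:-default} syntax in that script only applies the default when the variable is unset, so defining the variable in the environment the script runs in should flip the behaviour. A minimal sketch, assuming the cron script sources /etc/default/openmediavault like other OMV scripts do:


    Code
    # In /etc/default/openmediavault (assumption: the scrub script sources this file):
    OMV_BTRFS_SCRUB_ENABLED="no"

    # Quick demonstration of the ${VAR:-default} fallback in a shell:
    unset OMV_BTRFS_SCRUB_ENABLED
    echo "${OMV_BTRFS_SCRUB_ENABLED:-yes}"   # prints "yes" (default used)
    OMV_BTRFS_SCRUB_ENABLED="no"
    echo "${OMV_BTRFS_SCRUB_ENABLED:-yes}"   # prints "no" (the set value wins)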

    Hi,


    I have also been using the btrfsmaintenance script for many months without any problems. Now there are two things running on my NAS that perform maintenance. How will they interact with each other? Does one take precedence over the other? My scrub jobs run once a month, which is the recommended frequency according to various sources, and they run at ''idle'' priority rather than in ''normal'' mode, because that uses a lot of CPU and IO. How is this handled by OMV in the latest update? And what parameters are used for the weekly scrubs regarding dusage and musage?
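    For comparison, btrfsmaintenance reads these settings from its own config file (/etc/default/btrfsmaintenance on Debian-based systems). A sketch using the upstream variable names; exact names and defaults may differ per version:


    Code
    # /etc/default/btrfsmaintenance (sketch; names follow the upstream defaults)
    BTRFS_SCRUB_PERIOD="monthly"     # how often the scrub job runs
    BTRFS_SCRUB_PRIORITY="idle"      # "idle" keeps CPU/IO impact low, "normal" does not
    BTRFS_BALANCE_DUSAGE="5 10"      # data chunk usage filters passed to btrfs balance
    BTRFS_BALANCE_MUSAGE="5"         # metadata chunk usage filters
    BTRFS_SCRUB_MOUNTPOINTS="/srv"   # colon-separated list of mountpoints to scrub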


    I've also read in another thread on this forum that modifying /etc/default/openmediavault reverts every time OMV gets updated. That means changes made in that file are temporary and need to be reapplied over and over again so it doesn't conflict with the btrfsmaintenance script.


    My suggestion to the OMV team would be to undo the previous update regarding the scrub and balancing functions and use the btrfsmaintenance script as a base, making it part of the GUI. For example, under the System or Storage menu, create a submenu called Maintenance where you can configure defrag, balance, scrub and error scan, with optional parameters like idle, normal, dusage and musage, plus scheduling and email notifications. Perhaps also include the option to start a manual job. I believe TrueNAS has already incorporated such a method. Adding this to OMV would make it a more complete storage solution. Not having the option to configure maintenance to one's needs, or to tailor it to what is considered best practice, is a missed opportunity imho; maintenance of disks, and therefore of data, should be an integral part of the functionality of a NAS OS.


    Don't interpret this as an attack. I think OMV is a great NAS OS. :)


    Best regards.

    Hi,


    I have been playing around with said docker combinations. Have a look:



    I did encounter an unresolved issue which I am still investigating.

    Regarding the RAM+CPU hog question: on recent hardware that's not something to worry about. You can look at the attached pictures in my post. I'm running on a J4105 quad core with 16 GB dual channel.
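    If you want to measure it on your own hardware, docker stats prints a live snapshot of CPU and memory per container:


    Code
    # One-shot per-container resource usage (CPU %, memory, network, block I/O):
    docker stats --no-stream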

    Hello,

    Can someone help me with the following problem? I have deployed a prometheus + node exporter + cadvisor + grafana stack with Portainer to monitor a fresh OMV6 install and its running containers, using two YouTube tutorials:


    (Two embedded YouTube tutorials from www.youtube.com.)


    The problem is that the graph that is supposed to show memory usage and memory cached for the running containers displays 0 B. I have searched the forum and the internet but I'm still stuck. In the Grafana web GUI I have imported dashboards 14282 (image below) and 1860.



    I have added port 9100 to the ''node exporter'' section, since it was not accessible through the browser at first. Here is my own cooked-up docker compose:
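    (My actual file is in the attached image. For those who cannot see it, a stripped-down sketch of such a stack looks roughly like this; the image tags and volume paths here are generic assumptions, not my exact configuration:)


    Code
    # Sketch of a comparable stack (image names and paths are assumptions):
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      prometheus:
        image: prom/prometheus
        ports: ["9090:9090"]
        volumes: ["/etc/prometheus:/etc/prometheus"]
      node-exporter:
        image: prom/node-exporter
        ports: ["9100:9100"]
      cadvisor:
        image: gcr.io/cadvisor/cadvisor
        ports: ["8080:8080"]
        volumes:
          - /:/rootfs:ro
          - /var/run:/var/run:ro
          - /sys:/sys:ro
          - /var/lib/docker:/var/lib/docker:ro
      grafana:
        image: grafana/grafana
        ports: ["3000:3000"]
    EOF
    # Then deploy it as a stack in Portainer, or with docker-compose up -d.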



    Here is my /etc/prometheus/prometheus.yml file:
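    (Also attached as an image; the relevant part is the scrape configuration. A generic sketch of what it looks like, reusing the addresses from my examples below:)


    Code
    # Minimal scrape config sketch (target addresses assumed from my LAN setup):
    cat > prometheus.yml <<'EOF'
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ["192.168.1.100:9100"]   # node exporter
      - job_name: cadvisor
        static_configs:
          - targets: ["192.168.1.100:8080"]   # cAdvisor
    EOF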


    All containers are running and accessible through the browser. When I browse to node exporter on 192.168.1.100:9100 the following is listed (amongst many others):


    Code
    # HELP node_memory_Cached_bytes Memory information field Cached_bytes.
    # TYPE node_memory_Cached_bytes gauge
    node_memory_Cached_bytes 1.372033024e+10

    So all the memory data, including memory_cached, is being gathered, right?
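    One way to check whether the data also reaches Prometheus (and not just the exporter) is to query the Prometheus HTTP API directly; assuming Prometheus is published on port 9090:


    Code
    # Ask Prometheus itself for the node exporter metric:
    curl -s 'http://192.168.1.100:9090/api/v1/query?query=node_memory_Cached_bytes'
    # And for the cAdvisor metric that the container panels are built on:
    curl -s 'http://192.168.1.100:9090/api/v1/query?query=container_memory_usage_bytes'
    # An empty "result":[] on the second query points at the cAdvisor scrape job.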


    When I browse to cAdvisor on 192.168.1.100:8080 and look at the ''docker containers'' section, everything is listed, and when I select qbittorrent a new page appears with the following metrics:







    * On a side note: those who want to try this out but already have a container running on port 8080, like qBittorrent, should move that container to something else like 8081, since I was not able to change port 8080 in the above stack successfully; it resulted in cAdvisor not being accessible in the browser.
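    For example, with the linuxserver qBittorrent image (assuming its web UI listens on 8080 inside the container), only the host side of the port mapping needs to change:


    Code
    # Publish qBittorrent's web UI on host port 8081, leaving 8080 free for cAdvisor:
    docker run -d --name qbittorrent -p 8081:8080 linuxserver/qbittorrent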


    Thanks in advance.

    Code
    root@openmediavault:~# journalctl | grep 'cron.weekly'
    dec 08 08:48:39 openmediavault anacron[917]: Will run job `cron.weekly' in 10 min.
    dec 08 09:43:57 openmediavault anacron[917]: Job `cron.weekly' started
    dec 08 09:43:57 openmediavault anacron[12722]: Updated timestamp for job `cron.weekly' to 2021-12-08
    dec 08 09:44:05 openmediavault anacron[917]: Job `cron.weekly' terminated (mailing output)
    dec 08 09:44:05 openmediavault postfix/smtp[12696]: 4201047: replace: header Subject: Anacron job 'cron.weekly' on openmediavault: Subject: [openmediavault.localdomain] Anacron job 'cron.weekly' on openmediavault
    root@openmediavault:~#


    Mail output from yesterday:

    In a few hours the problem will repeat itself. My NAS, however, didn't power off last night because it was uploading files, so... we will see in a moment what happens.


    Update.

    No mail received today. I will make sure the NAS shuts down tonight and see what happens tomorrow.

    Code
    root@openmediavault:~# ls -la /var/spool/anacron
    totaal 12
    drwxr-xr-x 2 root root 100 sep 15 19:14 .
    drwxr-xr-x 7 root root 160 sep 15 19:14 ..
    -rw------- 1 root root   9 dec  8 09:43 cron.daily
    -rw------- 1 root root   9 nov 10 18:31 cron.monthly
    -rw------- 1 root root   9 dec  8 09:44 cron.weekly

    Scrub (monthly) hasn't run yet as explained earlier.

    Received the notification mail at 9:44; yesterday at 9:31. My server shuts down at midnight and starts at 9:00 via the autoshutdown plugin. Maybe anacron thinks it missed a run and tries it again?
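    That would match how anacron works: it stores the date of a job's last run in the small spool files listed above (hence the 9-byte size: YYYYMMDD plus a newline) and, after boot, runs any job whose period has elapsed. The stored date can be inspected directly:


    Code
    # Each spool file holds the date (YYYYMMDD) of the job's last run:
    cat /var/spool/anacron/cron.weekly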

    As I suspected:


    There is no entry under cron.monthly and cron.weekly. Didn't I follow all the required steps above? Shouldn't the following command have prevented this problem:


    Code
    ./btrfsmaintenance-refresh-cron.sh
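    To verify the refresh script actually installed the jobs, the cron directories can be checked afterwards (a sketch; the exact link names depend on the btrfsmaintenance version):


    Code
    # The refresh script (un)installs entries in /etc/cron.*; check that they exist:
    ls -la /etc/cron.weekly/ /etc/cron.monthly/ | grep -i btrfs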

    Greetings.


    Update: I ran the above refresh command again and now:


    Strange that it didn't take effect before. I clearly remember running that refresh command multiple times just to be sure...

    doscott


    I was checking up on the scrub jobs and found this:


    Code
    root@openmediavault:~# btrfs scrub status /dev/sdb
    scrub status for 3f95b8a7-a00d-4467-aa8d-21e7ea955134
            no stats available
            total bytes scrubbed: 0.00B with 0 errors

    This means that not a single scrub has taken place since installing the btrfsmaintenance script, right? Do you get similar output? What am I doing wrong?
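    For what it's worth, a scrub can also be started by hand to confirm the mechanism works, using the same ionice flags as the OMV defaults quoted earlier (replace the path with your actual mountpoint):


    Code
    # Start a scrub on the mounted filesystem (the path here is an example):
    btrfs scrub start -c 2 -n 4 /srv/dev-disk-by-label-data
    # Check progress/results against the same path:
    btrfs scrub status /srv/dev-disk-by-label-data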


    Thank you.

    Hi,


    Again I've run into some problems and kindly request your assistance in resolving them. The goal is to put the server into either S3 (suspend to RAM) or S4 (hibernate). Around midnight autoshutdown puts the server into S3. The following morning, when the wake alarm is triggered, the server spits out a load of error messages and the system is unusable:

    Code
    EXT4-fs error ext4_find_entry:1536: inode #..... comm nginx, comm cron, comm smartd, comm master, comm monit: reading directory lblock blablablabla

    When I try to put the server into standby via the web GUI and wake it up with a magic packet, this message appears:

    Code
    ACPI: Hardware changed while hibernated, success doubtful!

    The above is with Restore on AC power loss disabled and S3 on auto in the BIOS.
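    To take the autoshutdown plugin out of the equation, the suspend/wake cycle can be reproduced from a shell with rtcwake from util-linux; a short test, assuming S3 is the target:


    Code
    # Suspend to RAM (S3) and program the RTC to wake the machine after 5 minutes:
    rtcwake -m mem -s 300
    # For a hibernate (S4) test instead:
    # rtcwake -m disk -s 300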


    The setup:


    - Asrock J5040-ITX

    - Samsung Fit USB flash drive, 64 GB. Resized the ext4 partition to approximately half and left the rest unpartitioned for overprovisioning. Had to move the swap partition to the left a bit so that the unpartitioned space is on the right

    - S3 is set to auto in the BIOS

    - Restore on AC power loss is set to enabled

    - Deep sleep S5 is set to disabled in the BIOS

    - Swap is enabled; however, swappiness is set to 10 (default 60)

    - 16 GB DDR4 installed

    - Autoshutdown and flash plugins are installed. No error messages during install.

    - System is up to date.


    I have to cold reboot the server in order to get it to work again. I have tried the following:


    - Selecting S3 and S4 in the plugin

    - Setting Restore on AC power loss to enabled and to disabled

    - Scanning the flash drive for errors (inode errors), which are found after a bad wakeup

    - Removing the noatime and nodiratime entries in fstab / removing the flash plugin and rebooting; the following morning the same error happened



    I have searched the forum and could not find any solution. Any help is appreciated.


    Kind regards

    New problem:


    When I check the GUI dashboard, both dots are not green and I have to re-enable it manually. For some reason the autoshutdown plugin stops working too; I need to re-enable it the same way. :/

    Update.

    On my OMV6 VM it is working now. Next I tried it on my real NAS, running OMV5 from a flash drive with the flash plugin, and that is working now too. Green lights across the board.

    Very strange... I have no explanation for why it suddenly enables without errors... I haven't changed anything and am not doing anything new.