Preferred disk setup with 2 new disks

  • Quote

So background scrub is enabled by default? But when initiated manually (or via a cron job) the -B option must be removed?

    The command line parameters can be found here:

    https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-scrub


When I run the command manually, which is only after I change something in the btrfs settings, I don't use the -B option, and then I watch progress with the status command.


    In any case, the referenced maintenance scripts insert the scrub into cron.monthly and the heart of it contains:

    Code
    run_task btrfs scrub start -Bd $ioprio $readonly "$MNT"

-Bd keeps the task running in the foreground of the cron job (matches what you have) and prints out the stats at the end, which is what you want. The $ioprio sets the -c and -n flags based on the configuration settings, and $readonly sets the read-only flag (which I don't see in the manual, so I have no idea whether it does anything other than what its name implies). If you configure "auto", then the command is run for each btrfs filesystem found (the $MNT parameter).


The default configuration does a weekly balance and a monthly balance/scrub.


    I don't use defrag (and it's not enabled by default) or trim. You definitely don't need snapshotting.
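The flag assembly in that run_task line can be sketched in shell. The mountpoint and ioprio values below are hypothetical, chosen only to show the shape of the final command; the real ones come from the btrfsmaintenance configuration:

```shell
# Hypothetical values -- the real ones are expanded from /etc/default/btrfsmaintenance
MNT="/srv/mybtrfs"        # one mounted btrfs filesystem
IOPRIO="-c 3 -n 7"        # idle I/O class (3) and priority (7), as the config might expand

# -B keeps scrub in the foreground so the cron job waits and can mail the stats;
# -d reports statistics per device
CMD="btrfs scrub start -Bd $IOPRIO $MNT"
echo "$CMD"
```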

  • Did this:


Code
root@omvvm:~# apt install git
root@omvvm:~# git clone https://github.com/kdave/btrfsmaintenance.git
root@omvvm:~# cd /root/btrfsmaintenance
root@omvvm:~/btrfsmaintenance# ./dist-install.sh
root@omvvm:~/btrfsmaintenance# nano /etc/default/btrfsmaintenance


    Changed:

    Code
    ## Path: System/File systems/btrfs
    ## Type: string
    ## Default: "/"
    #
    # Which mountpoints/filesystems to balance periodically. This may reclaim unused
    # portions of the filesystem and make the rest more compact.
    # (Colon separated paths)
    # The special word/mountpoint "auto" will evaluate all mounted btrfs
    # filesystems
    BTRFS_BALANCE_MOUNTPOINTS="auto"
    Code
    ## Path: System/File systems/btrfs
    ## Type: string
    ## Default: "/"
    #
    # Which mountpoints/filesystems to scrub periodically.
    # (Colon separated paths)
    # The special word/mountpoint "auto" will evaluate all mounted btrfs
    # filesystems
    BTRFS_SCRUB_MOUNTPOINTS="auto"
    Code
    ## Path: System/File systems/btrfs
    ## Description: Configuration for periodic fstrim - mountpoints
    ## Type: string
    ## Default: "/"
    #
    # Which mountpoints/filesystems to trim periodically.
    # (Colon separated paths)
    # The special word/mountpoint "auto" will evaluate all mounted btrfs
    # filesystems
    BTRFS_TRIM_MOUNTPOINTS="auto"


And finally:

    Code
    root@omvvm:~/btrfsmaintenance# ./btrfsmaintenance-refresh-cron.sh
    Refresh script btrfs-scrub.sh for monthly
    Refresh script btrfs-defrag.sh for none
    Refresh script btrfs-balance.sh for weekly
    Refresh script btrfs-trim.sh for none
    root@omvvm:~/btrfsmaintenance#
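The config comments above say the mountpoint variables take colon-separated paths. A minimal sketch of how an explicit list (instead of "auto") would be iterated; the paths here are made up for illustration:

```shell
# Hypothetical explicit list instead of "auto" (colon-separated, per the config comments)
BTRFS_SCRUB_MOUNTPOINTS="/:/srv/data:/srv/backup"

# Split on colons and visit each mountpoint
OLD_IFS=$IFS
IFS=':'
for mnt in $BTRFS_SCRUB_MOUNTPOINTS; do
  echo "would scrub: $mnt"
done
IFS=$OLD_IFS
```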
And the only remaining cron jobs should be (in the web GUI):


    1. /usr/bin/btrfs fi show

    2. /usr/bin/btrfs device stats /srv/dev-disk-by-id-ata-VBOX_HARDDISK_VBba0d6228-f189346f

3. btrfs check --force --readonly -p /dev/sdb (or should I leave this out?)

  • Perhaps replace 3 with:


    Code
    btrfs scrub status -d /dev/sdb

One question, though: on which day of the week and month does the btrfsmaintenance script run its tasks?
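For what it's worth, the refresh script only drops the tasks into /etc/cron.weekly and /etc/cron.monthly, so the exact timing depends on how those directories are fired. On a stock Debian-based install that is governed by anacron, whose /etc/anacrontab typically looks like this (check your own system, the delays may differ):

```
# /etc/anacrontab (typical Debian defaults)
# period   delay(min)  job-identifier  command
1          5           cron.daily      run-parts --report /etc/cron.daily
7          10          cron.weekly     run-parts --report /etc/cron.weekly
@monthly   15          cron.monthly    run-parts --report /etc/cron.monthly
```

So "weekly" means seven days after the last run, not a fixed weekday.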

So the NAS is up and running. Files are transferred and balanced afterwards, etc. I was looking at the SMART attributes and found some strange numbers for my brand-new IronWolf disks. Can someone shed some light on this? Especially the raw read error rate and seek error rate.


As you can see, the old WD Red has better values. Or am I reading it wrong?


    Update:

Never mind, found the answers. Seagate uses a different way of reporting these values. For the raw read error rate, try:

    Code
    smartctl -a -v 1,raw48:54 /dev/sda


And for the seek error rate:

    Code
    smartctl -a -v 7,raw48:54 /dev/sda
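The trick behind those -v flags: Seagate is commonly described as packing the actual error count into the upper 16 bits of the 48-bit raw value, with an operation count in the lower 32 bits, and raw48:54 tells smartctl to print only those upper bytes. The bit arithmetic, sketched with a made-up raw value:

```shell
# Hypothetical 48-bit raw value: upper 16 bits = error count, lower 32 bits = operation count
RAW=120000000                   # what smartctl shows without the -v override
ERRORS=$(( RAW >> 32 ))         # upper 16 bits: the actual error count
OPS=$(( RAW & 0xFFFFFFFF ))     # lower 32 bits: number of reads/seeks performed
echo "errors=$ERRORS operations=$OPS"
```

A huge-looking raw number with zero in the upper bits therefore means zero errors, which is why the new IronWolf values only look alarming.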
  • doscott


    I am checking up on scrub jobs and found this:


    Code
    root@openmediavault:~# btrfs scrub status /dev/sdb
    scrub status for 3f95b8a7-a00d-4467-aa8d-21e7ea955134
    no stats available
    total bytes scrubbed: 0.00B with 0 errors

This means that not a single scrub has taken place since installing the btrfsmaintenance scripts, right? Do you have similar output? What am I doing wrong?


    Thank you.
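One quick check before digging further: verify the maintenance tasks actually landed in the cron directories. The filenames are taken from the refresh output earlier in the thread; the paths assume the refresh script installs into /etc/cron.weekly and /etc/cron.monthly:

```shell
# Check that the btrfsmaintenance tasks were installed where anacron/cron will find them
for f in /etc/cron.weekly/btrfs-balance.sh /etc/cron.monthly/btrfs-scrub.sh; do
  if [ -e "$f" ]; then
    echo "installed: $f"
  else
    echo "MISSING:   $f"
  fi
done
```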

  • Code
    root@N5550:~# btrfs scrub status /dev/sdf
    scrub status for 3c116019-c3d4-46f4-856c-cd624761c77e
           scrub started at Mon Nov  8 07:54:50 2021 and finished after 06:48:19
           total bytes scrubbed: 2.38TiB with 0 errors

The above is a complete listing; the cron.monthly and cron.weekly jobs are the applicable ones.

  • As I suspected:


There is no entry under cron.monthly or cron.weekly. Didn't I follow all the required steps above? Shouldn't the following command have prevented this problem:


    Code
    ./btrfsmaintenance-refresh-cron.sh

    Greetings.


Update. I ran the above refresh command again and now:


Strange that it didn't take effect before. I remember clearly that I ran that refresh command multiple times to be sure...

I bought my first IBM compatible, a portable with dual 5.25" floppy drives, in 1986. With MS-DOS it was a very reliable setup. It's been downhill ever since.


Anyway, the sequence you listed should have worked, but maybe there was an error message you missed. It looks like it should work for you now.

Okay, so now I am getting daily(?) notifications by email:

    Code
    [openmediavault.localdomain] Anacron job 'cron.weekly' on openmediavault

So a balance is run every day, which is strange, since it is configured to run weekly.

  • I'm not an expert on anacron, but with a bit of googling I've found the following. The command (from a root login):

    Code
    ls -la /var/spool/anacron

    should give the cron timestamps, something like:

    Code
    -rw------- 1 root root    9 Dec  8 08:23 cron.daily
    -rw------- 1 root root    9 Dec  8 08:23 cron.monthly
    -rw------- 1 root root    9 Dec  7 08:17 cron.weekly

which shows the timestamps of when the jobs last ran. In the above, the weekly jobs should not run again until on/after Dec 14.


    I went through this thread again and everything you have shown looks correct. If your timestamps are not current I would assume it to be a permissions problem, but using root as the user I don't see how that would be the case.
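Those 9-byte files simply hold the last-run date as YYYYMMDD plus a newline. A sketch of the due-date arithmetic anacron effectively does, assuming GNU date and a made-up stamp date:

```shell
LAST_RUN="2021-12-01"    # hypothetical; the stamp file would store this as 20211201
PERIOD_DAYS=7            # the cron.weekly period from /etc/anacrontab

# The job becomes eligible again once PERIOD_DAYS have passed since the stamp
NEXT_DUE=$(date -d "$LAST_RUN + $PERIOD_DAYS days" +%Y-%m-%d)
echo "next weekly run due on or after: $NEXT_DUE"
```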

  • Code
    root@openmediavault:~# ls -la /var/spool/anacron
total 12
    drwxr-xr-x 2 root root 100 sep 15 19:14 .
    drwxr-xr-x 7 root root 160 sep 15 19:14 ..
    -rw------- 1 root root 9 dec 8 09:43 cron.daily
    -rw------- 1 root root 9 nov 10 18:31 cron.monthly
    -rw------- 1 root root 9 dec 8 09:44 cron.weekly

    Scrub (monthly) hasn't run yet as explained earlier.

Received the notification mail at 9:44, yesterday at 9:31. My server shuts down at midnight and starts at 9:00 via the autoshutdown plugin. Maybe anacron thinks it missed a run and tries again?

  • anacron will never miss a run, but it is possible the run didn't complete before you shut down.


    Try the following:

    Following is the mailout from a successful balance:

  • Code
    root@openmediavault:~# journalctl | grep 'cron.weekly'
    dec 08 08:48:39 openmediavault anacron[917]: Will run job `cron.weekly' in 10 min.
    dec 08 09:43:57 openmediavault anacron[917]: Job `cron.weekly' started
    dec 08 09:43:57 openmediavault anacron[12722]: Updated timestamp for job `cron.weekly' to 2021-12-08
    dec 08 09:44:05 openmediavault anacron[917]: Job `cron.weekly' terminated (mailing output)
    dec 08 09:44:05 openmediavault postfix/smtp[12696]: 4201047: replace: header Subject: Anacron job 'cron.weekly' on openmediavault: Subject: [openmediavault.localdomain] Anacron job 'cron.weekly' on openmediavault
    root@openmediavault:~#


    Mail output from yesterday:

In a few hours the problem will repeat itself. My NAS, however, didn't power off last night because it was uploading files, so... we'll see shortly what happens.


    Update.

No mail received today. Will ensure that the NAS shuts down tonight and see what happens tomorrow.
