How to check BTRFS scrubs

  • Hi


    I'm a new OMV user and was looking forward to using BTRFS with media scrubs and status output, however I'm puzzled as to how to use this feature.


    https://docs.openmediavault.or…esystems.html#filesystems says:


    "Shared Folders: Simple shared folder administration. Within this section is also possible to assign ACLs and/or privileges to the shared folders. Snapshots can be taken manually or via scheduled tasks for shared folders that are located on Btrfs file systems. Automatic scheduled tasks for Btrfs file systems to scrub them and check for errors, including notifications via email."


    But I see nothing in the UI that references it.


    Forum searches reveal similar questions - some answers mention that scrubs are enabled automatically and explain how to change environment variables if you want to change the schedule etc., but nothing about how to check scrub status and receive it via email (which is what I think I want).



    I found the CLI command 'btrfs scrub status', but the output is identical (save the UUID) for both BTRFS volumes:


    UUID: e209b9f6-c2ce-4833-ba4f-77d1cdd4ddce

    no stats available

    Total to scrub: 288.00KiB

    Rate: 0.00B/s

    Error summary: no errors found


    when I was expecting something like:


    scrub status for d5b21634-9671-47b5-8e9f-9c6388aa5b14

    scrub started at Fri Oct 15 09:15:00 2023 and finished after 00:20:45

    total bytes scrubbed: 500.00GiB with 0 errors



    Then I found another cmd but I'm mystified as to which commands it takes.


    omv-btrfs-scrub --help

    Illegal option --

    Usage:

    omv-btrfs-scrub [options] <command>


    Perform a scrub on all mounted Btrfs file systems.


    OPTIONS:

    -e Send reports per device via email.

    -h Show this message.


    I chose OMV for an easy to admin UI-based NAS so I'm leery of using potentially data-impacting commands where I don't know what they're doing! ;)

  • Anyone ?


    Call me old fashioned, but I'm extremely wary of trusting my data with data protection features which sound great, yet do not appear to be exposed in the UI or documentation i.e. opaque to the extent that I don't know how the functionality is meant to work, nor if it is even occurring with no apparent way of checking. I really want it to be present and adding value but I just have no way of verifying it.


    I tried reading https://btrfs.readthedocs.io/en/latest/Scrub.html but TBH my head started to spin after the fourth para and almost detached completely on the sixth!


    my head started to spin after the fourth para and almost detached completely on the sixth!

    I followed the same path to test my brain's resilience, and when I reached the fourth paragraph, the same thing happened to me as to you: my head started spinning. So I decided to stop before reaching the sixth to prevent it from unraveling.


    I just suggest you trust the systems implemented by the OMV developer. I assure you, he rarely makes missteps and doesn't do anything without thinking. If OMV has implemented this cleaning system in BTRFS, it's because it's useful. OMV's philosophy is to offer an easy-to-use GUI for standard users, while also offering customization possibilities for advanced users. Sometimes it's not worth looking too closely under the hood; others have already done that for you.

  • Tx for the reply and glad you kept your head!


    Much as I'd really like to, I'm just too long in the tooth to "trust the systems", especially if neither the docs nor the UI contain any concrete info. I think one could extend the 'vault' aka 'hidden away' analogy a little too far - even bank vaults are open for scrutiny sometimes.

    One such example is to check whether scrubs will wake up sleeping drives.

    It also concerns me rather that similar queries have been raised previously without any definitive answers. BTRFS availability is one of the features that led me to OMV so I admit to feeling rather disappointed.


    Ideally the UI could have a traffic light style indicator of filesystem health/scrub status and a ref to the scrub task schedule, an optional email notification of "no errors detected" results and some logs that could be inspected for more detail if required. But of course this is just a guess at what might be possible.


    I don't think I'm the right person to answer these kinds of questions. There are people on this forum who know a lot about BTRFS. I don't even use it myself. I switched to ZFS when BTRFS wasn't officially supported in OMV yet, and I'm still using it.

    Anyway, just by reading the forum posts, I think I have a good idea of how it works. As far as I understand, the BTRFS scrub system is implemented by OMV following the recommendations of the BTRFS developers, and notifications are integrated into the system so you receive regular emails. If you want to customize any aspect of the scrubbing, there are ways to do so. You can search the forum and you'll find some threads with instructions on how to do this. All of that, in my opinion, is sufficient for most users.

    But perhaps you're right; perhaps the OMV GUI could be improved in this regard, so I'd open a thread on GitHub to present those suggestions. https://github.com/openmediavault/openmediavault/issues

  • Not sure if this answers, but I wrote my own scripts to handle these BTRFS tasks:


    BTRFS Balance:


    Code
    # take a before-balance snapshot of space usage
    btrfs filesystem df /mount_point/folder

    # balance only data block groups with usage under 25%, at most one block group
    btrfs balance start -dusage=25,limit=1 /mount_point/folder

    # take an after-balance snapshot of space usage
    btrfs filesystem df /mount_point/folder

    -dusage=<percent> (or usage=<range>) balances only block groups with usage under the given percentage. Use -m for metadata block groups (which also implicitly applies to -s) and -s for system block groups.


    Link to btrfs balance command:

    btrfs-balance(8) — BTRFS documentation


    Hope this helps!


  • Tx for the info.


    I think my issue is that having cut my teeth on SCO Unix way back in the day (I did say I was old fashioned), before I went to Windows and then appliances, I really don't want to go back >35y to /dev/this and /dev/that. Plus I'm leery of learning more cmdline, only to hose myself when I misplace a character.

    Thus I was sold on OMV for this https://wiki.omv-extras.org/doku.php?id=omv7:new_user_guide


    "One of the ambitions of the openmediavault project is to make advanced NAS technologies and features available to inexperienced users in an easy to use WEB GUI, thereby making it possible for people, without extensive knowledge of Linux, to gain easy access to advanced technologies."


    I find it a bit surprising that it seems that one has to delve under the hood to check that (IMO) core functionality of data verification is working.

    I'll post a request on Github.

    • New
    • Official Post

    If you're looking for something similar to Windows, you won't find it in OMV. OMV provides a GUI where you can manage the system, and what that quote says is true. But, likewise, you'll occasionally need to type a command in the CLI. If you have a problem and ask a question on the forum, it's very likely that the answer will ask for the output of a command. So, to use OMV, you need a minimum knowledge of the CLI. You don't need to be an expert. But you should at least be able to type a command if necessary.

    Keep in mind that OpenMediaVault isn't commercial software with a team of well-paid developers behind it. On the contrary, it's free software developed by a few people in their spare time. So, you can't expect fancy stuff. If you're looking for a NAS operating system where you just need to press buttons, you might be interested in a commercial system like QNAP or Synology, something closer to Windows. With their advantages and disadvantages (many disadvantages, in my opinion).

    As you'll understand, I'm biased, and I defend OMV above any other system. The freedom that OMV gives me to do what I want on the hardware I want was never offered to me by QNAP or Synology in the 10 years I used them. I wouldn't trade OMV for anything. But if it's not what you're looking for, don't worry; there are other alternatives on the market.

    If you decide to stick with OMV, you should try to overcome that initial learning curve to feel a little more comfortable. Install Putty and access the command line. Experiment a little.

    You only need to dig under the hood if you want to know how things work. But if your NAS use is going to be basic, you don't need to know that.

    Good luck with your decision, and, as you know, if you have questions or need help, we're here to help you (free of charge). You won't find that with either QNAP or Synology.

  • As a BTRFS scrub is designed to read all the data of a BTRFS filesystem, it must wake up sleeping drives.


    I wonder why you chose to use BTRFS if this copy-on-write filesystem is new to you?



    OMV does a pretty good job with BTRFS: it lets you create a BTRFS filesystem that spans one or more drives. Any new "shared folder" created on BTRFS is created as a new subvolume. You can "snapshot" these shared folders on demand, or by schedule. You can create SMB/CIFS shares of these folders that automatically use "shadow copies" to make "previous versions" visible in Windows. All via an easy-to-use WebUI.


    One weakness of BTRFS is that it has no "daemon" to monitor the health of the filesystem like, for example, MD RAID (a.k.a. Linux software RAID) or ZFS have. All BTRFS has are "device statistics", which keep a count of device IO errors, and the BTRFS scrub tool to repair errors when possible. This is a limitation in the nature of BTRFS, not OMV. If you are using BTRFS with a redundant profile, then BTRFS repairs data inconsistencies per read/write IO on access. For the data that's on disk, but not accessed, you need a regular BTRFS scrub for this data integrity check to be made.


    To fill the gap left by the lack of a BTRFS monitoring daemon, OMV runs a daily scheduled task (cron job) to check the BTRFS device stats, which can be linked to OMV email notifications, so you get alerted to potential problems. It also runs a monthly scheduled scrub; again, this is linked to email notifications.
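
    To make that concrete, here's a sketch (an illustration only, not OMV's actual cron script) of what such a device-stats check boils down to. The counters below are sample 'btrfs device stats' output pasted into a variable; a real check would capture the live command output instead:

    ```shell
    # Sample counters as printed by 'btrfs device stats <mountpoint>' (pasted text,
    # so this runs anywhere; a real check would read the live command output).
    stats='[/dev/sdb].write_io_errs 0
    [/dev/sdb].read_io_errs 0
    [/dev/sdb].flush_io_errs 0
    [/dev/sdb].corruption_errs 0
    [/dev/sdb].generation_errs 0'

    # Sum every counter; any non-zero total means the device has logged errors.
    total=$(printf '%s\n' "$stats" | awk '{sum += $2} END {print sum}')

    if [ "$total" -ne 0 ]; then
        echo "BTRFS device errors detected"   # where a real check would trigger an email
    else
        echo "no device errors"
    fi
    ```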


    The BTRFS device stats are visible in the WEBUI under filesystem details anytime you care to look. No need to delve under the hood.

  • I scheduled omv-btrfs-scrub to run every 22 days.

  • Tx for the info. I'm used to using the CLI - and that is precisely why I don't want to any more! ;)

    My use case is pretty typical, FTP server plus multiple CIFS/SMB shares on multiple drives, which should sleep when not being used for backups, but with data integrity paramount. For my use case, I find that drives provide better value when used for multiple point-in-time backups rather than mirrors (which increase system complexity).

    Windows fails in terms of system and drive power usage and poor filesystem support, but it wins on enabling apps to provide rich levels of info, e.g. Hard Disk Sentinel has sophisticated displays of drive issues and comprehensive drive-stats monitoring, and FileZilla Server has a live FTP session display.

  • "One of the ambitions of the openmediavault project is to make advanced NAS technologies and features available to inexperienced users in an easy to use WEB GUI, thereby making it possible for people, without extensive knowledge of Linux, to gain easy access to advanced technologies."

    I wrote that particular passage. Note that there is a huge difference between "gaining easy access to advanced technologies" and "understanding advanced technologies". Many people can drive [access] a car. Very few people understand how they actually work and fewer still can "get under the hood".


    Anyone ?


    Call me old fashioned, but I'm extremely wary of trusting my data with data protection features which sound great, yet do not appear to be exposed in the UI or documentation i.e. opaque to the extent that I don't know how the functionality is meant to work, nor if it is even occurring with no apparent way of checking. I really want it to be present and adding value but I just have no way of verifying it.


    I tried reading https://btrfs.readthedocs.io/en/latest/Scrub.html but TBH my head started to spin after the fourth para and almost detached completely on the sixth!

    I followed the link. That was a poor way to describe a scrub. I suspect that it was written in a foreign language and translated.

    TBH, while it has matured and improved, I'm not a fan of BTRFS. The development path is just not panning out. Known BTRFS issues, that have been ongoing for years, are not being resolved in any time frame that could be considered "reasonable". Do I trust BTRFS? No. I wouldn't use it as a file system on my primary server. I am, however, using a BTRFS mirror on a backup.
    ____________________________________________________________________________________________


    I can tell you, in simple terms, how a ZFS scrub works and how it auto corrects. Sometime back (many years ago), I tested it.


    - ZFS assigns a checksum to all files. Each checksum is unique. If file content is changed, the checksum is changed to reflect the new content.

    - In the case of a mirror (RAID 1 equivalent) there are two copies of the same file with 2 identical checksums for the two identical files.

    - In a scrub, and in the event that a file does not match its checksum, ZFS will look at the 2nd file. If the second file matches its checksum, ZFS overwrites the first file (the one that doesn't match its checksum) using the file with the good checksum. In scrub results, that's reported as a "corrected error".

    - Again, assuming a mirror, in a scrub and in the event that BOTH files do not match their checksums, no corrective action is taken and it's reported as an "uncorrected error".
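
    That corrected/uncorrected logic can be sketched with ordinary files and sha256 checksums standing in for the mirror copies (a toy illustration only - none of this is real ZFS code, and real ZFS checksums blocks inside the pool rather than loose files):

    ```shell
    # Two "mirrored" copies of a file, plus a checksum stored while both were good.
    tmp=$(mktemp -d)
    printf 'hello world\n' > "$tmp/copy1"
    cp "$tmp/copy1" "$tmp/copy2"
    stored=$(sha256sum < "$tmp/copy1" | cut -d' ' -f1)

    # Inject a "bit flip" into copy1, as a failing disk might.
    printf 'hellp world\n' > "$tmp/copy1"

    sum1=$(sha256sum < "$tmp/copy1" | cut -d' ' -f1)
    sum2=$(sha256sum < "$tmp/copy2" | cut -d' ' -f1)

    if [ "$sum1" != "$stored" ] && [ "$sum2" = "$stored" ]; then
        cp "$tmp/copy2" "$tmp/copy1"     # overwrite the bad copy from the good one
        echo "corrected error"
    elif [ "$sum1" != "$stored" ] && [ "$sum2" != "$stored" ]; then
        echo "uncorrected error"         # both copies bad: nothing to repair from
    fi
    ```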

    As far as I could tell from testing, ZFS data integrity is at the file level. I used a sector editor, on another PC, to open a ZFS drive and "flip bits" (I changed hex values) in known text files with patterns that were relatively easy to find. These injected errors were detected and corrected. When it comes to ZFS' metadata and housekeeping, I don't know. I flipped bits outside of known files, in what "appeared" to be data, and those changes were not detected. That's certainly not conclusive because, for all I know, those areas could have been deleted files or other drive locations that were not monitored.


    Hope this helps.

  • As a BTRFS scrub is designed to read all the data of a BTRFS filesystem it must wake up sleeping drives.


    I wonder why you choose to use BTRFS if this copy on write filesystem is new to you?

    ...

    The BTRFS device stats are visible in the WEBUI under filesystem details anytime you care to look. No need to delve under the hood.

    I chose to try out BTRFS as it appears to be better on resource usage and less restrictive on pool layouts. As an aside, I was using a ROW (redirect-on-write) filesystem, i.e. WAFL, before COW (ZFS or BTRFS).


    I checked for the stats (and so that I could check the impact on sleeping drives). Under filesystem details I see things like:


    Label: none uuid: 55301681-44a3-40ab-a5ed-bc05a89fd2b0

    Total devices 1 FS bytes used 238.98GiB

    devid 1 size 596.17GiB used 596.17GiB path /dev/sdb


    Data, single: total=592.16GiB, used=238.63GiB

    System, DUP: total=8.00MiB, used=80.00KiB

    Metadata, DUP: total=2.00GiB, used=356.84MiB

    GlobalReserve, single: total=512.00MiB, used=0.00B


    [/dev/sdb].write_io_errs 0

    [/dev/sdb].read_io_errs 0

    [/dev/sdb].flush_io_errs 0

    [/dev/sdb].corruption_errs 0

    [/dev/sdb].generation_errs 0


    Which doesn't say anything about when scrubs are scheduled, when they were last run, what the previous results were, or any notifications of scrub results. There appears to be no log file. So it remains a bit of a mystery to me.

  • I wrote that particular passage.

    Ha, in fact I found that guide to be *really* well written and it helped pique my interest in OMV! :)

    The rest of your post is welcome food for thought, especially as I'd seen somewhere that BTRFS may be used as a default FS in future and so needs some careful consideration - tx!

  • All BTRFS scrubs are logged in the systemd journal, whether instigated by the built-in OMV scheduled task, your own scheduled task, or ad-hoc use of "btrfs scrub start .." on the CLI. In the case of the OMV (or your own) scheduled task, which calls the built-in OMV program "/usr/sbin/omv-btrfs-scrub" with the "-e" switch, an email is generated, which you'll receive if and when you configure OMV notifications for "file system" events.


    The logs appear in the OMV WebUI under "Diagnostics | System Logs | logs", but it's worth learning how to use the "systemd journal" at the CLI for easy searching of logs.


    So, for example, kick off a scrub with email notice at the CLI as a test:
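
    Based on the omv-btrfs-scrub usage quoted earlier in the thread, that test run would be something like this (note it scrubs all mounted Btrfs filesystems, so run it when you're happy for that to happen):

    Code
    omv-btrfs-scrub -e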


    Then check the email contents sent:



    Then check the "systemd journal" ( filtered both by keyword and time):


    Code
    root@omv7vm:/# journalctl -g scrub --since "09:45:00" --until "now"
    Apr 20 09:46:48 omv7vm omv-btrfs-scrub[25801]: Performing a scrub on all mounted Btrfs file systems.
    Apr 20 09:46:48 omv7vm omv-btrfs-scrub[25806]: Scrubbing the file system mounted at /srv/dev-disk-by-uuid-0371eae9-24da-4de7-87d6-32cfd5f9f96b [UUID=0371eae9-24da-4de7-87d6-32cfd5f9f96b] ...
    Apr 20 09:46:48 omv7vm kernel: BTRFS info (device sdd): scrub: started on devid 1
    Apr 20 09:46:48 omv7vm kernel: BTRFS info (device sdd): scrub: started on devid 2
    Apr 20 09:46:50 omv7vm kernel: BTRFS info (device sdd): scrub: finished on devid 2 with status: 0
    Apr 20 09:47:00 omv7vm kernel: BTRFS info (device sdd): scrub: finished on devid 1 with status: 0
    Apr 20 09:47:00 omv7vm omv-btrfs-scrub[25856]: Scrubbing the Btrfs file system mounted at /srv/dev-disk-by-uuid-0371eae9-24da-4de7-87d6-32cfd5f9f96b [UUID=0371eae9-24da-4de7-87d6-32cfd5f9f96b] has been finished.
    root@omv7vm:/# 


    Start an ad-hoc BTRFS scrub at the CLI, there's no email, but it will still be logged:
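
    For example, using the same mount point that appears in the journal output above:

    Code
    btrfs scrub start /srv/dev-disk-by-uuid-0371eae9-24da-4de7-87d6-32cfd5f9f96b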


  • I'm using BTRFS as my main file system for over a year now.

    I can tell you that regular (monthly) scrubs are implemented by OMV automatically; you don't need to do anything about it. You can change that interval to weekly by changing an environment variable.

    If you have email notifications activated, you get an email after each scrub telling you the result of such a scrub.

    It looks somewhat like this:


    Code
    Scrub device /dev/... (id 1) done
    Scrub started:    Thu Apr 10 07:46:12 2025
    Status:           finished
    Duration:         15:53:37
    Total to scrub:   9.80TiB
    Rate:             172.70MiB/s
    Error summary:    no errors found
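
    If you ever want to pull just the verdict out of such a report in a script of your own, here's a quick sketch (the report text below is the sample above, pasted into a variable; in practice you'd feed in the real email body):

    ```shell
    # The sample scrub report from above, as a shell variable.
    report='Scrub device /dev/... (id 1) done
    Scrub started:    Thu Apr 10 07:46:12 2025
    Status:           finished
    Duration:         15:53:37
    Total to scrub:   9.80TiB
    Rate:             172.70MiB/s
    Error summary:    no errors found'

    # Split on "colon plus whitespace" and print only the Error summary value.
    printf '%s\n' "$report" | awk -F': +' '/Error summary/ {print $2}'
    ```

    which prints `no errors found` for the sample above.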


  • I'm using BTRFS as my main file system for over a year now.

    I can tell you that regular (monthly) scrubs are implemented by OMV automatically; you don't need to do anything about it. You can change that interval to weekly by changing an environment variable.

    If you have email notifications activated, you get an email after each scrub telling you the result of such a scrub.

    It looks somewhat like this:


    Code
    Scrub device /dev/... (id 1) done
    Scrub started:    Thu Apr 10 07:46:12 2025
    Status:           finished
    Duration:         15:53:37
    Total to scrub:   9.80TiB
    Rate:             172.70MiB/s
    Error summary:    no errors found

    I've already told the OP all this an hour ago. Did I waste my time?

  • I've already told the OP all this an hour ago. Did I waste my time?

    No, I think they just wanted to be certain that I'd heard properly. ;)


    But I did the first time, and tx for the comprehensive info. Unfortunately I had no luck. The first omv-btrfs-scrub did nothing for 30 mins and the next, with '-e', has been going for an hour or so. Also, I've never received any scrub-related emails (all Notification tickboxes are enabled) and the system has been going since 30th March.


    I did get a rather impenetrable email the other day "Monitoring alert -- Resource limit matched filesystem_srv_dev-disk-by-uuid-46d0eb72-8d65-4700-8ebe-f5ff4c8a9547" so something relating to filesystem notifications is working.


    Here's what I tried:



    I couldn't see any systemd log under "Diagnostics | System Logs | logs".

    Syslog seems to be swamped with ProFTP-related entries save for a single btrfs scrub one at system start.

    2025-03-30T00:49:53+0000 openmediavault2 openmediavault-check_btrfs_errors[3811]: Performing an error check on Btrfs file systems.


    So, collating the answers to my questions about BTRFS scrubs i.e.

    Is it enabled? When is the schedule? Can it be changed?

    How long did it take? What were the previous results? Can I have all or just error reports emailed?


    were variously (I've paraphrased):


    - just trust it - it's automatic (i.e. don't bother your pretty little head about it)

    - wrote my own script

    - use cmd btrfs scrub status

    - use cmd omv-btrfs-scrub

    - use cmd btrfs balance

    - it's under File system details (I couldn't find it)

    - it's under "Diagnostics | System Logs | logs" (I couldn't find it, so I'll have to try the cmd line at some point)

    - I wouldn't use BTRFS (LOL)


    Thanks again to all who took the time to reply, but for now I'm not feeling confident enough to trust my data to it so I'm going to see if the ZFS implementation is any more transparent.


    I'd seen somewhere that BTRFS may be used as a default FS in future, and so needs some careful consideration

    I have doubts about BTRFS becoming the default FS for OMV. There was talk about that going back to ver6. I believe such a change would send a substantial portion of the user base elsewhere. EXT4 and other filesystems are proven and work too well to risk standardizing on an FS with known long-standing issues (some of them glaring).

    If BTRFS Dev's get serious, maybe. On the other hand, given past performance, I don't think that will be anytime soon. (As in anytime shorter than the next 5 years.)

    Thanks again to all who took the time to reply, but for now I'm not feeling confident enough to trust my data to it so I'm going to see if the ZFS implementation is any more transparent.

    I've been using ZFS for several years and have never been disappointed.

  • mr-brunes You seem to be unduly hung up on the topic of BTRFS scrubs, which, as Quacksalber pointed out above, is pretty straightforward. My previous post was to show how all BTRFS scrubs are logged and where to find and view those logs. Viewing logs via the WebUI is rather basic, hence using the CLI gives you all the features of the "systemd journal" via the "journalctl" command.


    Whoever mentioned "btrfs balance" w.r.t scrubs was off track.


    AFAIU, you are not combining the data drives you've formatted in BTRFS using a RAID1 profile, and you want to spin down disks which are not in use. So you lose all the data on a given disk if/when it fails. This is somewhat at odds with your goal of "keeping data safe", as opposed to using a RAID1 that keeps all data "safe" when a single disk is lost.


    Irrespective of how it's supported within OMV, ZFS is totally unsuited to working in this way. In ZFS, disks are combined to form a "pool" where individual disks need to be online all the time. Lose a disk in a non-redundant ZFS pool and you can kiss your data goodbye. Redundancy in ZFS comes from combining disks using mirrors or raidz.
