Posts by doscott

    A couple of suggestions:

    1 - Do a crontab -e from a root login to make sure you didn't include it there by accident.

    2 - Make sure that it is not included in the OMV scheduled tasks from the gui.


    I'm pretty sure that these are not the cause, but it doesn't cost anything to check.


    Other than that it would seem to be an anacron issue. Perhaps you can create a script that doesn't do much (maybe just touch a file), put it in the weekly folder, and see if it gets executed along with the balance (you should get an email).
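
    A minimal sketch of such a test script (run as root; the script name and the touched path are just placeholders I made up):

    Code
    cat > /etc/cron.weekly/test-anacron << 'EOF'
    #!/bin/sh
    # leave a breadcrumb so we can tell whether the weekly jobs actually ran
    touch /root/anacron-weekly-ran
    EOF
    chmod +x /etc/cron.weekly/test-anacron

    (Note that run-parts skips files with a dot in the name, so don't call it test.sh.)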

    anacron will never miss a run, but it is possible the run didn't complete before you shut down.


    Try the following:

    Following is the mailout from a successful balance:

    I'm not an expert on anacron, but with a bit of googling I've found the following. The command (from a root login):

    Code
    ls -la /var/spool/anacron

    should give the cron timestamps, something like:

    Code
    -rw------- 1 root root    9 Dec  8 08:23 cron.daily
    -rw------- 1 root root    9 Dec  8 08:23 cron.monthly
    -rw------- 1 root root    9 Dec  7 08:17 cron.weekly

    which shows the timestamps of when the jobs last ran. In the above, the weekly jobs should not run again until on/after Dec 15.
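
    If I remember right, those files just contain the date of the last run as YYYYMMDD (which is why they are 9 bytes), so you can also look at them directly:

    Code
    cat /var/spool/anacron/cron.weekly
    # prints something like 20211207, the date the weekly jobs last ran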


    I went through this thread again and everything you have shown looks correct. If your timestamps are not current I would assume it to be a permissions problem, but using root as the user I don't see how that would be the case.

    I bought my first IBM compatible, a portable with dual 5.25" floppy drives, in 1986. With MS-DOS it was a very reliable setup. It's been downhill ever since.


    Anyways, the sequence you listed should have worked, but maybe there was an error message you missed. Looks like it should work for you now.

    Code
    root@N5550:~# btrfs scrub status /dev/sdf
    scrub status for 3c116019-c3d4-46f4-856c-cd624761c77e
           scrub started at Mon Nov  8 07:54:50 2021 and finished after 06:48:19
           total bytes scrubbed: 2.38TiB with 0 errors

    The above is a complete listing, but the cron.monthly and cron.weekly are the applicable ones.

    Quote

    - 3) Then you would install it to another USB drive -> But I do not even reach the point where the installer lets me choose any target drive.

    I have personally experienced frustration with random errors when trying to install an OS from a USB drive, to the point where I tried a different USB drive and had success. The image had been written to the failing drive without errors and it passed the verification check, but it failed randomly when I tried to use it.


    If you have not already done so, it might be worthwhile putting the install image on a different drive.
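
    For what it's worth, this is roughly how I write an install image to a drive (a sketch only; the iso name and /dev/sdX are placeholders, and dd will happily destroy whatever drive you point it at, so check with lsblk first):

    Code
    lsblk
    dd if=openmediavault.iso of=/dev/sdX bs=4M status=progress conv=fsync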

    Code
    # >>> [openmediavault]
    /dev/disk/by-label/BTRFS1               /srv/dev-disk-by-label-BTRFS1   btrfs   defaults,nofail,compress=zstd       0 2

    This is from OMV5 /etc/fstab


    What if you add a similar line before the

    # >>> [openmediavault]


    marker, using your parameters? This will not solve your OMV6 issue, but it should get you a working setup. With btrfs multi-disk setups you only need to reference one drive of the set (and while I have always used the lowest letter, it is supposed to work with any of them).
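
    As a sketch only (BTRFS1 is the label from my OMV5 line above; substitute your own label or UUID and mount point):

    Code
    /dev/disk/by-label/BTRFS1    /srv/dev-disk-by-label-BTRFS1    btrfs   defaults,nofail,compress=zstd   0 2

    A mount -a (or a reboot) should then bring it up.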

    Quote

    So background scrub is enabled by default? But when initiated manually (or via cronjob) the -B option must be removed?

    The command line parameters can be found here:

    https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-scrub


    When I run the command manually, which is only after I do something with the btrfs settings, I don't use the -B option, and then I use the status command with watch.
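
    Something like this, as a sketch (the mount point is a placeholder for one of my arrays):

    Code
    btrfs scrub start -d /srv/dev-disk-by-label-BTRFS1
    watch -n 30 btrfs scrub status /srv/dev-disk-by-label-BTRFS1

    Without -B the scrub goes to the background, and watch just refreshes the status output until it finishes.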


    In any case, the referenced maintenance scripts insert the scrub into cron.monthly and the heart of it contains:

    Code
    run_task btrfs scrub start -Bd $ioprio $readonly "$MNT"

    -Bd keeps the task in the foreground of the cron job (matches what you have) and prints out the stats at the end, which is what you want. The $ioprio sets the -c and -n flags based on the configuration settings, and $readonly sets the readonly flag (which I don't see in the manual, so I have no idea if it does anything other than what its name implies). If you configure for "auto" then the command is run for each btrfs filesystem found (the $MNT parameter).


    The default configuration does a weekly balance and a monthly balance/scrub.


    I don't use defrag (and it's not enabled by default) or trim. You definitely don't need snapshotting.
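
    In case it helps, and assuming the maintenance scripts in question are the btrfsmaintenance package, the settings live in a plain config file. Treat the following as a sketch from memory rather than gospel:

    Code
    # /etc/default/btrfsmaintenance (openSUSE puts it under /etc/sysconfig)
    BTRFS_BALANCE_MOUNTPOINTS="auto"    # "auto" = every mounted btrfs filesystem
    BTRFS_BALANCE_PERIOD="weekly"
    BTRFS_SCRUB_MOUNTPOINTS="auto"
    BTRFS_SCRUB_PERIOD="monthly"
    BTRFS_DEFRAG_PERIOD="none"          # defrag and trim are off by default
    BTRFS_TRIM_PERIOD="none"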

    Quote

    No it won't. You just can't create a btrfs array from OMV's web interface. There may be a guide out there that tells you to use mdadm but it won't do that unless you explicitly tell it to.

    You may be correct. I never read it in a guide, but the first time I set raid up (OMV5), btrfs was an option presented for the file system, which I selected, and it installed without a problem. However what I got was an mdadm raid array with a btrfs file system.


    I was sober when I did it.

    All of my systems other than OMV use openSUSE Tumbleweed on btrfs. I used to use VirtualBox and now use KVM. However, I have never used btrfs on a virtual drive. I have read of there being issues with virtual btrfs “disks” on improper shutdown of the machines.


    That said, this is for testing so it looks good, except for your scrub command. Scrub normally runs in the background. Scrub status gives a point in time status. Since -B prevents running in the background there is not much point in getting the status after it has stopped. I would suggest using the scripts mentioned in the link I provided.


    I haven’t used portainer so maybe I am missing something and the two disks with btrfs are not virtual? In any case, scrubbing and balancing, as well as modifying most btrfs options, are done live; you do not have to unmount or stop using them.

    For problems this forum is an excellent source of help.


    The other file system that meets your requirements is zfs. It’s not as flexible as btrfs but those that use it swear by it.


    In my opinion, whatever system you choose, you will be best served by asking for help on the forum, and you will most likely have to use the command line for troubleshooting and repair.

    Setting up btrfs and maintenance utilities using the command line takes less than 30 minutes. After that you won't have to use the command line again unless you decide to add/remove/change a drive, or decide to change the raid type or compression. Everything else (shares, etc.) can be done through the OMV interface.
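
    To give an idea of the scale of that command-line work, creating a two-drive raid1 pool is essentially one command (a sketch; the label and device names are placeholders, and this wipes whatever is on those drives):

    Code
    mkfs.btrfs -L BTRFS1 -d raid1 -m raid1 /dev/sdb /dev/sdc

    After that it is just the fstab entry and the maintenance scripts mentioned earlier.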

    Quote

    issue 566. So, because some noobie does not know that in UNIX/Linux you CANNOT have spaces in file names of any kind, we have to deal with uuid. You guys keep talking about Ubuntu but OMV is based on Debian, which is a "father" to Ubuntu, why? Since someone wrote an app for symbolic links, now we have to "use right hand to scratch left ear behind the head". I thought that I am stubborn...

    OMV is for the "noobie". The goal, as I see it, is to provide a GUI for NAS management that does not require the user to know what it is operating on. IMO the UI should prevent the user from entering an invalid name or auto-correct it.


    By the way, in Linux you can have spaces in file names; they are just a little bit trickier to work with.
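
    For example, quoting or escaping is all it takes (a throwaway sketch):

    Code
    touch "my file.txt"     # quotes keep the space as part of the name
    ls -l my\ file.txt      # or escape the space with a backslash
    rm "my file.txt"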

    btrfs is actively developed (check the links on https://btrfs.wiki.kernel.org/index.php/Main_Page ).


    My desktop system is Tumbleweed. With 3 to 7 updates a week, I appreciate the ability to roll back for the odd breaking change. I really appreciate it when I install something and manage to break things beyond repair; a rollback is much quicker than a complete reinstall. For the base Debian system, while btrfs would provide some nice features, until Debian adopts it as a standard it would be best to stick with Debian's default.


    As a data system, btrfs has a lot of pluses:

    - can be used with non-raid data and raid metadata, providing the ability to detect bitrot

    - raid allows the automatic repair of bitrot

    - the file system can be converted between non-raid and any type of raid on the fly, quickly (see the sketch after this list)

    - setting up raid on large drives is mercifully quick.

    - mixed drive sizes are not a problem, even with raid.

    - drives can be added to or removed from raid (or non-raid) configurations on the fly.
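
    As a sketch of the convert and add/remove points (the mount point and device are placeholders; both commands run on a live, mounted filesystem):

    Code
    # convert an existing pool to raid1 for both data and metadata, on the fly
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /srv/dev-disk-by-label-BTRFS1
    # add another drive to the pool while it stays mounted
    btrfs device add /dev/sdd /srv/dev-disk-by-label-BTRFS1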


    The disadvantages (with OMV):

    - the command line is required to set up or convert


    The disadvantages (with raid):

    - raid5 / raid6 can have failure issues when there is a simultaneous power failure and drive failure (however this is not a btrfs exclusive)


    The raid issue can be resolved using a UPS or, as noted elsewhere in the forum, the metadata raid settings.
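
    If I understand that suggestion correctly, it amounts to keeping the data at raid5/6 while running the metadata at raid1 (or raid1c3), which btrfs allows; a sketch with a placeholder mount point:

    Code
    btrfs balance start -mconvert=raid1 /srv/dev-disk-by-label-BTRFS1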


    The command line btrfs configuration works well with OMV (or is it the other way around?). A plugin would make life simpler, allowing more people to use a rock solid versatile file system with the simplicity of the OMV interface.


    I encourage anyone thinking about btrfs, or fearing btrfs because of purported issues, to read through the wiki link above and check out some of the video presentations (particularly the Facebook one).

    My interpretation of the message is that you have something like: daily 1, weekly 1. The second level of backup, when executed, moves the last daily backup to the weekly backup, which would result in the removal of your only daily backup, and logically this is treated as an error.


    In your rsnapshot.conf you need something like:
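
    Something along these lines, as a sketch (rsnapshot wants tabs between the fields; the exact counts are up to you):

    Code
    retain  daily   7
    retain  weekly  4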



    with daily set to at least two (although 7 daily makes more sense to me).