Posts by Sean

    I added fsarchiver to the plugin three years ago and the verbose flag has always been there. And it only uses one 'v' which is the least amount of verbosity. You are the first to ask for less info.

    OK then, two's company... ;)


    I also don't need to know every single file that has been backed up every week. Like "Trottel", I would prefer to just get a note on whether everything went well or whether there were problems (so I don't want to disable the email notification completely). But I keep an archive of my mails, so I would prefer shorter mails.


    Would it be possible to move the "v" option into the backup -> settings tab? It could still be default, I would just like to be able to switch it off.

    You guessed right, it was an initial run (after OMV reinstallation). Thanks for this! I run it manually, about once per month, so it shouldn't be a permanent problem.


    However, I'm still wondering why the ramdisk is still full after syncing, and what I can do to ensure that the ramdisk is synced (and purged) more frequently and regularly. Any hints?

    Here's the content of /var/log:

    syslog and daemon.log contain a lot of entries from backup jobs (USBbackup apparently enumerates every single file).
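    To see which logs are actually eating the ramdisk, something like this (run as root if needed) lists the biggest entries under /var/log — a generic sketch, not specific to the folder2ram plugin:

```shell
# Show the ten largest files/directories under /var/log, biggest first.
du -ah /var/log 2>/dev/null | sort -rh | head -n 10
```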

    I have the same problem, but even after the sync, I still get:


    Code
    Filesystem                                 Size  Used Avail Use% Mounted on
    folder2ram                                 8.3G  8.3G     0 100% /var/log

    When I click "sync all", it claims to sync /var/log:

    Code
    will now sync all mountpoints
    sync of /var/log successful!
    sync of /var/tmp successful!
    sync of /var/lib/openmediavault/rrd successful!
    sync of /var/spool successful!
    sync of /var/lib/rrdcached successful!
    sync of /var/lib/monit successful!
    sync of /var/cache/samba successful!

    Why is the ram disk still full?


    And anyway, regularly syncing manually is not a real solution IMO... Isn't there a way to make sure that the ram disk is synced regularly?
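    If I understand folder2ram correctly (a hedged guess on my part, not from its documentation): "sync" copies the RAM copy of /var/log back to the backing store on disk, but it does not shrink the files in RAM, so the mount stays at 100% until the logs themselves are rotated or truncated. A small demo in a throwaway temp dir (all names made up) shows the difference:

```shell
# Demo in a temp dir: copying a log elsewhere (roughly what a sync does)
# frees nothing; only truncating/rotating the log itself releases space.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/daemon.log" bs=1M count=8 status=none  # a "full" 8 MB log
cp "$tmp/daemon.log" "$tmp/daemon.log.synced"  # the "sync": original is still 8 MB
truncate -s 0 "$tmp/daemon.log"                # the "rotation": now the space is free
stat -c '%n %s' "$tmp"/daemon.log*
```

    So the lasting fix is probably tighter logrotate settings (smaller size limits, more frequent rotation) rather than more frequent syncs.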

    Not sure what you mean. scrub frequency only applies to the diff script.

    That's precisely what wasn't clear to me. On the settings page, it says "scrub frequency - 7 - units in days". There is no indication that this is only relevant if the diff script on the other page is activated. And I still don't understand how the "7 days" scrub frequency interacts with the scheduling settings from the diff page.


    I would find it more intuitive if all settings related to the scheduled diff were on that page. (Just a suggestion)

    That should be a random uuid.

    Maybe you forgot to call srand()? ;)

    Quote

    How are you going to "disable" them?

    I figured changing the scrub percentage from "100" to "0" would work.

    Quote

    They are used in the diff script AND manual scrub commands in the plugin.

    Ah, I didn't know that. In that case, it's a bit confusing that the scrub frequency is on the settings page and the date/time is in the diff sub-page...

    Quote

    Manual runs will never send emails but you do have to have notifications setup for the script if/when it gets fixed.

    Good to know.

    OK, it's been a while, but now I have installed the new system with two Snapraid arrays. Manual snapraid sync and check jobs work fine so far (without rebuilding the cache; the errors really seem to stem from the old system).


    BTW, in the new OMV installation, the config file for the first array is called omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf again, so it was no accident that it was the same for both of us.


    Now I have three remaining questions:


    - If I were to run scheduled tasks using the config files for the different arrays, as in

    snapraid -c /etc/snapraid/omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf sync

    this should work, right?
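    For the scheduled-task question above: if snapraid's -c/--conf option works for one array, a scheduled task could simply loop over all of the plugin's config files. This is only a sketch — the glob assumes the configs live in /etc/snapraid/ as in the path quoted above, and the echo keeps it a dry run that just prints the commands (remove it to actually execute them):

```shell
# Run "snapraid sync" once per array config file (dry run: echo only).
for conf in /etc/snapraid/omv-snapraid-*.conf; do
    echo snapraid -c "$conf" sync
done
```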


    - About the "scrub" configuration in the "Snapraid settings": Should I disable them also (as with the scheduled diff)?


    - The "send mail" checkbox in the settings is checked, but I haven't received any mails after the manual syncs / checks. Do I have to do something else? I have setup my mail account in "System Notification"

    Maybe I figured something out. My network controller's id is "enp3s0", but in /etc/network/interfaces it was "enp2s0" (probably because there was only one SATA adapter during installation).


    I changed /etc/network/interfaces, and at first glance, it seems to work.


    Do I have to worry that the file will be overwritten the next time I change the configuration and press "apply"?
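    Regarding the risk of the file being overwritten: OMV does manage the network configuration itself, so a hand edit to /etc/network/interfaces may well be regenerated the next time settings are applied in the UI. One way (an assumption on my part, not OMV-specific advice) to make the interface name itself independent of PCI slot order is a systemd .link file that matches the NIC's MAC address; the path, MAC, and name below are illustrative:

```
# /etc/systemd/network/10-persistent-net.link  (illustrative path/name)
[Match]
# Replace with your NIC's MAC address (see "ip link")
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
# A fixed name that PCI re-enumeration cannot change
Name=lan0
```

    After a reboot, the network configuration would then reference lan0 instead of the shifting enp* names.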

    OK, I have received the parts now, and just out of curiosity, I tried to install the standard OMV6 distribution, and it went smoothly. The components are:


    - ASRock N100M motherboard

    - 16GB Crucial RAM

    - 500GB Samsung M2 SSD for the OS

    - Syba SI-PEX40064 4 Port SATA III Controller

    - 5x Western Digital Ultrastar DC HC550 18TB

    - 400W be quiet Power Supply

    My troubles began when I added the second Syba controller and the 5 HGST 8TB disks from the old system: The disks were detected, but the network became "unreachable". I suspected a driver issue with the kernel, so I installed the backports kernel; I now have "6.1.0-0.deb11.7-amd64", which is the newest one available for bullseye (if I'm not mistaken). The problem persisted.


    I played around a bit with the configuration, and even suspected a power issue, because the system worked when both controllers were connected but some of the disks were not connected to the PSU. (But that wasn't it: when I connected all drives to the PSU but removed one of the controllers, it also worked.) It also works if both controllers are inserted but some SATA ports are not connected.


    I have a suspicion that one of the two controllers might be faulty, but I haven't been able to make sure of it, and before I buy a new one (or switch to an 8x LSI controller as Aaron suggested) I want to make sure that there is no other issue I'm not seeing yet.


    In any case, losing the network after connecting an HDD seems weird to me.

    Hmm, I started a "snapraid check" job (with the config file you mentioned), and it crashed the system after a few hours.


    Then I tried a "snapraid sync" (also with the new config file), and it crashed during/after reading the ".content" file.


    In both cases, the whole system became unresponsive (while still permanently accessing disks). Is that normal? IMO, snapraid might crash if there is an error somewhere, but it shouldn't take the whole system down...


    I'm wondering whether I should rebuild the whole parity disk...