Posts by crashtest

    We (Helios64 owners) are currently experiencing significant problems when we try to use the script from...

    https://github.com/OpenMediaVa…Script/raw/master/install

    ... to install the current OMV 7 on an Armbian Bookworm system.


    You have to realize that you're using an unsupported, "automated" Armbian build meant for devs only. Those builds are untested and unsupported by both Armbian and OMV. BTW: I'm in the same boat with an older SBC, the Rock64.


    This is the CLI logon for the Rock64, and it's what you're seeing as well:



    While I understand trying to get some use out of an older, more specialized SBC, the bottom line is that it's not supported.

    So, the script works fine, but not to do a fresh install. Still the same problem with a static IP.

    It should be noted that, in the prescribed R-PI installation, the first preinstall script must "execute" successfully. That means if a message like "could not resolve host" comes up, it has not executed successfully.

    After the preinstall script executes successfully, a file named 10-persistent-eth0.link is created at the following location:

    /etc/systemd/network

    If the file is not there, the script failed, and that's likely because the script's host couldn't be accessed.
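    A quick way to verify, assuming the default path above, is to check for the file from the command line; if it's listed, the preinstall script completed:

    ls -l /etc/systemd/network/10-persistent-eth0.link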

    I am able to cd into the mergerfs with root, but normal users can't. What am I doing wrong?

    How are you creating your shares? Are they created at the root of the mergerfs pool?


    The reason for the question is:

    If you're nesting shares inside of a folder that's at the root of the pool, the permissions of the parent folder can affect the shares nested within.


    Create a test share, at the root of the mergerfs pool, as shown in -> Creating a Network Share. Permissions will be wide open.
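    If you'd rather check from the command line, compare the permissions of the pool root and any parent folder. (A rough sketch; /srv/mergerfs/pool and parent-folder are placeholders for your actual mount point and folder names.)

    ls -ld /srv/mergerfs/pool
    ls -ld /srv/mergerfs/pool/parent-folder
    # If "others" lack read/execute on the parent folder, normal users can't
    # traverse into the shares nested beneath it. One possible fix:
    sudo chmod o+rx /srv/mergerfs/pool/parent-folder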

    If the scripts are that different, it should be possible for the plugin to config things for either script. We could have a checkbox to select which script you want.

    And, possibly, one of those checkboxes would enable @auanasgheps' default script settings.
    (A sort of middle-of-the-road script for SnapRAID newbies.)

    Alright, let's do this. I'll open a discussion/issue on GitHub so we can discuss it there.

    Before this goes to GitHub (where it will disappear for most forum users), what are you proposing? As noted earlier, the plugin already allows the customization of variables where, afterward (and saved), a Diff script is generated. With your version of a generated Diff, where the plugin is concerned, what would be different?

    So, what am I to do to get this to work?

    As noted above, the static IP address issue has been fixed.

    If you've already built an R-PI and are using DHCP only, run the following script and reboot:

    sudo wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/preinstall | sudo bash

    **Note, after running the script, a reboot is required.**

    After the reboot, static IP addressing in the GUI will work as designed.
    _____________________________________________________________________________

    Otherwise, when doing a new build, follow the standard ->build process.

    Seems like the plugin (I assume that is what you mean by automation) could be enhanced if people provided suggestions.

    I don't know. The plugin is pretty damned good as it is right now.

    Setting drive recovery considerations aside:
    Since a sync operation removes any chance of recovering previously deleted or changed files, the real question is when (or when not) to sync. That question, and where to set thresholds in the Diff section of the plugin, is a matter of the user's use case and personal preference.

    - If a user wants to automate with Diff AND they're downloading everything under the sun, the new-files threshold would have to be set high. (To allow a range that's typical for what they do.)
    - If they're worried about losing files or, say, changed documents, that setting should be low.

    What it really boils down to is this: the user must familiarize themselves with how SnapRAID works, understand their own use case (their own wants and needs), and tweak the Diff settings accordingly.

    In this case, the plugin allows new users to focus on Diff settings versus worrying about the ins and outs of an actual Diff script. That's exactly what the plugin should do.
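    For a feel of what the plugin automates, here's a minimal sketch of a threshold-gated Diff in shell. (Illustrative only; the plugin's generated script is far more complete, and the threshold value and the "remove" line-counting here are assumptions for the example.)

    #!/bin/bash
    # Count the deletions reported by snapraid diff; skip the sync if too many.
    DEL_THRESHOLD=100
    deleted=$(snapraid diff | grep -c '^remove ')
    if [ "$deleted" -gt "$DEL_THRESHOLD" ]; then
        echo "$deleted deleted files exceeds the threshold; review before syncing."
        exit 1
    fi
    snapraid sync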

    In the interim:

    After the OMV build, don't change anything in network settings. Use DHCP.

    If a set IP address is required, use a "static DHCP lease" at your router's DHCP server. (If you don't know how to do this, check the router OEM's website for how to set a static DHCP lease.)

    So, #1. Could this be a problem with OMV?

    I doubt it. The OS recognizes drives that are "presented to it". The presentation itself is a function of the controller, added expanders, etc.

    If there's a doubt about the SATA drive, connect it to a SATA port on the mobo. If OMV sees it, the next place to connect it is directly to the LSI controller. (Using one of the two cables.) Finally, install the drive in the EMC KTN-STL3.
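    At each step, a quick way to confirm whether the OS sees the drive is lsblk; the TRAN column shows the transport (sata, usb, etc.), which helps confirm how the drive is being presented:

    lsblk -o NAME,SIZE,MODEL,SERIAL,TRAN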


    #2 what else should I check?

    Depending on the outcomes of the above, the documentation on the controller and the external chassis is the place to start.

    I think I'll end up using BTRFS RAID5 + metadata as RAID1 as most people recommend.

    As I understand the problem, BTRFS in RAID5 has the same problem that mdadm RAID5 has: "the write hole". Both are largely mitigated with a UPS and, in the event of an extended power outage, a clean shutdown.
    (Back in the day, some of the higher-end hardware RAID controllers addressed the write hole issue with a battery backup on the controller itself.)
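    If you do go the BTRFS route, creating the filesystem with RAID5 data and RAID1 metadata is a one-liner. (A sketch only; the device names are placeholders, and the command destroys anything on those drives.)

    sudo mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd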

    why? what do you use instead?

    There are a number of variables. Sync, for new, changed, and deleted files, tends to be pretty fast. On the other hand, the size of the data store, the speed of the drives, USB-connected drives, etc., affect the length of time required for a scrub. To avoid issues (errors) with a sync and the following scrub, it's best to run an automated Diff after hours, when the server is not being used. After hours, where user access is unlikely, may mean somewhere between midnight and 06:00. If the data store is large, requiring several hours to complete a scrub, scrubbing less than 100% of the store may be necessary.

    With the above in mind, I'm of the opinion that scrubbing 25% of the store, once a week, is enough. In a month's time, all of it is checked for data integrity. Along with keeping an eye on SMART data, I believe that's enough.
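    For reference, a partial scrub like that looks something like the following on the command line, where -p sets the percentage of blocks to scrub and -o limits it to blocks older than the given number of days. (The values are illustrative; the plugin sets these for you.)

    sudo snapraid scrub -p 25 -o 7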

    What I keep an eye on are the difference thresholds when setting up Diff. (Changed, new, and deleted files.) While a reasonable number of changes must be allowed or the script won't complete (negating automation), excess changes are an indication that a closer look is needed. For automation purposes, I've found that 100 each, per week, is enough. Otherwise, if I add lots of files or do a large delete, I run a sync manually.

    The bottom line: it's best to read up on what SnapRAID does, try to get a good understanding of how it works, then set it up according to what works best for you.

    (the thread is quite old so please let me know if there are better alternatives in the GUI)

    That is definitely an old post. It might have been before the GUI plugin.

    1. where will the snapsync.log be saved? How do I access it? And will it be overwritten with each run to prevent the space being unnecessarily occupied?

    That was my arbitrary name for the log, a name I gave it back then. I believe it went to /root because the command line was run by the root account. If using the command line to run a sync, a path can be specified for the log file, such as: snapraid sync -l /var/log/snapsync.log
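    If you wanted to schedule that yourself, outside the GUI, a cron entry could look like the following. (A sketch; the schedule, log path, and binary location are assumptions, and OMV's scheduled tasks can do the same thing.)

    # /etc/cron.d/snapraid-sync - run a sync every Sunday at 02:00
    0 2 * * 0  root  /usr/bin/snapraid sync -l /var/log/snapsync.log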

    2. How should I set the settings below to be in harmony with the Scheduled tasks above?

    I don't use those particular settings anymore, however:



    Also, I see reference to the sync process but I do not see any checkbox or schedule for it. Can the scripts above now be achieved through the GUI with the new plugin version? In the "Scheduled diff" maybe? Is it also running a sync?

    The selections on the above GUI page construct the needed command(s) and switch(es) for the sync command. The Scheduled Diff button (above) runs the commands on a schedule. The Diff (differences) thresholds are checked before a sync and scrub are run.

    On average I access the NAS about two or three times a week. So I'm thinking spindown might be the way to go.

    If you're the only user and have no nightly automated tasks, then under those circumstances spindown seems reasonable to me.
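    The GUI sets spindown per drive but, for reference, the underlying mechanism is typically an hdparm standby timeout along these lines. (The device name and value are placeholders; with hdparm, -S values of 241 to 251 are in 30-minute units, so 242 is one hour.)

    sudo hdparm -S 242 /dev/sda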

    I also have the SMART mode set to 604800, a week, before it checks the hard drive.

    This also seems reasonable. However, you might consider a short SMART drive test once a week or, maybe, every other week. Again, the idea is to collect SMART stat updates before something becomes serious. (The check and the test, together, accomplish this.)
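    The GUI can schedule the tests but, for reference, kicking off a short self-test and reading the results from the command line looks like this (the device name is a placeholder):

    sudo smartctl -t short /dev/sda
    sudo smartctl -l selftest /dev/sda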

    The boot drive cloning operation I linked is an offline process that works well for 8 to 32GB thumb drives. Anything larger than 32GB adds considerable time to the cloning process. If a boot drive is USB-connected, it can be cloned offline. The question is, how long will it take?
    Cloning flash-media boot drives was a process that was ginned up and documented for beginners and SBC users. (**But it should be noted that experienced admins use cloned boot drives as well, because restorations are FAST.**)
    The linked cloning process is fully documented and explained at length, it's dirt simple, and it's easy to test the generated backup. Further, a restoration using "the backup" (replacing a physical device) is nearly foolproof and can be done in a matter of a few minutes.

    Other, more advanced methods (dd, OMV's backup plugin, etc.) are arguably better in that they can be automated and the admin may have a choice among several different backups. Further, dd and other methods may be better suited to larger boot devices because they tend to be faster. However, more advanced methods add decision points, time, and complexity to a potential restoration, when the admin may be "wobbling" in the aftermath of an outage. On the downside of the more advanced methods, many users tend NOT to test their backups or their restoration process before they need them. In some cases, when they need to restore their backup, they find that the backup or their process doesn't work.
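    For what it's worth, a bare-bones dd image of a boot drive looks like the following. (A sketch only; the device name and output path are placeholders, the source drive should be offline or unmounted, and reversing if= and of= will destroy the drive.)

    sudo dd if=/dev/sdX of=/path/to/omv-boot.img bs=4M status=progress conv=fsync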

    The bottom line is that there is no "right" or "wrong".
    What it boils down to is, what are you comfortable with?