Posts by crashtest

    So, #1. Could this be a problem with OMV?

    I doubt it. The OS recognizes drives that are "presented to it". The presentation itself is a function of the controller, added expanders, etc.

    If there's a doubt about the SATA drive, connect it to a SATA connection on the mobo. If OMV sees it, the next place to connect it is directly to the LSI controller. (Using one of the two cables.) Finally, install the drive in the EMC KTN-STL3.


    #2 what else should I check?

    Depending on the outcomes of the above, the documentation on the controller and the external chassis is the place to start.

    I think I'll end up using BTRFS RAID5 + metadata as RAID1 as most people recommend.

    As I understand the problem, BTRFS in RAID5 has the same "write hole" problem that mdadm RAID5 has. Both issues are fixed with a UPS and, in the event of an extended power outage, a clean shutdown.
    (Back in the day, some of the higher-end hardware RAID controllers addressed the write hole issue with a battery backup on the controller itself.)
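
    For illustration only, here's a minimal sketch of what that layout looks like from the command line (the device names are placeholders - check yours with lsblk first):

        # create a Btrfs volume with RAID5 data and RAID1 metadata across three drives
        mkfs.btrfs -L data -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd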

    why? what do you use instead?

    There are a number of variables. Sync, for new, changed, and deleted files, tends to be pretty fast. On the other hand, the size of the data store, the speed of the drives, USB-connected drives, etc., affect the length of time required for a scrub. To avoid issues (errors) with a sync and the following scrub, it's best to run an automated Diff after-hours, when the server is not being used. After-hours, where user access is unlikely, may mean somewhere between midnight and 06:00 AM. If the data store is large, requiring several hours to complete a scrub, scrubbing less than 100% of the store may be necessary.

    With the above in mind, I'm of the opinion that scrubbing 25% of the store, once a week, is enough. In a month's time, the entire store is checked for data integrity. Along with keeping an eye on SMART data, I believe that's enough.

    What I keep an eye on are the difference thresholds (changed, new, and deleted files) when setting up Diff. While a reasonable number of changes must be allowed or the script won't complete (negating automation), excess changes are an indication that a closer look is needed. For automation purposes, I've found that 100 each, per week, is enough. Otherwise, if I add lots of files or do a large delete, I run a sync manually.
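
    As a rough sketch only (the GUI plugin builds and runs its own commands; the log path and numbers below are just examples), an after-hours run boils down to something like:

        snapraid diff                               # list new, changed and deleted files first
        snapraid sync -l /var/log/snapsync.log      # update parity, writing a log file
        snapraid scrub -p 25 -o 7                   # scrub 25% of the array, only blocks older than 7 days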

    Bottom line: it's best to read up on what SnapRAID does, try to get a good understanding of how it works, then set it up according to what works best for you.

    (the thread is quite old so please let me know if there are better alternatives in the GUI)

    That is definitely an old post. It might have been before the GUI plugin.

    1. Where will the snapsync.log be saved? How do I access it? And will it be overwritten with each run, to prevent space being unnecessarily occupied?

    That was my arbitrary name for the log, back when I set it up. I believe it went to /root because the command line was run by the root account. If using the command line to run a sync, a path can be specified for the log file, such as: snapraid sync -l /var/log/snapsync.log

    2. How should I set the settings below to be in harmony with the Scheduled tasks above?

    I don't use those particular settings anymore, however:



    Also I see reference to the sync process but I do not see any checkbox or schedule for it. Can the scripts above now also be achieved through the GUI with the new plugin version? In the "Scheduled diff" maybe? Is it also running a sync?

    The selections in the above GUI page construct the needed command(s) and switch(es) for the sync command. The Scheduled Diff button (above) runs the commands on a schedule. The Diff (Differences) thresholds are checked before a sync and scrub are run.

    On average I access the NAS about two or three times a week. So I'm thinking spindown might be the way to go.

    If you're the only user and have no nightly automated tasks, spindown seems reasonable to me.

    I also have the SMART mode set to 604800, a week, before it checks the hard drive.

    This also seems reasonable. However, you might consider a short SMART drive test once a week or, maybe, every other week. Again, the idea is to collect SMART stat updates before something becomes serious. (The check and the test accomplish this.)
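
    If you ever want to kick one off by hand and read the results, the standard smartmontools commands will do it (the device name is a placeholder):

        smartctl -t short /dev/sdX    # start a short self-test (usually a couple of minutes)
        smartctl -a /dev/sdX          # view the test result and the full SMART attribute list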

    The boot drive cloning operation I linked is an off-line process that works well for 8 to 32GB thumb drives. Anything larger than 32GB adds considerable time to the cloning process. If a boot drive is USB connected, it can be cloned off-line. The question is, how long will it take?
    Cloning flash media boot drives was a process that was ginned up and documented for beginners and SBC users. (But it should be noted that experienced admins use cloned boot drives as well, because restorations are FAST.)
    The linked cloning process is fully documented and explained at length, it's dirt simple, and it's easy to test the generated backup. Further, a restoration using "the backup" (replacing the physical device) is nearly foolproof and can be done in a matter of a few minutes.

    Other, more advanced methods (dd, OMV's backup plugin, etc.) are arguably better in that they can be automated and the admin may have a choice among several different backups. Further, dd and other methods may be better suited to larger boot devices because they tend to be faster. However, more advanced methods add decision points, time, and complexity to a potential restoration, when the admin may be "wobbling" in the aftermath of an outage. On the down side of the more advanced methods, many users tend NOT to test their backups or their restoration process before they need them. In some cases, when they need to restore a backup, they find that the backup or their process doesn't work.
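
    For what it's worth, a basic off-line dd clone is a one-liner, but it's also a good example of those decision points - the if/of devices below are placeholders, and getting them backwards will overwrite the wrong drive:

        dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync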

    Bottom line: there is no "right" or "wrong".
    What it boils down to is, what are you comfortable with?



    In the S.M.A.R.T. setup it has a number of 1800 in the check interval section. Does that mean it checks every 30 minutes?

    Yes

    I'm using 86400, which is once a day (every 24 hours). In my opinion, that's frequent enough to update SMART stats.


    And if I have the spindown in effect, does it spin up the disk every time it checks?

    Yes, but you might consider letting them spin. When a motor spins up, startup current is highest and the motor is under the most stress. Hard drive motors are no exception to that general rule. If a drive is constantly spinning back up, more current might actually be used (along with more wear and tear) versus just letting them spin.


    If your use case has no one using your NAS at night (to include automated tasks during sleep hours) and for most of the day (work hours), that might be a good argument for spinning drives down. Otherwise, you might consider letting them spin.

    Your call.

    Try creating a shared folder / SMB network share, in accordance with this process -> Creating a Network Share. The resultant shared folder will be at the root of the data drive. For simplicity in permissions (and to avoid permissions issues that can be created by using a nested path), that's where your data folders should be - at the root of the data drive. The created shared folder / network share will have wide open permissions.
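
    If you want to confirm, from a Linux client, that the new share is reachable and writable, something like the following works (the server name, share name, user, and test file are all placeholders):

        smbclient //omv-server/Share -U someuser -c 'put testfile.txt'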

    Once you have the above network share working and writable, you can tighten permissions up in accordance with this doc -> NAS permissions.


    How can I detect a hardware issue, as suggested by macom?

    Unfortunately, you can't. Logs, more or less, are the only indicator.

    While there are some stress test distros for X86 platforms that are designed to induce CPU overheating, cycle through memory, etc., there's very little for ARM platforms. The fault, itself, is "the indicator". If you think about it, asking software to detect faults in the hardware it's running on borders on unrealistic.

    If you think there's a chance it's heat related, you could try putting a fan on the Banana Pi and its power supply as a test. Other than that:

    Things you could do in order of cost:

    1. A clean rebuild from scratch. (Make sure to test the downloaded image against its SHA hash and test the SD-card before burning it - see the example after this list.)
    If you haven't used it before, here's a detailed -> install process to follow. (There's a process for OMV6 on the same site.)
    2. Try another SD-card.
    3. Try another power supply.

    4. A new Banana Pi. (This might be the time to try another SBC, if you choose.)
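
    For item 1, checking the download is just a checksum comparison, and the f3 tools (if installed) can exercise the SD-card (the file and mount-point names below are placeholders):

        sha256sum Armbian_*.img.xz                        # compare against the hash published on the download page
        f3write /media/sdcard && f3read /media/sdcard     # fill the card's free space with test data, then verify it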

    Could it be the HDD enclosure causing this? Shall I replace it?

    There are common issues with USB to SATA bridges. Not passing SMART data is one of them. UAS ( -> USB Attached SCSI) implementations, in your case, appear to be another.

    If you search the forum, you'll find that JMicron USB adapters have odd issues. Unfortunately, selecting another manufacturer may not be better. Since OEMs can (and do) change components and specs in their USB drive enclosures without notice, you might find that you're in the same position with a new enclosure.

    While there may be other options, depending on your preference, there appear to be two usable paths:

    - If your current setup is working and you're getting SMART data, I believe it's safe to ignore the UAS messages.
    - You could skip using drive enclosures and boot from a good quality thumbdrive. In that case, you'd be able to clone the thumbdrive for easy -> OS backup. USB thumbdrives don't provide SMART data but, with a standby thumbdrive, it would be a matter of a few minutes to replace a suspected bad USB drive.
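
    On the SMART point, if you want to see whether a given bridge passes SMART at all, smartctl can be told to use the SAT pass-through explicitly (the device name is a placeholder):

        smartctl -d sat -a /dev/sdX    # if this errors out, the bridge isn't passing SMART through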

    You might find WinSCP to be useful. It presents a remote Linux filesystem in a graphical manner and provides point-and-click tools for editing text files. Use caution, however, because direct file edits, permission changes, etc., can ruin your install.

    How to setup and use -> WinSCP.

    I noted the content under srv/remotemount/ was "weird": it had old remote mounts that I had since deleted and didn't have what I expected inside. So I deleted the plugin, deleted the folders at this location (after disabling the share at the source), and then reinstalled the plugin and added the remote map once more. So far it seems to be working as expected.

    I've never seen that before but, once I got mine working, I can't remember ever changing them thereafter.

    I was worried I was going to accidentally delete from the source, and I knew of no other way to clean up the content of srv/remotemount/


    If you need to delete a Mount that has write access at the source, you could change the username and password to something that doesn't exist at the remote source, save it (then verify that the mount is, in fact, not mounted), and finally delete it.

    When using Remote Mount, you're layering permissions issues.

    First:
    (If you want to "write" to the remote share.) The username and password that you're using in Remote Mount must have "write" access to the share, to include the entire path to the share if the share is a nested folder. (If possible, you might consider using the TrueNAS root account and password for full access.)
    Second:
    If you're re-sharing the same remote share as a local share on OMV (along with network access), local users must have write access to the shared folder AND the SMB network share as well.
    Third:
    Dockers are another wild card. With so many variables, depending on the actual Docker and how it works, I can't help you there.

    I would start with setting up and verifying write access at OMV. Once that's working, then look at Docker access.
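
    As a quick, hypothetical check of write access at the OMV level (the mount name is a placeholder for whatever Remote Mount created under /srv/remotemount/):

        touch /srv/remotemount/mymount/.writetest && rm /srv/remotemount/mymount/.writetest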

    To follow up:
    There were no issues with an Armbian Bullseye / OMV6 build. All proceeded normally.

    To try to recreate the issues described by the OP Tosnic, I attempted to build Buster / OMV5 from Armbian's archive. In that case, there were repo errors (release file issues), with the script exiting on the version.

    I did update the OMV6 build procedure, directing users to Armbian's archive for downloading the appropriate Bullseye image.

    It seems I downloaded the nightly build

    There are clear notes in the OMV7 / Armbian build guide, regarding nightly builds - "Dev's Only". (Not Supported.)


    Does the OMV install script currently even work anymore for any Debian version older than Bookworm?

    While it's not in the Armbian build doc, the User Guide specifically states that SBC builds are for the -> "current version of OMV only". At this point, that's OMV7. Further, as previously noted, the OMV project can't change Armbian's handling of their archived repos.

    I haven't built OMV6 on Armbian recently but, as it seems, I should. It may be necessary to put an "archived" statement in the OMV6/Armbian doc.

    Unfortunately, when it comes to older releases, the Armbian group tends to move on quickly. There's not much that can be done where Armbian's older (sometimes archived) repos are involved.

    Download Armbian Bookworm Minimal, at the bottom of -> this page and use -> this guide to install the latest version, OMV7.
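
    For reference, once Armbian Bookworm Minimal is burned and booted, the SBC route is just the OMV install script run over SSH - this is the command as commonly given, but check the linked guide for the current version before running it:

        wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash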