Posts by crashtest

    Yes but Yacht runs in a container, so that would require mounting an additional volume to include your network shares, as it's restricted to its own filesystem

    I hate to admit that I didn't think about the restrictions surrounding that (Yacht itself is running in a container).

    Still, being able to navigate the file system of the host, to set a bind point, would be nice if possible. Thanks.

    I'll have to see if that's doable (since it's in a Docker container). It may not be.

    I think he was referring to being able to browse to the bind point "outside" of the container. Being able to navigate / browse to media storage (for example) on the host would be nice, versus being forced to type in (or paste) a letter-perfect path.

    It might not be a good idea to enable being able to change a default bind point on the inside of a container. That's more in the realm of the Docker developer.

    There may be more involved with this issue than OMV, especially if the hdparm commands on the CLI don't work. (I'm assuming you tried macom 's suggestion.) Note that OMV is a server management tool, that is loaded on top of Raspbian.

    This issue may have something to do with Raspbian or the hard drive itself. If the hard drive doesn't fully support APM, or if WD's USB to SATA bridge is filtering drive commands, attempting to configure APM is not going to work (obviously). You might check with the WD website to see if your model of WD - USB drive fully supports APM and hdparm commands.
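For CLI testing, something along these lines would show whether the drive accepts APM commands at all. (The device name below is an example; check yours with lsblk first.)

```shell
# /dev/sda is an example device name - confirm yours with lsblk first.
# Read the current APM setting; "APM_level = not supported" or an I/O error
# here suggests the drive or its USB-SATA bridge is filtering the command.
sudo hdparm -B /dev/sda

# Try setting a mid-range APM level (values at or below 127 permit spin-down):
sudo hdparm -B 127 /dev/sda

# Force an immediate standby as a one-off test - if the drive won't spin
# down on this, APM configuration is unlikely to work either:
sudo hdparm -y /dev/sda
```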

    Alternatively, note that the power savings from idling one drive is minuscule to nonexistent (depending on usage), and it's arguable whether there's any savings in drive wear and tear from spinning the drive down. (In fact, regardless of size, the greatest wear, tear, and power consumption comes from spinning an electric motor up.)

    The easiest choice is not to attempt to idle the drive.

    First, I'm going to assume that the 120GB SSD will be the boot drive and the 240GB might be for OS backup and / or utility uses.

    I'd have to ask a few more questions, such as: what are you going to store, primarily? When you say "file server", if video files are to be the bulk of your storage (up to 1GB or more per file), that might affect the way you may want to set it up.

    Do you want to use all disks, under a common mount point?
    Would you consider dividing them up? (4x4TB in one group, with the 1TB and 6TB in another?)

    To speculate a bit, you might benefit from SNAPRAID and Mergerfs. Both will work with dissimilar sized data disks.
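As a rough sketch only (the disk labels and mount paths below are made up; yours will differ), a SNAPRAID layout for drives like these might look like the following, with the 6TB as parity since the parity disk must be at least as large as the biggest data disk:

```
# Hypothetical /etc/snapraid.conf - labels and mount paths are examples only.
parity /srv/dev-disk-by-label-6TB/snapraid.parity

# Content (checksum) files - keep copies on more than one disk:
content /srv/dev-disk-by-label-4TB1/snapraid.content
content /srv/dev-disk-by-label-4TB2/snapraid.content

# Data disks - dissimilar sizes are fine:
data d1 /srv/dev-disk-by-label-4TB1/
data d2 /srv/dev-disk-by-label-4TB2/
data d3 /srv/dev-disk-by-label-4TB3/
data d4 /srv/dev-disk-by-label-4TB4/
data d5 /srv/dev-disk-by-label-1TB/
```

Mergerfs would then pool the data mounts under a single common mount point.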

    I found macvlan configuration to be somewhat confusing at first. Especially when portainer was complaining about the gateway already being used.

    I now have a macvlan that is used by my containers and corresponds to my LAN.

    I'm thinking more along the lines of how to describe a MacVlan interface to new users, how to configure it, and what it takes to write an easy to understand "walk-through" for configuring Docker containers. While it has a lot of options and is easier to use than the command line, Portainer doesn't lend itself to those tasks.
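For what it's worth, the command-line version boils down to one command. The values below are examples for a 192.168.1.0/24 LAN with the NIC on eth0; they'd need adjusting to match your network:

```shell
# Example values only - substitute your own subnet, gateway, and NIC name.
# --ip-range carves out the addresses Docker may hand to containers, so
# they don't collide with the router's DHCP pool.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  --ip-range=192.168.1.224/27 \
  -o parent=eth0 \
  macvlan0
```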

    Here's to hoping that Yacht will make things easier for all concerned.

    These are the SMART stats most likely to be incrementing before a drive failure:

    SMART 5 – Reallocated_Sector_Count.

    SMART 187 – Reported_Uncorrectable_Errors.

    SMART 188 – Command_Timeout.

    SMART 197 – Current_Pending_Sector_Count.

    SMART 198 – Offline_Uncorrectable.

    You seem to have a problem with "Command Timeouts". That suggests the problem may (rpt "may") be with "hardware" - whether that's the drive's interface board or your add-on PCI SATA controller. (Please note, this is "speculation". If you have utilities for testing the add-on PCI card, I'd run them.)

    As a general rule, setting aside commercial-style SAS/SATA RAID cards which get exhaustive testing and are widely used, I'd stick with the MOBO's SATA ports first and use the PCI add-on card once there are no MOBO ports left. (But that's just my opinion.)

    If the drive reconnected using a MOBO SATA port, I'd let it run awhile for verification and check that the count on SMART 188 (Command timeout) has stopped incrementing. Then, I'd put the drive back on the card to verify the problem.
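For reference, the same attributes can be checked from the CLI. (The device name is a placeholder; substitute the actual drive.)

```shell
# /dev/sdX is a placeholder - find the real device with lsblk.
# Print all SMART attributes:
sudo smartctl -A /dev/sdX

# Or filter down to the failure-predicting attributes listed above:
sudo smartctl -A /dev/sdX | grep -E '^ *(5|187|188|197|198) '
```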

    I assumed re-installing the whole system would basically take the same amount of time than restoring the backup,

    If you're extensively configured, this may not be true. A straight reinstall takes about 15 minutes. If the configuration is extensive, with Dockers installed and configured, it may take several hours to recover, with part of that being trying to remember what was done.

    If using thumbdrives, cloning them is dirt simple and is covered in this -> guide, under OS backup. If something happens, plug in the backup and boot.

    Otherwise, there's various threads on the forum for using dd and other methods of regularly cloning a boot drive.
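As a sketch of the dd approach (the device names in the comment are examples only; verify with lsblk before running anything, since dd overwrites the target without asking):

```shell
# Real-world form for cloning a boot thumbdrive (device names are examples -
# verify with lsblk, and triple-check 'of=', since dd overwrites silently):
#   sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=fsync status=progress
#
# The same mechanics, demonstrated safely here on ordinary files:
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"     # 1MB of test data
dd if="$src" of="$dst" bs=64K conv=fsync status=none
cmp -s "$src" "$dst" && echo "clone verified"
```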

    I haven't tried Yacht yet, but one of the first things I'd be looking at would be easy MacVlan configuration.

    Being able to assign a separate IP address to a container is a desirable feature, but it seems that configuring a MacVlan interface is an exacting exercise, with pitfalls where the config simply doesn't work.
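One pitfall worth knowing about: by default, the Docker host itself can't reach containers on a macvlan network. A common workaround is a small macvlan "shim" interface on the host. The addresses below are examples for a 192.168.1.0/24 LAN and would need adjusting:

```shell
# Example addresses - adjust to your LAN. Run as root.
# Create a host-side macvlan interface bridged to the same NIC:
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up
# Route the container address range through the shim:
ip route add 192.168.1.224/27 dev macvlan-shim
```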

    What does SMART say? In the GUI, go to Storage, SMART and, in the Settings tab, enable it.

    In the Devices tab, click on the drive, Edit and enable SMART monitoring.
    In the Scheduled Tests tab, click on the drive, Edit, and set up a short self test. Enable it and set Hour "2" on Sunday or something like that. Save it. Then click on your enabled test and the run button. It will take 5 minutes or so to complete a short drive self test.

    To see the results, go back to the Devices tab, click on the drive, then on the Information button.

    A dialog window will pop up. Click on Attributes tab. (Extended information should be reviewed as well.)
    If we're talking about a spinning drive and the following attributes are incrementing up, they're indicative of a future drive failure.

    SMART 5 – Reallocated_Sector_Count.

    SMART 187 – Reported_Uncorrectable_Errors.

    SMART 188 – Command_Timeout.

    SMART 197 – Current_Pending_Sector_Count.

    SMART 198 – Offline_Uncorrectable.

    **One or two, in the raw count, means nothing other than keep an eye on it. However, if these counts are incrementing upward, a drive failure may be in the making.**

    I was thinking of dd instead of Rsync.

    If you know how to set up the command line, dd will work. Note gderf 's comment about expanding the file system. After that's done, re-pointing shared folders to the new disk, within OMV's GUI, will work as well. Network shares and other services layered onto shared folders will follow to the new drive.

    Odroid-hc2 has only one USB port for external disk.

    Do you have a powered USB hub or a two slot drive dock? The Rsync example in the guide is for locally connected disks. A remote host-to-host copy is a different matter.

    You still recommend Rsync in my situation?

    Whatever you're comfortable with is the way you should go.

    You can use Rsync for a full disk to disk copy - mirror. See this -> guide on page 59. This can be done from the GUI.

    Wipe, provide a volume name, and format the new destination disk with the file system of your choice and run the Rsync command line, set up as shown in the guide. **In your case, be sure to leave out the --delete switch when setting up the command line. The --delete switch does not apply to your scenario.**

    Note that the operation may take hours, depending on the amount of data. Also, other than the scrolling file list, there will be no "progress meter" or "percentage completed" indication. If you navigate away from the page with the scrolling file dialog, Rsync will still be working, but there will be no indication that the job is in progress or complete. The only way to verify that the job has completed, fully, is to run the command line again, at which point no files will be listed.

    When the copy is complete and verified, redirect your shared folders to the new drive as indicated in the guide.

    Would Raid5 have prevented this from happening?

    RAID5 does not have a checksum checking / testing capability that will repair bit-rot. It can only replace a failed or failing drive, under certain conditions. Further, running RAID5 on SBC's with USB-connected drives is a really bad idea.

    SMART is the best tool to detect when a drive is beginning to go south.

    First, set up User Notifications. That's covered starting on page 35 in this OMV5 -> guide. With that done and tested, if SMART errors or other file system issues are detected, you'll get an E-mail heads up.

    Then, under Storage, SMART, enable SMART monitoring in the Settings tab.
    Under the Devices tab, consider running an after hours short test, once a week. I've found the short test to be enough, but others use the long test once a month. These tests are recorded and can be referred to for future use.
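For those who prefer the CLI, the rough equivalent of a scheduled short test looks like this (the device name and schedule are examples):

```shell
# Kick off a short self-test (runs in the background on the drive itself):
sudo smartctl -t short /dev/sdX

# Review the self-test log and results afterwards:
sudo smartctl -l selftest /dev/sdX

# Or schedule it weekly via cron, e.g. 2 AM on Sundays (/etc/crontab form):
#   0 2 * * 0 root /usr/sbin/smartctl -t short /dev/sdX
```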

    The following are the SMART stats to keep an eye on for spinning drives:

    SMART 5 – Reallocated_Sector_Count.

    SMART 187 – Reported_Uncorrectable_Errors.

    SMART 188 – Command_Timeout.

    SMART 197 – Current_Pending_Sector_Count.

    SMART 198 – Offline_Uncorrectable.

    (One or two counts of the above may not mean anything, but if they start to steadily increment, a drive failure may be in progress.)


    I believe the problem you're experiencing may be related to how you're implementing SNAPRAID.
    Consider the following operations, which may be set up to run in Scheduled Tasks, and note that the sequence is important.

    First, note that when I used SNAPRAID on a backup server, I ran a SYNC operation once every two weeks. That gave me some time to receive E-mail stats and intervene if a problem was detected.

    (The following is the way I did it. There are many other ways this can be done. Perhaps others will chime in with their routine and rationale.)

    snapraid -p 100 -o 13 scrub
    **scrubs all files that have not been scrubbed in two weeks. This operation is done two days before a sync.**

    snapraid -e fix
    **With the output of the above, SNAPRAID fixes the corrupted files found, using their checksums and parity info. This operation is done the day before a SYNC.**

    snapraid touch; snapraid sync -l snapsync.log
    **This SYNCs new files added during the last two weeks, along with new checksums and parity for the corrected files from above. The touch command takes care of a typical annoyance (files with a zero sub-second timestamp) and -l logs output to the file name shown.**

    If notifications are set up and turned on in Scheduled Tasks, all of the output of the above is E-mailed to you for review and consideration.


    snapraid --force-zero sync

    **This command forces a sync when files of "0" byte length are detected - an annoying habit of Windows clients.**

    SNAPRAID "diff" scripts are also something to consider.
    Basically, a diff script can be set up to stop a SYNC operation if there's a designated percentage of files that are different from their checksums. This is an important consideration if a drive begins to fail where there are a large number of files that do not match their checksums. This is designed to keep the last SYNC available for drive recovery.
    ( gderf and others know more about customizing a diff script than I do.)
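As a very rough sketch of the idea (the threshold, names, and parsing below are my assumptions, not from any particular guide), the core of a diff script is counting changes and refusing to sync past a limit:

```shell
# Hypothetical sketch - the threshold and parsing here are assumptions,
# not a production diff script.
THRESHOLD=200   # refuse to sync if more than this many files changed

count_changes() {
    # Reads `snapraid diff` output on stdin; counts removed + updated files.
    grep -cE '^(remove|update) ' || true
}

# Real use would look something like this (commented out here):
#   changed=$(snapraid diff | count_changes)
#   if [ "$changed" -gt "$THRESHOLD" ]; then
#       echo "Too many changes ($changed) - skipping sync" >&2
#   else
#       snapraid sync
#   fi
```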

    Bottom line: SNAPRAID is one of the very few "easy" choices for error detection AND correction on SBC's. If combined with notifications and SMART drive testing, that covers a lot of bases. Nothing is perfect, but the combination (SNAPRAID and SMART) is very good. Of course, 100% backup of all data is highly recommended.


    At this point I'm thinking i need to get something more professional.

    I started out with a pair of R-PI2B's, several years ago, and rapidly came to the same conclusion. There's nothing wrong with SBC's and, for many users, they're enough. With two SBC's, even the backup function can be taken care of at low cost.

    However, in my case, I wanted something a bit more robust for dealing with LAN client backup and iron clad bit-rot protection, to include ECC RAM. I bought a small SOHO server to serve as my primary box, set up OMV with ZFS, and haven't looked back.

    /---/ , just hate the idea of them hanging outside the chassis. You could get an internal adapter cable I guess and run it to a header, which I'd do if I needed the drive bay.

    I put my servers in a closet. I use a cable to connect to a USB port on the back side, and lay the drive on the top of the case. That makes for easy access for backup and nearly eliminates the possibility of breaking them.

    There are a number of valid reasons to use an SSD to boot, but I tend to push new users toward thumbdrives for a number of reasons. Chief among them: they're cheap, so buying two is no big deal, and they're dirt simple to clone / back up.