Posts by crashtest

    In the S.M.A.R.T. setup it has a number of 1800 in the check interval section. Does that mean it checks every 30 minutes?

    Yes - the interval is in seconds, so 1800 seconds works out to 30 minutes.

    I'm using 86400, which is once a day (86400 seconds = 24 hours). In my opinion, that's frequent enough to update SMART stats.


    And if I have the spindown in effect, does it spin up the disk every time it checks?

    Yes, but you might consider letting them spin. When a motor spins up, startup current is at its highest and the motor is under the most stress. Hard drive motors are no exception to that general rule. If a drive is constantly spinning down and (re)starting, more current might actually be used (along with more wear and tear) versus just letting them spin.


    If your use case has no one using your NAS at night (including automated tasks during sleep hours) and for most of the day (work hours), that might be a good argument for spinning drives down. Otherwise, you might consider letting them spin.

    Your call.

    Try creating a shared folder / SMB network share in accordance with this process -> Creating a Network Share. The resultant shared folder will be at the root of the data drive. For simplicity in permissions (and to avoid permissions issues that can be created by using a nested path), that's where your data folders should be - at the root of the data drive. The created shared folder / network share will have wide-open permissions.

    Once you have the above network share working and writable, you can tighten permissions up in accordance with this doc -> NAS permissions.


    How can i detect a hardware issue as suggested by macom?

    Unfortunately, you can't. Logs, more or less, are the only indicator.

    While there are some stress-test distros for X86 platforms that are designed to induce CPU overheating, cycle through memory, etc., there's very little for ARM platforms. The fault, itself, is "the indicator". If you think about it, asking software to detect faults on the hardware it's running on borders on unrealistic.

    If you think there's a chance it's heat related, you could try putting a fan on the Banana Pi and its power supply as a test. Other than that:

    Things you could do in order of cost:

    1. A clean rebuild from scratch. (Make sure to test the downloaded image against its SHA hash - see the sketch after this list - and test the SD-card before burning it.)
    If you haven't used it before, here's a detailed -> install process to follow. (There's a process for OMV6 on the same site.)
    2. Try another SD-card.
    3. Try another power supply.
    4. A new Banana Pi. (This might be the time to try another SBC, if you choose.)
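
    As a minimal sketch of the hash check (the image filename is an example - use the name of the file you actually downloaded, and compare the output to the SHA256 hash published on the download page):

    # Compute the SHA256 hash of the downloaded image (filename is an example)
    sha256sum Armbian_bookworm_current.img.xz
    # If a .sha checksum file is provided alongside the image, this compares automatically:
    sha256sum -c Armbian_bookworm_current.img.xz.sha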

    Could it be the HDD enclosure causing this? Shall I replace it?

    There are common issues with USB-to-SATA bridges. Not passing SMART data is one of them. Flaky UAS ( -> USB Attached SCSI) implementations appear, in your case, to be another.
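
    If SMART data isn't passing through a bridge, it's sometimes still reachable on the CLI by telling smartctl to use SAT (SCSI-to-ATA Translation) pass-through. A minimal sketch, with /dev/sda as an example device:

    # Query SMART data through a USB-SATA bridge using SAT pass-through
    smartctl -a -d sat /dev/sda
    # Some bridges need a vendor-specific device type instead, e.g.:
    # smartctl -a -d usbjmicron /dev/sda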

    If you search the forum, you'll find that JMicron USB adapters have odd issues. Unfortunately, selecting another manufacturer may not be better. Since OEMs can (and do) change components and specs in their USB drive enclosures without notice, you might find that you're in the same position with a new enclosure.

    While there may be other options, depending on your preference, there appear to be two usable paths:

    - If your current setup is working and you're getting SMART data, I believe it's safe to ignore the UAS messages.
    - You could skip using drive enclosures and boot from a good quality thumbdrive. In that case, you'd be able to clone the thumbdrive for easy -> OS backup (a sketch follows this list). USB thumbdrives don't provide SMART data, but with a standby thumbdrive, it would be a matter of a few minutes to replace a suspected bad USB drive.
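
    As a minimal sketch of cloning a boot thumbdrive to a standby (the device names are examples - verify them with lsblk first, since dd overwrites the target without asking):

    # Identify the source (boot) and target (standby) thumbdrives
    lsblk
    # Clone the whole device; /dev/sdX = source, /dev/sdY = standby (examples)
    dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync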

    You might find WinSCP to be useful. It presents a remote Linux filesystem in a graphical manner and provides point-and-click tools for editing text files. Use caution, however, because direct file edits, permission changes, etc., can ruin your install.

    How to setup and use -> WinSCP.

    I noted the content under srv/remotemount/ was "weird" - it had old remote mounts that I had since deleted and didn't have what I expected inside. So I deleted the plugin, deleted the folders at this location (after disabling the share at the source), and then reinstalled the plugin and added the remote map once more. So far it seems to be working as expected.

    I've never seen that before but, then again, once I got my remote mounts working, I can't remember changing them thereafter.

    I was worried I was going to accidentally delete from the source, and I knew of no other way to clean up the content of srv/remotemount/


    If you need to delete a Mount that has write access at the source, you could change the username and password to something that doesn't exist at the remote source, save it (then verify that the mount is, in fact, not mounted), and finally delete it.

    When using Remote Mount, you're layering permissions issues.

    First:
    (If you want to "write" to the remote share.) The username and password that you're using in Remote Mount must have "write" access to the share, including the entire path to the share if the share is a nested folder. (If possible, you might consider using the TrueNAS root account and password for full access.)
    Second:
    If you're re-sharing the same remote share as a local share on OMV (along with network access), local users must have write access to the shared folder AND the SMB network share as well.
    Third:
    Docker containers are another wild card. With so many variables, depending on the actual container and how it works, I can't help you there.

    I would start with setting up and verifying write access at OMV. Once that's working, then look at Docker access.
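
    As a minimal sketch of that verification (the mount path and username are examples - substitute your own):

    # Try to create, then remove, a file on the remote mount as a local user
    sudo -u youruser touch /srv/remotemount/yourshare/write-test.txt
    sudo -u youruser rm /srv/remotemount/yourshare/write-test.txt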

    To follow up:
    There were no issues with an Armbian Bullseye / OMV6 build. All proceeded normally.

    To try to recreate the issues described by the OP Tosnic, I attempted to build Buster / OMV5 from Armbian's archive. In that case, there were repo errors (release file issues), with the script exiting on that version.

    I did update the OMV6 build procedure, directing users to Armbian's archive for downloading the appropriate Bullseye image.

    It seems I downloaded the nightly build

    There are clear notes in the OMV7 / Armbian build guide regarding nightly builds - "Dev's Only". (Not supported.)


    Does the OMV install script currently even work anymore for any Debian version older than Bookworm?

    While it's not in the Armbian build doc, the User Guide specifically states that SBC builds are for the -> "current version of OMV only". At this point, that's OMV7. Further, as previously noted, the OMV project can't change Armbian's handling of their archived repos.

    I haven't built OMV6 on Armbian recently. But, as it seems, I should. It may be necessary to put an "archived" statement in the OMV6/Armbian doc.

    Unfortunately, when it comes to older releases, the Armbian group tends to move on quickly. There's not much that can be done where Armbian's older (sometimes archived) repos are involved.

    Download Armbian Bookworm Minimal, at the bottom of -> this page and use -> this guide to install the latest version, OMV7.

    You could use this -> rsync process to back up your RAID array to the disk. The only change to the procedure would be the "source disk", which would be the mount point of the RAID array. Also, immediately following, there are instructions on how to recover to the backup disk if your RAID array dies.

    Once the RAID array is fixed, the source and destination can be reversed to recover the array. Reverse the shared folder recovery procedure and you'd be back on the array.
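
    As a minimal sketch of both directions (the mount points are examples - use the actual mount points of your array and backup disk, and see the linked doc for the full procedure):

    # Back up the RAID array to the backup disk
    rsync -av --delete /srv/dev-disk-by-label-array/ /srv/dev-disk-by-label-backup/
    # After the array is rebuilt, reverse source and destination to recover
    rsync -av --delete /srv/dev-disk-by-label-backup/ /srv/dev-disk-by-label-array/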

    Trying to change the Pi to a fixed IP address within OMV fails and bricks the ethernet interface. I even tried setting up wlan0 interface as a backup and found that even trying to activate wifi causes an error in OMV and won't allow the interface to be added or the config changes to be saved. This never used to happen in the older version I had setup on a Pi3.

    Have you tried this -> installation process?

    There's a preinstall script that takes care of the networking issue.
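
    For reference, that process boils down to roughly the following on the CLI (check the linked guide for the current commands; the URLs are as of this writing):

    # Run the preinstall script (handles the networking issue), then reboot
    wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/preinstall | sudo bash
    sudo reboot
    # After the reboot, run the OMV install script
    wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash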

    You've already named the pool (hdpool)

    In my case, I set filesystems at the root of the pool to segregate data types. That allows for setting ZFS properties for each filesystem (if needed), but what I was most interested in was creating custom snapshot intervals, for certain data types, by filesystem. For instance, I retain copies of personal documents for a year (even after they're deleted) before zfs-auto-snapshot purges them. That allows plenty of time to discover an unintentional delete or to pull back a previous file version. On the other hand, after being deleted, video files are retained for 30 days before they're permanently purged.

    Following is an example of how I segregate data types:
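
    A minimal sketch along those lines, with hypothetical filesystem names under the pool hdpool:

    # Create one filesystem per data type, at the root of the pool
    zfs create hdpool/documents
    zfs create hdpool/pictures
    zfs create hdpool/video
    # Confirm the layout
    zfs list -r hdpool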



    There's no right or wrong way, just personal preference.


    Please note, as ryecoaaron has said, this comes with risks and it's not supported. (The build warning should have been clear.)

    With that said, run the following on the CLI, as root:


    # Pass ACL settings through to child filesystems
    zfs set aclmode=passthrough hdpool
    zfs set aclinherit=passthrough hdpool
    # Enable POSIX ACL support
    zfs set acltype=posixacl hdpool
    # Store extended attributes in inodes (faster ACL handling)
    zfs set xattr=sa hdpool
    # Optional: enable lz4 compression
    zfs set compression=lz4 hdpool
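
    Afterward, you could verify that the properties took:

    # Show the properties set above
    zfs get aclmode,aclinherit,acltype,xattr,compression hdpool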


    If you decide to use ACLs at some point, the above will be helpful.
    (The last one, compression, is optional. Your call.)

    Don't forget to create child filesystems on the parent pool. The filesystem level is where you should create your shares. In my case, I've created a separate filesystem for each share. That lets me customize ZFS properties for each share and take individual filesystem snapshots.

    Along those lines, if you want to automate snapshots, give this -> doc a look. It also covers how to do selective restores.

    I've never attempted a ZFS install on a 32-bit device, or an ARM device, so I don't know if it's a possibility in OMV.


    Since OMV-extras is installed on your Helios4 as part of the scripted install, you could look for the ZFS plugin (openmediavault-zfs) under System, Plugins. If it's not there and you attempt to create a pool on the CLI, OMV won't integrate the pool, filesystems, etc., into the GUI. That means you won't be able to create network shares in the GUI.
    __________________________________________________________________________


    If ZFS is not possible, you still have options. (I'm going to assume that you want to use all 4 of the drive bays.)

    Traditional (mdadm) RAID:
    Since the Helios4 uses SATA drive connections, you might be able to configure RAID in the GUI (under Storage, Software RAID). In your case, RAID5, or two RAID1 pairs, would be possible. With traditional RAID5, you might consider using a UPS.
    (I'm not a fan of traditional RAID but many use it.)
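
    For what it's worth, the GUI's RAID5 setup amounts to roughly the following mdadm command (a sketch only - the device names are examples, and the GUI is the supported path):

    # Create a 4-disk RAID5 array from the Helios4's drive bays (example devices)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd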


    Rsync:
    You could set up two drive-to-drive rsync'ed pairs, which are the equivalent of RAID1. (I'd argue that an rsync mirror is better than traditional RAID1 because you'd have a real backup.) -> Rsync.

    -> SnapRAID + -> MergerFS:
    This pair of packages would give you the equivalent of RAID5, with a number of additional benefits such as data integrity, file and folder recovery, and others. But there's some reading to do to understand them. The plugin docs cover the basics of each plugin in an overview format. They can be used together or separately; they're not interdependent.

    BTRFS:
    (I don't know if BTRFS is available for ARM and/or 32-bit.) Similar to ZFS, BTRFS snapshots are supported in OMV, along with other features. The only caveat would be for BTRFS RAID5; just as with traditional RAID5, you should consider using a UPS. (That's not a bad idea in any case.)

    JBOD:
    There's nothing preventing you from formatting each disk and simply assigning individual network shares to each drive. Other than ease of management, there's no requirement to pool disks together.