Posts by crashtest

    Again, did you see anything in dmesg?

    Further, have you looked closely at the SMART stats for the drives? If you've been using them for a while, one (or more) of them may be developing issues.

    Per the error message, the following is the drive with the "Structure Needs Cleaning" note. This drive may need attention:
    /srv/dev-disk-by-uuid-9b0d428f-43c5-436d-b400-d2191593a2aa
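
    If you want to dig into that drive from the command line, a minimal sketch would be something like the following (the /dev/sdX device is a placeholder; substitute whatever the UUID actually resolves to):

        # See which device node the UUID resolves to (e.g. ../../sdb1 means the disk is /dev/sdb)
        ls -l /dev/disk/by-uuid/9b0d428f-43c5-436d-b400-d2191593a2aa

        # SMART health summary and attribute table for that disk
        smartctl -H -A /dev/sdX

        # Re-check the kernel log for ATA / filesystem errors
        dmesg | grep -i -E "error|ata|ext4"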


    As they were all in an array previously.

    It's only 19% done, so there is a day or two left. What's another day or two, right?

    I have to admit that I didn't think about the "size" of your array and the time it takes to recreate an mdadm RAID array. On the other hand, it's fairly obvious that something's not right.


    I did clean each drive, but only with a quick erase. So you think I should secure erase them and start over?

    Running a "full secure", erase on each drive, would take a lot of time. I don't believe that's necessary. However, if you "start" a full secure and, after 2 minutes or so interrupt it (hit Stop), then follow that with a quick erase, that's better then doing a quick erase by itself. (This is true if your drives have been used as RAID drives prior.) "Secure" erases RAID flags (at the beginning and end of the drive) and the boot sector, then begins working on the rest of the media from there. "Quick" by itself is not as comprehensive.


    No, all 8 are not on the same controller; 6 are on the motherboard's SATA ports and 2 are on a PCIe SATA controller, but it has worked this way before.

    Things work until they don't. RAID is somewhat sensitive to roughly equal bandwidth for all drives, which is why RAID over USB is not reliable. (RAID over USB is one of those scenarios where sometimes it works, until it doesn't.) If I were to speculate, your Mobo drive ports have more bandwidth available to them than the two that are connected with the add-on card.

    ________________________________________________________________________________________


    If you're worried about creation time, have you thought about using ZFS? While the setup is different from mdadm RAID, most of it is in the GUI. Creating a ZFS pool is very fast.
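
    For a rough idea of what that looks like under the hood, a minimal sketch follows (the pool name "tank", the raidz2 level, and the /dev/sd* names are placeholders; by-id device paths are generally preferred, sd* is just shorter here):

        # Pool creation is near-instant; there is no multi-day resync like mdadm
        zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

        # Verify the layout and state
        zpool status tank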

    While I don't have anything concrete for you: as the error dialog indicated, did you look at the dmesg output?

    I am not seeing a format option or able to use the filesystems section to format the array.

    Since you're at the format stage, I'm assuming that existing data is not an issue:

    - Are you doing your RAID configuration on the CLI?
    - Are all 8 drives on the same controller?
    - Did you erase each drive before creating the mdadm array?
    Sometimes "quick" erase is not enough. If these drives are being reused, "secure" will erase old RAID drive signatures and the boot sectors. You don't have to wait for the entire drive to be securely erased. A couple of minutes per drive should do, then cancel. Once all drives have had the "secure" treatment, go back and do a quick erase. Then attempt to create the array.

    The second question could be: may I reinstall without losing the files I copied to the disk?

    Yes, you can. The files you have on your data disks are still there. In fact, given that something appears to have gone wrong with your boot drive, this might be a good time to upgrade.


    Will I need to format again?

    Absolutely not. If you format your two data drives, your data will (at that point) be gone. If you want, as an insurance policy, you might disconnect your internal data drives until after you finish your rebuild. (That would guarantee that no mistakes are made.)

    If you plan to use USB thumbdrives, I'd recommend that you get two new drives so you'll have a backup. Once you're up and running again, you might be able to avoid a recurrence of a boot drive problem if you -> clone your boot drive.

    If I were you, I wouldn't rebuild on your existing thumbdrive. There might be something wrong with it. If you have no choice at the moment, at the very minimum test it as shown in the build process below.

    When rebuilding, you can follow the build process starting -> here. (Note: If using a thumbdrive as a boot drive, the installation of the Flashmemory plugin is required. That's described in the document.)

    Or can I mount them without formatting?

    Yes, you can mount drives without formatting, which is exactly what you want to do.

    After the rebuild is complete:
    - Stop at the section called A Basic Data Drive.
    - Shutdown, reconnect your data drives, and boot up.
    _______________________________________________________________________________

    In the GUI, go to Storage, Filesystems and click on the right arrow (mount an existing filesystem).
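
    If you'd like a quick sanity check from the command line before mounting, the following is read-only and lists every drive along with the filesystem, label, and UUID on it:

        # Show all block devices and the filesystems they carry
        lsblk -f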

    When your drives are mounted:
    - To recreate your shared folders, follow the instructions in -> this post, using a newly remounted drive in the Filesystem field.

    - After recreating all your shared folders, for both drives, go -> here to recreate your SMB network shares.

    That's why everyone should always read the wiki and the official instructions of each service they install; at least I'm doing that.

    In the wiki, efforts have been made to eliminate as many of the pitfalls as possible, so thanks for that.

    On the other hand, you'd be surprised at the number of users that (seemingly) go out of their way to deviate from instructions, follow an unsupported "how-to" on the net, or even come up with their own "custom install" (like using Debian arm64, attempting to install on Ubuntu, etc.). When it doesn't work, for some reason, they end up here. Then comes the "extraction" process of trying to find out what they did.

    Why are you asking these questions? How are they relevant?

    We've had a user install arm64 directly from the Debian repos. The end result is significantly different from using Raspberry PI OS.
    (And the difference was not discovered until the thread was well underway.)

    The flash tool shouldn't make a difference.

    True, but the reason for the question was: if the Raspberry PI Imager is used, the only choices are 32- or 64-bit Raspberry PI OS.

    I don't know if you did an "upgrade" or an "update", what version of OMV you're using, etc. If you want help, you need to provide information beyond, "I did an upgrade and it's broke".

    I suspect that you may be running mdadm RAID and you've done an OMV version upgrade, or something similar, and your "upgrade" needs the "Multiple Device" plugin.

    If you are running mdadm RAID, install the plugin pictured below:
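
    If you're not sure whether mdadm is actually in play, a quick read-only check on the command line is:

        # Lists any mdadm arrays the kernel currently knows about
        cat /proc/mdstat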

    Yes, for a zpool create via the WEBUI, as zpool history would show, e.g.:

    If you check the ashift box, the plugin defaults to 12. Otherwise, ashift will be defaulted to whatever the zfs command line defaults to.

    I noted the check box behavior but wasn't sure otherwise. ashift 12 will take care of the vast majority of spinning drives and SSDs, so I'll have them check the box. I'll add a note and a reference for those who may want to go over ashift specifics. (As it seems, while it's the exception rather than the rule, ashift 13 is needed for 8K SSDs.)
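
    For anyone who wants to verify what a pool actually ended up with, one common check is the following sketch (the pool name "tank" is a placeholder):

        # ashift is recorded per top-level vdev in the pool configuration
        zdb -C tank | grep ashift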

    Where is this advice going to live? Most users are just going to install the zfs plugin without reading anything.

    I need to pick this up again. In times past, I've had issues with using the backports kernel & ZFS after a kernel upgrade. Ever since I moved to the Proxmox kernel, years ago, it's been smooth sailing.

    I had the beginnings of a ZFS doc going but, since ZFS seems to be thinly used, I didn't finish it. As it seems, it's time to look at finishing the doc.

    One question: The plugin's current default ashift value is 12, right?

    As trapexit said, first recreate your MergerFS pool with your existing three 10TB drives.

    After you recreate your pool, create a new Shared Folder, "but" do the following in the create window:

    - In the Name field: Don't "name" the share just yet.
    - Filesystem field: Select your MergerFS mount point.

    - In the Relative Path field: Follow the line all the way to the right. At the end of the line, you'll see a little directory tree icon. Click the icon.
    - In the drop down menu, pick one of your data folders. Click on the folder and Choose it. (In this sample case, the folder is "Documents".)
    - Permissions: (If you're unsure of what to use, go with Everyone read/write. This can be changed later.)




    Now that you know what data folder you're dealing with, go back to the name field and give this Shared Folder an appropriate name.
    Save the Shared Folder.

    ______________________________________________________________________

    - Repeat the above process, creating a Shared Folder for all your data folders.

    - If you were using SMB network shares, go -> here for guidance on setting up your SMB network shares again.

    So I number the parity drives 1 and 2, respectively, in SnapRAID. Is that correct?

    I'm not sure I'm following here. You can name them anything you want but, when I name drives, the name itself is an indicator of the drive's function. Along those lines, if you want to use two parity drives that have unique names, Parity1 and Parity2 makes sense to me.

    SnapRAID's "Split Parity" does work (I've tested it with a data restore). However, if you use split parity, you'd need to keep a close eye on the health of both drives. For this reason, I'm not a fan. My preference is to put the newest (healthyist) drive in the Parity role. After all without a solid Parity drive, after a data drive failure, there is no "restore". In any case, setting up SMART tests and filesystem reporting with E-mail notifications, for all hard drives, is a good idea.

    BTW: I take it you found the rsync guidance -> here ? (I could have sworn that I had the link right, above. In any case, I fixed the link.)

    By rebuild, you mean reinstall?

    Yep. Reinstall. It's the only way to be sure there's no software corruption. Also, you might give thought to using a new, good quality, thumbdrive for the rebuild. (And, this time around, consider -> Cloning your thumbdrive. That would mean buying 2 each of equal size.)


    Had another one today after adding a HDD and configuring storage, mergerfs and samba to include it.

    If "something" is wrong with software or the kernel, it would be hard to predict what the symptoms might be.

    On the other side of it, hardware-wise, adding an HDD (even a good one) puts additional load on your PS. If the PS is marginal and right at the edge... I hate to throw that one out to you, but it is a legitimate consideration, especially since this issue started with "adding a hard drive".

    If you're OK with booting from a thumbdrive (installing the flashmemory plugin is a must) and are willing to rebuild, cloning thumbdrives is dirt easy. -> Cloning flash media. (You might want to read over the -> OS backup section for general info.)

    Cloning flash media is an "off-line" process. However, once the backup process is complete, testing your cloned OS backup is "plug and play", a matter of 3 to 5 minutes. Recovery will be the same as testing your backup: shutdown, swap thumbdrives, boot up. Done.
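
    The linked guide covers the details, but a clone boils down to something like this sketch (the /dev/sdX source and /dev/sdY target are placeholders; getting them backwards will destroy the source, so verify with lsblk first, and do it with both sticks unmounted):

        # Identify the source and target sticks before doing anything
        lsblk

        # Byte-for-byte copy of the boot stick onto the spare
        dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync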

    Note that there are many ways to back up your OS that are arguably better, especially if you're running media server add-ons or numerous dockers from your boot drive. However, when it comes to simple OS backup methods, there are few that are as easy and fast, especially when you need to restore.

    BUT the first time this error occurred was after adding a data disk with a SMART "caution" warning, and that caused all kinds of problems. After I removed it, the system has been stable.

    The above is another matter and it could have, indeed, corrupted software. Hardware issues can cause bizarre software problems that have no rational explanation. Once memory is corrupted, and written back to the boot disk, it might not be possible to "fix it". Kernel corruption is what it is. Finally, note that at this point, all I have for you is speculation.

    A potential solution seems to be a filesystem repair, ref -> this article. Along those lines, it's highly likely that your thumbdrive is formatted EXT4. You could try booting into a live distro, like -> Knoppix, and running -> fsck. (You could pull your thumbdrive and do this at a workstation.) It might be worth a shot. Also, as noted in the article, to rule out a memory module problem, running -> memtest86 on your server is a good idea.
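
    From the live distro, a minimal sketch would be the following (assuming the OMV root partition on the thumbdrive shows up as /dev/sdX1; confirm the device with lsblk first and make sure it is not mounted):

        # Force a check of the EXT4 partition and automatically fix safe problems
        fsck.ext4 -f -p /dev/sdX1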

    ________________________________________________

    Otherwise, if you didn't get the BIOS warning before the disk incident, it might be time to rebuild. That's the only way to rule out software - a clean rebuild. If you rebuild or get this fixed, you might consider this -> cloning flash media.

    While I hate to throw this into the mix, the "BIOS Has Corrupted Hw-PMU Resources" message seems to be an HP server thing. Keep that in mind.

    What are you using for a boot drive? SSD, thumbdrive, or is it a spinning disk?


    The last thing I did in OMV was create a new shared folder for one of my data disks with rel. path "/" and enable file browser to access that share. I rebooted because file browser showed a blank screen in the browser.

    While corruption may have occurred around the time you took the above actions, I believe it's coincidental. With that said, your boot drive or data hard drive(s) may have a problem. Have you looked at their SMART stats?

    I deleted what I had in this post. It was too complicated.


    Do this.


    - First, format and mount your 2TB drives.

    - Do an rsync disk-to-disk copy of each of your 1TB drives. (1TB drive to a 2TB drive) There will be two copies to do.

    Details on how to set up an Rsync drive-to-drive copy are -> here.
    - In your case, do not use the --delete switch.
    - To ensure that all files are copied, when the command is complete, run it again. (On the SSH command line, the up arrow will bring up the last command.) All is done when you see "success" AND no files scroll by. (A rough command-line sketch follows the note below.)


    Each drive copy may take a while.
    (**Note: don't let users add or delete files while you're making these copies.**)
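
    For reference, a drive-to-drive copy of this sort boils down to something like the following (the two mount paths are placeholders; use the actual /srv/dev-disk-by-uuid-... paths of your source and destination drives, and note that the trailing slashes matter):

        # Copy the contents of the old 1TB drive onto the new 2TB drive,
        # preserving permissions, ownership, and timestamps
        rsync -av /srv/OLD-1TB-DRIVE/ /srv/NEW-2TB-DRIVE/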


    - At this point you might want to take a screen shot of your existing MergerFS array, under Storage, MergerFS. (This will provide you with "backout" information if needed.)


    - Add the mount points of the freshly copied 2TB drives, as found in the filesystem window, to your MergerFS array and remove the mount points of the 1TB drives. (Save)
    - Reboot.

    - Test your shares and other services attached to the array.


    - Deconfigure SnapRAID for the 1TB drives (both data and parity). (Save)

    - Configure SnapRAID for the 2TB drives that are now in your MergerFS pool, along with one or more drives for parity.


    Test for proper operation.

    Run a SnapRAID sync command.
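
    From the command line, that last step is simply the following (nothing to substitute, assuming the data and parity drives are already defined in snapraid.conf):

        # Build/refresh parity for the current contents of the data drives
        snapraid sync

        # Quick health overview afterwards
        snapraid status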

    Keeping in mind that this forum is about OMV:

    From that perspective (OMV), let's review what you've done.

    Docker is installed on OMV and working: check
    Pihole is working for devices on your LAN: check
    Tailscale is installed, along with OMV, and working: check


    Did I understand correctly that I need to create a subnet (https://tailscale.com/kb/1019/subnets)?

    With all of the above noted, wouldn't asking your question be more suited to the -> Tailscale forum?

    Apparently "BIOS Has Corrupted Hw-PMU Resources" is a fairly common issue with Linux. -> example thread. Do a net search on it. Perhaps there's a fix for your hardware.

    On the other hand, if your machine is reliably booting in a reasonable period of time, I wouldn't worry about it.

    I'll set up a github account soon so it might be easier to show more than a few files.

    This should be done sooner rather than later. Without being able to take a full look at what you're doing, it's doubtful that anyone will take the time.


    I know it is a lot to ask but maybe someone can take a look.

    Unfortunately, very few on this forum have real development experience. That includes me. Those that are experienced developers (two that I know of) are driving this project so they're really busy and that's an understatement.

    What I'm saying is, don't be surprised if you don't hear anything.