Posts by jollyrogr

    No. MergerFS will not move drive content to the other drives.

    The process is:
    1. Recover a failed drive, using SNAPRAID, as outlined in the plugin doc under Recovery Operations. That means physically removing the failed drive, adding a new drive, and running the fix command. (See the doc.)

    2. At the end of the SNAPRAID doc, there's help on how to remove the old drive entry from MergerFS and how to add the replacement drive to the MergerFS pool.

    I've only had to do this once, and I replaced the drive before it totally died. So I rsynced everything from the bad drive to the new one, swapped the old drive for the new one in the snapraid config, and ran the fix command, which then did not take very long. Finally, I updated the mergerfs config.
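    For anyone following along, the fix-after-replacement sequence can be sketched as a few commands. This is only an illustration: the config path and the disk name "d1" are placeholders, not taken from a real setup, and the script prints the commands instead of running them unless you flip DRY_RUN.

```shell
#!/bin/sh
# Hypothetical sketch of the recovery steps after physically swapping
# the failed disk. CONF and the disk name "d1" are placeholders; use
# the names from your own snapraid config.
CONF=/etc/snapraid/omv-snapraid-xxxx.conf

# Print each command; only execute it when DRY_RUN=0 is set.
run() { echo "+ $*"; [ "${DRY_RUN:-1}" = 1 ] || "$@"; }

run snapraid -c "$CONF" fix -d d1    # rebuild the replaced disk from parity
run snapraid -c "$CONF" check -d d1  # verify the recovered files
run snapraid -c "$CONF" sync         # bring parity back up to date
```

    With DRY_RUN left at its default it only prints the plan; run it with DRY_RUN=0 to actually execute the commands.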

    Thank you for pointing this out. This is absolutely essential to understand when using mergerfs and Snapraid together. I had exactly that problem and only realised when it was too late. I had 5x500GB data disks and 2x 1TB parity disks. When I deleted c. 1TB of media files (all jpgs and mp3/mp4s) which were scattered on 3 HDs, I was only able to recover around 60% of the data. All syncs had run perfectly well beforehand, so I did not understand why my data was not recoverable, even with 2 parity disks. It seems to be related to the point you are describing. What this means is that the two packages should never be used together and especially not in mergerfs MostFreeSpace mode.

    They can be used together, you just have to understand what you're doing.

    I use mergerfs in read-only mode. The benefit is that a media player needs to mount only one share to see all files, and the user doesn't have to browse through multiple shares to find what they're looking for.

    When writing files to the NAS, I copy directly to the individual disk via the discrete samba shares. This ensures that I know which disk the files are on, and it is up to me to manage free space. After ensuring a successful write, I do a snapraid sync.
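    That read-only pool can be expressed as a single mergerfs line in /etc/fstab. The branch paths and mount point below are invented examples, so substitute your own disk paths:

```
# /etc/fstab (example only; branches and mount point are placeholders)
/srv/disk1:/srv/disk2:/srv/disk3  /srv/pool  fuse.mergerfs  defaults,ro  0 0
```

    Samba then exports /srv/pool for browsing plus each /srv/diskN individually for writes.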

    That command is calling the snapraid binary itself which I didn't write or maintain. You could write a wrapper script that calls the snapraid binary and looks up which config file to call. Or you could just use the web interface since it does all of that for you.
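    A wrapper along those lines could be as simple as a shell function that maps a friendly array name to the plugin's UUID-named config and hands everything else to snapraid. The name-to-file mapping here is an assumption for illustration (the UUID is just the one quoted elsewhere in this thread), and the function echoes the command rather than running it:

```shell
# Hypothetical wrapper: map a friendly name ("media") to the plugin's
# UUID-named config, then pass the remaining arguments to snapraid.
# The name-to-file mapping is an assumption; fill in your own arrays.
snap_conf() {
    case "$1" in
        media) echo "/etc/snapraid/omv-snapraid-7ebf61f2-7370-4152-bf78-6a2e01249e01.conf" ;;
        *)     echo "/etc/snapraid.conf" ;;
    esac
}

snap() {
    conf=$(snap_conf "$1")
    shift
    echo snapraid -c "$conf" "$@"   # drop the 'echo' to actually run snapraid
}

snap media sync   # prints the full snapraid command for the "media" array
```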

    I don't think it has anything to do with the snapraid binary. Using the -c flag you can point it at /foo/bar.conf, and it should work as long as that file is a snapraid config file. In this scenario, it's the plugin that names the conf, right?

    Any command you would typically run needs to include the -c argument with the snapraid config file path. So, when you run

    sudo snapraid sync

    it will now look like:

    sudo snapraid -c /etc/snapraid/omv-snapraid-7ebf61f2-7370-4152-bf78-6a2e01249e01.conf sync

    I see; without the -c flag it will still look for /etc/snapraid.conf.

    Any way to shorten that conf name? Perhaps make it user adjustable, "array1", "array2", "media", "images", etc.? I guess that might break the diff script, but I don't use that anyway ;)

    On OMV 6.x, 6.1 was the last single-array version. 6.2.x had multiple-array and split-parity support, but that version was only available in the testing repo, and I could hardly get anyone to test it. When I ported the plugin to OMV 7.x, I wasn't going to throw this work away. So, it is the 6.2.x version.

    What do you want to know? Each array has a snapraid config in /etc/snapraid/ in the form omv-snapraid-40e7a036-26a5-4cb0-bbc2-66d257e15dbb.conf. I updated the diff script to take the uuid as an argument. That is why you see the for loop in the OP's cron job. /etc/snapraid.conf is ignored. Commands run from the Arrays tab, like sync, check, and scrub, require you to select an array to run them on.
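    The for loop in question can be sketched like this; the directory and file pattern follow the naming described above, but verify them on your own system, and the function echoes each command instead of running it:

```shell
# Hypothetical sketch of the cron job's for loop: run one snapraid
# command per UUID-named array config. Directory and pattern are
# assumptions based on the plugin's naming; check your own install.
snap_all() {
    dir=$1
    cmd=$2
    for conf in "$dir"/omv-snapraid-*.conf; do
        [ -e "$conf" ] || continue            # glob matched nothing
        echo snapraid -c "$conf" "$cmd"       # drop 'echo' to really run
    done
}

snap_all /etc/snapraid sync   # prints one sync command per array
```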

    Cool. So when I upgrade to 7, I can look forward to multiple array support. I have data that is not included in the large media array that I would probably create a separate array for. How does this work with the command line? I typically do all my snapraid maintenance via SSH terminal.

    I'm running OMV6 but my snapraid.conf is at /etc/snapraid.conf, and the diff script also looks for it there. If the path for the snapraid conf has changed in OMV7, perhaps the diff script just needs to be updated with the new filename and path.

    Update. The plugin UI is now working as expected. I'm not sure how or why. I updated my network config and applied the changes and now the plugin works. I'm wondering if whatever routine runs when you apply changes did something.

    Have you tried to fix the issue by clearing the browser cache?

    Can not reproduce that with Chrome Version 120.0.6099.109 (Official Build) (64-bit) or Firefox 121.0 (64-bit).

    Tried clearing the cache, tried Ctrl-Shift-R, tried more browsers, different computers, etc. - all behave the same.

    Firefox is 120.0.1-1 which is latest in archlinux repository. Chromium is 120.0.6099.109-1 which is also the latest.

    Can not reproduce that. What browser are you using? And what exact openmediavault version are you using?


    I had same results with Brave and Firefox and just confirmed with Brave and Safari on my phone as well. I have two OMV machines, both behave the same. OMV is 6.9.10-4, kernel is 6.1.0-0-deb11.13-amd64

    Just wanted to note that the UI for the symlinks plugin seems to have a bug. When you select the tree the list goes past the bottom of what appears to be a box and you cannot scroll down. You can still get to where you want to go by typing "srv" in the box; you just can't see the entire tree if you're at root level.


    Absolutely. You can add/remove drives at any time without destroying any data, and you can even mix different-size drives. The only caveat is that your parity disk needs to be at least as big as your largest data disk. I'm currently utilizing 7x4TB + 2 parity. As I increase the array size, I will eventually add another parity disk.


    https://www.snapraid.it/ is the website. Lots of good info there.

    Don't use RAID unless you need high availability. I use and recommend Snapraid along with mergerfs. With Snapraid, if a drive fails the most you would possibly lose is the data on that particular drive. But that's only if you also lost all parity.

    As I have said, use what you want, but don't think Proxmox is a different KVM than OMV. Proxmox has many more features and might do some things better, but it is still doing the same things underneath the covers.

    Yep. I like those features and I will use what I want :)

    Would you think it would be absurd if each of those applications was running in its own VM all on the same server?

    No, assuming the server hardware was capable.

    If you made a Clonezilla (or ddfull in the backup plugin) image of OMV once you had it set up, then if your media failed, you just restore that image, update, and maybe tweak a couple of settings. OMV doesn't have to be harder.

    You're not wrong. It's a choice. The way I see it, both are Debian and very similar under the hood, but PVE is a feature-rich hypervisor that some folks like to turn into a NAS, and OMV is a feature-rich NAS that some folks like to turn into a hypervisor. I guess I choose to use the systems for the purpose for which they are specifically built. As an absurd example, it's possible to build a Linux-based router that could also be a NAS, VPN server, media server, etc., etc., but to me that's a horrible idea.

    Backup plugin but I rarely use the backup.

    I don't. Anything that I am unsure about is tested in a VM. But since I do minimal things in OMV's OS, this isn't an issue. I don't remember the last time I did something even to a testing install that I couldn't fix though.

    Proxmox is the exact same Debian userland as OMV. How are you not breaking things in Proxmox? If you aren't installing things on the bare metal OS, you should apply that same thinking to OMV when it is installed on bare metal.

    All good points. Running OMV on bare metal would have the minor advantage for me of not needing to do hardware passthrough of the HBA, though that is not difficult to configure. I don't mess with Proxmox other than installing updates, but if something gets messed up, it's easier for me to install Proxmox and then bring my VMs back up than to reinstall OMV and reconfigure everything in OMV. Just recently I had to do this because the system drive in the server was starting to fail. Put in the new drive, install Proxmox and configure hardware passthrough, copy the VMs from the backup server, start them up, done. The most annoying part is hooking up a monitor and keyboard to the server. It would be nice to get a KVM for the rack to make that easier too.

    So I could run OMV on bare metal (I used to back in the days of OMV 2 and before I upgraded servers), but I'm not going to any more.

    Run OMV on bare metal. Install the kvm plugin. Run additional OMV setups in VMs for playing/dev. I have done this for years.

    How do you back up your OMV installation? Suppose you do something to brick your OMV; how do you recover quickly? The reason I like running OMV as a VM is the ability to schedule backups and, if something breaks, to restore a running server in seconds.