Posts by bbddpp

    OMV6 wants to install a bunch of nvidia package updates today. I recall this breaks any custom installation of nvidia hardware accelerated drivers. Is there a list of packages we need to keep back?
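In case it helps, the usual Debian mechanism for keeping packages back is `apt-mark hold`. The package names below are guesses on my part (check what is actually installed with `dpkg -l | grep -i nvidia` first); this sketch just prints the hold commands so they can be reviewed before running them as root:

```shell
# Hypothetical package names -- list what you actually have first with:
#   dpkg -l | grep -i nvidia
PKGS="nvidia-driver nvidia-kernel-dkms libnvidia-encode1 nvidia-container-toolkit"

# Build the hold commands instead of running them, so they can be
# reviewed (and then run as root) on the real box.
HOLD_CMDS=""
for p in $PKGS; do
  HOLD_CMDS="$HOLD_CMDS
apt-mark hold $p"
done
echo "$HOLD_CMDS"
```

Held packages show up under `apt-mark showhold`, and `apt-mark unhold <pkg>` reverses it when you are ready to upgrade deliberately.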

    Disregard, I'll explain what I did wrong in case it helps anyone in the future. I replaced the bad drive with a new drive but SnapRaid wanted me to name it exactly the same in the GUI as the drive I yanked. It's happier now. The UUID can be different but the name given in the GUI needs to match the drive you pulled to make it happy. Learning experience.
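For anyone who hits the same thing: in the underlying snapraid.conf, SnapRAID identifies each data disk by the name on its `disk` line, not by UUID, so a replacement drive has to reuse the old name even though it mounts at a new UUID path. A hypothetical fragment (names and paths made up):

```
# The name ("d2") must stay the same across a replacement;
# only the mount path / UUID changes.
disk d1 /srv/dev-disk-by-uuid-aaaa1111/
disk d2 /srv/dev-disk-by-uuid-bbbb2222/   # <- new drive, old name "d2"
parity /srv/dev-disk-by-uuid-cccc3333/snapraid.parity
```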

    Hi all,

    Snapraid dummy here. Had a drive go bad, and instead of just replacing it I relocated my files to the remaining drives, which had plenty of space. Now snapraid is missing that drive and won't run, even though I removed it from the GUI. It wants a fresh drive so it can rebuild the one that was pulled.

    At this point I am OK with starting over and using this as a learning lesson that a pulled drive must always be replaced and rebuilt. Unless there is a simple step I am missing.

    If I wanted to start over, would I just delete the parity drive and/or the whole plugin and reinstall?

    All right, appreciate the advice. I guess it was just bad luck that the newest drive I added happened to be the one that failed. I was using it pretty much exclusively, since it was getting all the new media while the old drives sat there basically unused. But I guess that's just how it will work from now on, given how things are set up. Unless we start watching a lot of old media, the newest drive in the pool is always going to get hammered the most.

    Thanks guys. I'm actually not looking to rebuild any drives. I didn't lose any data; I just rsynced all the content from the failing disk to the mergerFS pool before I removed the then-empty drive. I plan to run a snapraid sync after I pull the bad drive (not replacing it at all).

    When I later DO add new empty drives, what is the proper procedure to balance mergerFS so the new drives get populated with content and aren't the only ones getting slammed with new media? I think that's what killed this drive. My other drives are all almost full, so they weren't being used at all, and all the new media I was adding landed on one drive. I'd much rather have that spread out across all my drives. Old media rarely gets accessed, so those drives just sit idle. Balancing data usage across all my drives would help with that, I think.
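On the balancing question: the mergerfs-tools package ships a `mergerfs.balance` script that moves files between branches until their free-space percentages converge. As a toy sketch of the idea only (throwaway directories stand in for the real pool branches; this is not the actual tool):

```shell
# Toy model of what mergerfs.balance does: move files from the fullest
# branch to the emptiest until usage evens out.
B1=$(mktemp -d)   # stands in for a nearly-full branch
B2=$(mktemp -d)   # stands in for a freshly added empty branch
for i in 1 2 3 4; do echo data > "$B1/file$i"; done

# Move half of the full branch's files over to the empty one.
count=$(ls "$B1" | wc -l)
ls "$B1" | head -n $((count / 2)) | while read -r f; do
  mv "$B1/$f" "$B2/"
done
```

The real invocation is `mergerfs.balance /path/to/pool`. Also worth checking is the pool's create policy: `mfs` (most free space) is what sends every new write to the emptiest drive, which sounds like exactly what happened here.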

    The big issue with drives and space for me has been 2160p content is getting bigger and I'm still a little too scared to convert everything down to x265. Some remux rips I have are huuuuuge.

    Recovery going well by the way. Since I had enough space without the bad drive in the pool, I decided to remove the bad drive from mergerfs and then use rsync to sync the content to the mergerfs pool. Did not touch snapraid but I'll run a rebuild when it's done.

    When I do add a new drive or drives to replace the bad one, I think I need to run a mergerfs balance or something and then rebuild my snapraid parity. Mergerfs was trying to fill the empty disk and slamming it with new content, since all my other drives were around 80% full and it was 0% full. My guess is that when you add an empty drive, you should tell mergerfs to balance the pool so that drive quickly gets as full as the others, and you keep using your volumes equally as new content is added. This one drive was doing all the work.

    I appreciate all the great info here. It sounds like there are two schools of thought, one being to use snapraid itself to rebuild the drive (treat it as failed, remove it, and replace it with an empty new drive).

    Since I don't have a new drive yet (I'm waiting for a sale) but still have enough space left overall in my mergerFS pool, I probably need to use the other method, which is the rsync. I just need to make sure I run the right command; I'm not great at rsync and used to just use cp out of fear.

    The drive that appears to be on its way to failure is a 15 TB Western Digital drive.

    Jumping into the thread as I want to do this too to be able to yank a failing drive from my mergerfs snapraid pool.

    Here's what I am thinking the steps are (and I am wondering why I can't just Wetty in and run the rsync command?)

    1. Disable snapraid in OMV GUI

    2. Set up a share for the UUID of the drive I am about to remove from the mergerFS pool in the GUI

    3. Remove the failing drive from the mergerFS pool using the GUI

    4. Root into Wetty

    5. Run the rsync command with the correct flags (a little lost here, but probably with a bunch of flags, etc.)

    6. After rsync completes, delete the SHARE and UNMOUNT the failing volume/disk

    7. Yank the failing drive from my server physically.

    8. Re-enable Snapraid

    Did I cover this pretty well? I think I just need the rsync command and the rest will be pretty easy.

    I'm a little scared of data loss, but I have plenty of free space to cover the removal of the drive (it's a 15 TB drive and the mergerFS volume overall shows 20 TB of free space).

    I assume the first step may be to remove the drive from mergerFS (which I hope will then move all its content to the other available drives as part of that removal?) and then remove it from SnapRAID?

    Is there a process documented somewhere? I am sure there is, I just have not found anything concrete, and I want to make sure I am careful and doing this right, especially since I do not have a new drive to swap in for the failing one just yet. But I want to get that failing drive (SMART reports it as WARN status) out as soon as possible.


    I am hoping there's a setting somewhere I can adjust to fix this.

    Lately, I've noticed that my mergerfs snapraid setup (one pool) causes Plex playback to fail when a background task like copying or moving a file (such as a new media download) takes place. It's a minor inconvenience, just a couple of minutes of freezing/buffering, but enough to be annoying.

    Just wanted to make sure there isn't a setting somewhere I could check or adjust to try and mitigate things. It seems pretty apparent that my storage gets hammered when a file is being moved around, so if I can't avoid that, maybe there's a way to increase the Plex buffer to handle it instead of the automatic setting?
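Not an OMV or Plex setting as such, but one common mitigation (an assumption on my part, not something from the Plex docs) is to run bulk copies/moves at idle I/O priority so streaming reads win the disk. Toy demonstration with throwaway paths standing in for a real media move:

```shell
# Throwaway stand-ins for a real bulk media move:
SRC=$(mktemp -d); DST=$(mktemp -d)
dd if=/dev/zero of="$SRC/big.bin" bs=1M count=4 status=none

# ionice -c3 puts the copy in the "idle" I/O class, so it only touches the
# disk when nothing else (e.g. a Plex stream) wants it; nice -n 19 lowers
# CPU priority as well.
ionice -c3 nice -n 19 cp "$SRC/big.bin" "$DST/big.bin"
```

If the downloader or mover is a Docker container, the same idea applies via the container's own scheduling options.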

    I run the latest OMV6 kernel, and Linux 6.1.0-0.deb11.6-amd64 seems to be working for me. However, it's gone bad on me twice, and I am not sure why it went bad the second time (the first time, I let OMV update all the nvidia stuff to newer packages and that borked it good).

    It might not be a bad idea to add extra steps telling OMV to never update certain packages/components. Since we are using older nvidia drivers on OMV6, OMV6 keeps wanting to update those packages. I thought I had blocked updates on them, but something went wrong and my Plex eventually stopped working until I stopped running the nvidia env version. Once I ran through all the steps again, it's back, just not sure for how long.

    Major kudos for the guide in the other thread, big fan of it and how it takes everything one step at a time with exactly what is needed and an explanation. The world needs more guides like this.

    My guess is that I'm supposed to ignore any nvidia updates that OMV offers me after installing everything in the guide here:

    [HowTo] Nvidia hardware transcoding on OMV 5 in a Plex docker container - Guides - openmediavault

    I followed the OMV6 steps... It was working well for a week or so, until I allowed the OMV update process to update a lot of the nvidia packages.

    Now I see this when I attempt to run the plex docker:

    failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: initialization error: nvml error: driver not loaded: unknown

    Is there an easy fix or am I basically just uninstalling everything now and starting over?

    It may also be a good idea to add something to the guide warning us not to allow OMV to update any nvidia packages, since that breaks the install, if we can figure out which updates broke my working Plex docker.

    Thanks, will research the steps to make this possible on an existing install - pretty easy I assume. Can't believe I set this up the wrong way when I did my fresh install unless this is what all the tutorials have you do.

    My OS disk is a dedicated 120 GB SSD, which I never thought would run out of space. Turns out Docker images quickly suck up space, since the old ones hang around for so long.
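For reference, the usual fix is twofold: reclaim space now with `docker system prune` (removes stopped containers, dangling images, and unused networks), and point Docker's data directory at a data disk so the OS SSD stays small. The latter goes in `/etc/docker/daemon.json` (path below is hypothetical; restart Docker afterwards):

```json
{
  "data-root": "/srv/dev-disk-by-uuid-XXXX/docker"
}
```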

    So having one of those days. Could use some help. Literally went on vacation for a week and came back to an OMV webgui that won't let me login.

    The login screen displays correctly, and prompts with an error message when the user/password is typed incorrectly. However, when the proper user/password is used (admin or my main user), it just "redirects" to the login screen again instead of logging in.

    I'm running OMV on port 89. Updates run automatically via a nightly cron job, so maybe an update set it off.

    Any commands or info I can provide that might help troubleshoot the issue?

    The system is running well, I just can't seem to login to WebGUI from any browser anymore.

    Thanks. That's the plan then; I'll run it right on the server instead of through SSH.

    I'll report back how long a much larger sync takes if anyone cares. It's going to be a lot.

    I've just migrated a ton of content to my mergerfs pool and am now ready to run my first snapraid sync. I have read up on keeping things tidy with snapscript and whatnot, but would like to just run it manually for the first time.

    That said, I see a way to kick it off (I think) via the GUI but PCs crash, restart, etc, and I'm guessing the first parity build is going to take several days if not more.

    Does anyone have a recommendation on how to build the first parity safely? Should I be logging in locally at the server and running a command line job to build the parity drive this first time?

    Appreciate any info, sorry if it's a dum-dum question. This will be the first time I actually bothered to have any type of RAID on my server other than JBOD so I'm learning.
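One safe pattern for a multi-day first sync is to detach the job from the terminal with nohup (or screen/tmux), so a dropped SSH session or local logout doesn't kill it. Sketch below, with a short sleep standing in for the real multi-day snapraid sync:

```shell
# Stand-in for the long-running job; on the server this would be:
#   nohup snapraid sync > /root/snapraid-sync.log 2>&1 &
LOG=$(mktemp)
nohup sh -c 'sleep 1; echo "sync finished"' > "$LOG" 2>&1 &
JOB=$!

# The shell can now exit safely; the job keeps running in the background.
# Progress can be followed with:  tail -f "$LOG"
wait "$JOB"
cat "$LOG"
```

This doesn't protect against a crash or power loss mid-sync, but snapraid itself can simply be re-run in that case; the bigger risk is accidentally killing a terminal-attached job partway through.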