Posts by bbddpp

    Just wanted to say thank you for this post. I had the same situation trying to get this card working, and it was helpful. I'm not too familiar with power management on my box, but maybe it's time to be a little more attuned to the power it's drawing with all its drives. Anyway, the connection seems good, except I'm having trouble getting the OMV dashboard to display in the browser. I'm sure it's user error somehow, since all my other plugins are working.

    FYI, I have a buddy set up to use NVIDIA transcoding. Recently an update pulled a package from backports that broke his transcoding, since it was newer than the version the drivers expected. The solution was to disable backports and then reinstall the NVIDIA drivers.
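    The rough shape of the fix (my reconstruction, not his exact steps; the demo below edits a throwaway copy of a sources file, since the real /etc/apt/sources.list and apt need root):

```shell
# Demo on a throwaway file; in practice edit /etc/apt/sources.list
# (or whichever file under /etc/apt/sources.list.d/ holds the backports line).
SOURCES=./sources.list.demo
printf '%s\n' \
  'deb http://deb.debian.org/debian bookworm main' \
  'deb http://deb.debian.org/debian bookworm-backports main' > "$SOURCES"

# Disable backports by commenting its line out
sed -i '/backports/s/^deb/# deb/' "$SOURCES"
grep backports "$SOURCES"    # the line now starts with "# deb"

# Then (needs root):
# apt update
# apt install --reinstall nvidia-driver firmware-misc-nonfree
```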


    I personally use an Intel Arc A380. No driver-compiling nonsense or Docker toolkits like you have with NVIDIA.

    Thanks for the reply. I actually got it working using a post I found here; everyone in this community is super helpful. That said, it doesn't come into play too much: I'm mostly direct play, but it's nice to know I can watch remotely without issue. It was also useful when I was playing with Watch Together in Plex, before they decided to remove the feature. I'll have to look for an Arc A380 deal; it seems easier than this. I disabled backports to update to OMV7 and haven't turned them back on yet, and I can't even remember why I had them on for OMV6, since everything seems to work just fine with them disabled.

    This worked for me on OMV6 for now: installing the headers. Then everything else worked. For some reason my nginx proxy would not run if NVIDIA was not running (no idea when or how that became dependent on NVIDIA working, but whatever), so I needed to do something quickly.


    apt install linux-headers-amd64

    Still intend to do the OMV7 update over the weekend. Does anyone know if you should uninstall NVIDIA and/or turn off backports before you do it, for a smoother ride?

    Just saw this after starting my own thread. Exact same problems here. Hopefully the wizards can come up with a fix. The standard OMV6 guide I have been using for a couple of years now no longer works with the latest kernel, which is the same one you and I are on.

    This was working; with some reading I am seeing that every time the kernel updates, it will break. No big deal, I can reinstall, but this time it's not working.


    Following OMV6 guide here:


    [HowTo] Nvidia hardware transcoding on OMV 5 in a Plex docker container - Guides - openmediavault


    Errors in step: apt install nvidia-driver firmware-misc-nonfree


    I have backports off and kernel 6.1.0-0.deb11-18 running. Could use some help resolving this. No other changes to the rig other than software updates in OMV6; the NVIDIA drivers were working fine before some recent updates.
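    One thing worth checking when the driver build fails after a kernel update (generic Debian advice, not OMV-specific): whether headers matching the running kernel are actually installed, since the module can't be built without them. The dpkg/apt lines are commented because they need the real box:

```shell
# Print the kernel release and the headers package name it implies.
KVER="$(uname -r)"
echo "Running kernel:  ${KVER}"
echo "Headers needed:  linux-headers-${KVER}"

# On the box itself (needs dpkg/apt):
# dpkg -l "linux-headers-$(uname -r)"        # is it installed?
# apt install "linux-headers-$(uname -r)"    # or the linux-headers-amd64 metapackage
```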




    Please feel free to merge me in another thread if needed.

    OMV6 wants to install a bunch of NVIDIA package updates today. I recall this breaks any custom installation of NVIDIA hardware-accelerated drivers. Is there a list of packages we need to hold back?

    Disregard; I'll explain what I did wrong in case it helps anyone in the future. I replaced the bad drive with a new one, but SnapRAID wanted me to give it exactly the same name in the GUI as the drive I yanked. It's happier now. The UUID can be different, but the name given in the GUI needs to match the drive you pulled. Learning experience.
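    To make that concrete, here's a hypothetical snapraid.conf excerpt (the name and UUIDs are made up) showing the part that has to line up:

```
# /etc/snapraid.conf excerpt -- "d1" is the name that must be reused
# before the swap:
#   disk d1 /srv/dev-disk-by-uuid-1111-old-drive/
# after the swap (same name; a different UUID/path is fine):
disk d1 /srv/dev-disk-by-uuid-2222-new-drive/
```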

    Hi all,


    SnapRAID dummy here. Had a drive go bad, and instead of just replacing it I relocated my files to the remaining drives, which had plenty of space. Now SnapRAID is missing that drive and won't run, even though I removed it from the GUI. It wants a fresh drive so it can rebuild the one that was pulled.


    At this point I am OK with starting over and treating this as a lesson that a pulled drive must always be replaced and rebuilt, unless there is a simple step I am missing.


    If I wanted to start over, would I just delete the parity drive and/or the whole plugin and reinstall?
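    From what I've pieced together so far (unverified, and the paths below are made up), starting over shouldn't need a plugin reinstall: deleting the parity file(s) and content files listed in the config and re-running a sync rebuilds everything from the data disks. Everything is commented out because the deletes are destructive:

```shell
# Check /etc/snapraid.conf for the real "parity" and "content" entries first.
# rm /srv/dev-disk-by-uuid-PARITY/snapraid.parity
# rm /srv/dev-disk-by-uuid-DATA1/snapraid.content /var/snapraid.content
# snapraid sync    # recomputes parity and content from scratch
```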

    All right, appreciate the advice. I guess it was just bad luck that the newest drive I added happened to be the one that failed. It was getting used pretty much exclusively, since all the new media landed on it while the old drives sat there basically unused. But I guess that's just how it will work from now on, given how things were set up: unless we start watching a lot of old media, the newest drive in the pool is always going to get hammered the most.

    Thanks guys. I'm actually not looking to rebuild any drives. I didn't lose any data; I just rsynced all the content from the failing disk to the mergerFS pool before I removed the then-empty drive. I plan to run a snapraid sync after I pull the bad drive (not replacing it at all).
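    For the record, the way I understand SnapRAID handles dropping a data disk without a replacement (hedged; this is from the SnapRAID docs as I remember them, and the name and paths below are examples): point the pulled disk's config entry at an empty directory, then run a sync with the force-empty flag so it accepts that the disk is intentionally gone.

```
# /etc/snapraid.conf sketch -- "d2" and paths are made-up examples
disk d2 /var/lib/snapraid/empty-placeholder/

# then, acknowledging the intentionally empty disk:
# snapraid sync -E     # -E / --force-empty
```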


    When I later DO add new empty drives, what is the proper procedure to balance mergerFS so content is spread onto the new drives and they are not the only ones getting slammed with new media? I think that's what killed this drive. My other drives are almost full, so they weren't being written to at all, and all the new media I was adding landed on one drive. I'd much rather spread that across all my drives. Old media rarely gets accessed, so those drives just sit idle; balancing data usage equally across all my drives would help, I think.

    The big issue with drives and space for me has been that 2160p content is getting bigger, and I'm still a little too scared to convert everything down to x265. Some remux rips I have are huuuuuge.


    Recovery is going well, by the way. Since I had enough space without the bad drive in the pool, I decided to remove the bad drive from mergerFS and then use rsync to sync its content back into the pool. Did not touch SnapRAID, but I'll run a rebuild when it's done.


    When I do add a new drive or drives to replace the bad one, I think I need to run a mergerFS balance or something and then rebuild my SnapRAID. mergerFS was trying to fill the empty disk and slamming it with new content, since all my other drives were around 80% full and it was at 0%. My guess is that when you add an empty drive, you should tell mergerFS to balance the pool so that drive quickly gets as full as the others, and you continue using your volumes equally as new content is added. This one drive was doing all the work.
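    In case it helps, there seem to be two levers here (both hedged; policy names and paths are examples based on the mergerfs docs, not something I've verified on OMV): the `mergerfs.balance` script from mergerfs-tools does a one-shot rebalance, and the pool's create policy controls where new writes land. The most-free-space style policies are exactly what chases a fresh empty disk; a distributing policy like `rand` spreads new files across branches instead.

```
# One-shot rebalance (mergerfs-tools; moves files until branch usage evens out):
# mergerfs.balance /srv/mergerfs-pool

# Ongoing writes: fstab sketch with a distributing create policy
# (branch paths and mountpoint are made-up examples):
# /srv/dev-disk-by-uuid-AAA:/srv/dev-disk-by-uuid-BBB /srv/mergerfs-pool \
#     fuse.mergerfs defaults,category.create=rand 0 0
```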

    I appreciate all the great info here. It sounds like there are two schools of thought, one being to use SnapRAID itself to rebuild the drive (treat it as failed, remove it, and replace it with an empty new drive).


    Since I don’t have a new drive yet (I’m waiting for a sale) but still have enough space left overall in my mergerFS pool, I probably need to use the other method, which is rsync. I just need to make sure I run the right command; I’m not great at rsync and used to just use cp out of fear.


    The drive that appears to be on its way to failure is a 15 TB Western Digital drive.

    Jumping into the thread, as I want to do this too: be able to yank a failing drive from my mergerFS + SnapRAID pool.


    Here's what I think the steps are (and I am wondering why I couldn't just Wetty in and run the rsync command?):


    1. Disable snapraid in OMV GUI

    2. Set up a shared folder in the GUI for the UUID of the drive I am about to remove from the mergerFS pool

    3. Remove the failing drive from the mergerFS pool using the GUI

    4. Root into Wetty

    5. Run a special rsync command with the correct flags (a little lost here)

    rsync /SHAREDRIVE /MERGERFSPOOL

    But probably with a bunch of flags etc

    6. After rsync completes, delete the SHARE and UNMOUNT the failing volume/disk

    7. Yank the failing drive from my server physically.

    8. Re-enable Snapraid


    Did I cover this pretty well? I think I just need the rsync command and the rest will be pretty easy.

    I'm a little scared of data loss. I have plenty of free space to cover the removal of the drive (it's a 15 TB drive, and the mergerFS volume overall shows 20 TB of free space).


    I assume the first step may be to remove the drive from mergerFS (which I hope will then move all its content to the other available drives as part of the removal?), then remove it from SnapRAID?


    Is there a process documented somewhere? I am sure there is; I just have not found anything concrete, and I want to make sure I am careful and doing this right, especially since I do not have a new drive to swap in for the failing one just yet. But I want to get that failing drive out (SMART reports it as WARN status) as soon as possible.


    TIA!