Posts by Shadow Wizard

    mergerfs will show what is available to it. You should confirm your setup. If the 14TB drive's filesystem is in the pool and you have proper perms then it should show.

    Okay. So I can confirm it's in the pool (as stated), and that I don't have access to it through the pool (as stated). And unless removing drives removed permissions, AND somehow permissions can be taken away from root (all SSH tests were performed as root), then all of that has been confirmed, and I still don't have access to the files.

    So how do we fix this?

    So, I have 3 drives within my mergerfs pool: 2x12 TB (they are both empty) and 1x14 TB. Due to a hardware failure, I have temporarily removed both of the 12 TB drives. I have confirmed that the 14 TB drive has the files on it I am looking for. I have confirmed this both by SSH and by browsing the share for the 14 TB drive only, and confirmed it has the same dev-disk-by-uuid number.

    Now here is the interesting part. All the directories show (mind you there are only 3 of them, some nested), but not the files within them. This is confirmed by both SSH and the shared folder.

    Is this normal behavior? I wouldn't expect it to be. What causes this, and is there an easy fix I can use for just a couple of days? I just have a few Docker containers that access the mergerfs pool that I would like to use.
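
    In case it helps anyone suggest something, these are the kinds of checks I can run over SSH (the paths below are placeholders for my actual by-uuid mounts and pool):

        # is the pool actually mounted, and with which branches?
        findmnt -t fuse.mergerfs
        # compare the same directory on the raw 14 TB disk and through the pool
        ls -la /srv/dev-disk-by-uuid-XXXX/media/tv
        ls -la /srv/mergerfs/pool/media/tv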

    So after taking it apart, 99% it's the PSU. I moved the boards around, checked the behavior of the PSUs even when they weren't in the unit, and everything points to the PSU.
    Now to decide if I wanna buy a new PSU... or a whole new unit for only twice the price, which would leave me spare parts... or just modify the whole unit to hold disks and connect them with SFF-8088 to SATA cables... Oh, the choices.

    Thank you for the help and suggestion.

    That's what I was thinking too. But from my understanding, most of this enterprise stuff has redundant PSUs. So if that were the case, that it was just the PSU, I should just be able to unplug the faulty one and move along. There are only 5 drives in the unit, which is designed to hold 15, so I am quite sure it can't be drawing more power than a single PSU can handle. I think I am gonna rip it apart tonight and see what I can see.

    Today I got a notice from my server telling me drives were no longer available. I got home and it was running very loud. In the back, one of the power supplies will only show yellow (even when plugged in on its own), and when only that one is plugged in, no noise is made. When the other is plugged in, the fans seem to go at 100% and the back LED segments all flash dashes (the centre segments).

    When I power it on, the drives that are in it turn green, and for a bit I can access them. Then they turn off, and I am no longer able to access them.

    Does anyone have any experience troubleshooting these things?

    I tried posting this on Reddit, but the overzealous filters deleted it.

    So I have managed to find a resolution to this, but I don't know if it's the correct resolution. In the CIFS shares section, I checked the "Inherit Permissions" checkbox and set the folder itself to 777.

    I don't know if this is the best way to do it, but it is working.

    If anyone can suggest a better way, or a reason I should not do it this way (I am open to constructive criticism) please feel free to pipe in.
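
    In case it helps anyone later, my understanding is that the checkbox plus the chmod roughly corresponds to something like this in the share's smb.conf section (the share name and path are placeholders from my setup, and I'm not certain this is exactly what OMV generates):

        [downloads]
            path = /srv/mergerfs/pool/downloads
            read only = no
            ; what the "Inherit Permissions" checkbox toggles, as I understand it
            inherit permissions = yes
            ; a possibly less blunt alternative to chmod 777 on the folder:
            ; make files/dirs created through Samba group-writable
            create mask = 0775
            force create mode = 0664
            directory mask = 0775
            force directory mode = 0775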

    I hope I can ask this here...

    Okay, I have a directory, let's call it /downloads, set up as a CIFS share (because that's what I got working).

    I have another Linux box (Debian, if it matters, because that's what I set up) that has mapped that share to /shares/downloads.

    When the Debian box writes a file to /shares/downloads, it seems to be created with permissions rwxr-xr-x (755).

    This seems to prevent users on the OMV system from writing to those files.


    Let's do the long version, in case I wasn't clear enough.

    On the OMV system, I have Plex and Radarr set up in Docker containers using Portainer (not what is built into OMV). They run as user 1000. On the Debian system, I have NZBGet set up, and it writes the files to the shared folder over the network. When Radarr goes to import the file (which, for clarity, is on that physical system), it is able to read and import it, but it is denied permission to delete it, as it doesn't have write permission.

    Other than running Radarr as root, what is the fix, or what are some of the fixes, for this?
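
    For what it's worth, one thing I'm considering trying on the OMV (server) side, assuming the share can be tweaked in smb.conf and that uid 1000 maps to a user I'll call appuser here (a made-up name), is forcing ownership on files written over the network:

        [downloads]
            path = /srv/downloads              ; placeholder path
            read only = no
            ; files written by the Debian box end up owned by the same account
            ; Radarr runs as (uid 1000), so it can delete them after import
            force user = appuser
            create mask = 0664
            directory mask = 0775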

    I don't know the configuration of your tunnel in Docker at home, so I can't say what you can do with that tunnel or how a client can connect to it. To connect two servers, I suggest you follow the instructions in the link I gave you.

    There are many ways to configure WireGuard tunnels. If you want to go deeper, you can read more here: https://www.procustodibus.com/…/10/wireguard-topologies/

    Unfortunately I can't follow those directions, as they assume I have WireGuard set up in OMV on both systems. I will play around when I get the chance, and hopefully it will all work :)

    Can't seem to find an answer here.

    Let's say drive 1 has 10 TB of space free, and has the directory structure /media/tv

    And drive 2 has 9 TB of free space, and has the directory structure /media/tv/simpsons


    And I save the file "The simpsons.mpg" to /media/tv/simpsons

    Does it create the simpsons directory on drive 1 and save the file there, because that drive has more space and has the same root folder? Or does it save it to drive 2, because drive 2 has the simpsons directory, the deepest existing path? And to take that further (if the latter is correct), does that mean that if I manually access drive 1 and create /media/tv/simpsons before copying the file, it will in fact go to drive 1, as it has more space and now also has the simpsons directory?
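
    From what I've been able to piece together, this depends on the pool's "create policy". If I'm reading the mergerfs docs right, the default is epmfs ("existing path, most free space"), which would send the file to drive 2 in my example because it's the only drive that already has /media/tv/simpsons, and pre-creating that directory on drive 1 would then make drive 1 win because it has more free space. Can someone confirm? A made-up fstab line just to show where the policy lives (paths are placeholders):

        # category.create=epmfs -> only branches that already contain the parent directory
        # are candidates, then the one with the most free space wins.
        # category.create=mfs  -> most free space; existing paths are ignored.
        /srv/dev-disk-by-uuid-AAA:/srv/dev-disk-by-uuid-BBB  /srv/mergerfs/pool  fuse.mergerfs  defaults,allow_other,category.create=epmfs,minfreespace=50G  0 0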

    I currently have OMV6 and have set up the WireGuard plugin. It is working wonderfully as a server, but I also need it to connect to another server as a client. How can I do this?

    In the event I am not explaining myself clearly, here is exactly what I have, and what I want to do.

    I have OMV set up at work; it runs the WireGuard plugin. It has several clients set up that work fine. I can connect to the WireGuard server at work with my phone and access the files on it.

    I also have WireGuard set up at home as a server (in its own Docker container). It also has several clients and works fine. I can connect to my home WireGuard from my phone and access my files from home.


    What I want to do is create a config file from my WireGuard at home (like I have done so my phone can connect to my home network) and import that config into WireGuard at work, so my work OMV can access my network at home.
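
    To make it concrete, this is the sort of config file I mean, the kind I'd export from the home side and then (hopefully) import at work. The keys, addresses, subnets, and endpoint below are made-up placeholders:

        # work-to-home.conf - to be brought up on the work OMV as an extra WireGuard interface
        [Interface]
        PrivateKey = <private key generated for the work box>
        Address = 10.8.0.5/24                       # an address inside the home VPN subnet

        [Peer]
        PublicKey = <public key of the home WireGuard server>
        Endpoint = home.example.com:51820           # public address/port of my home server
        AllowedIPs = 10.8.0.0/24, 192.168.1.0/24    # home VPN subnet plus home LAN
        PersistentKeepalive = 25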

    It was an 8TB disk and all the data on it was lost. I think there were about nine or ten drives in the array at the time. I don't have an exact time figure for the recovery but it was an overnight thing.


    I should mention that SnapRAID will not save and recover metadata such as permissions, ownership, and extended attributes. In my case all the files and directories on the disks protected by SnapRAID have the same ownership and permissions so it is easy to reset them to the proper values after recovery.
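
    Something along these lines, with the path, owner, and modes being placeholders for whatever a given setup actually uses:

        # after snapraid fix has restored the files onto the replacement disk
        chown -R myuser:users /srv/dev-disk-by-uuid-NEWDISK
        find /srv/dev-disk-by-uuid-NEWDISK -type d -exec chmod 775 {} +
        find /srv/dev-disk-by-uuid-NEWDISK -type f -exec chmod 664 {} +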

    Naw, I don't need any permissions or anything. It's gonna be 95% media, I am basically the only one that uses the server, and all of my important/private files are on a ZFS filesystem with a fault tolerance of 2 drives. If I have to recover, doing a chmod -R 777 will work just fine for what I will be storing on it.

    So, I guess the next (and maybe final, who knows) question: is the "redundancy" created with the parity drive done automatically and instantly? Or is that what this scrubbing is for? I have read no fewer than 5 documents detailing scrubbing, and I still don't understand it.

    Basically, with RAID5, if I lose a drive 1 second after I have written a file, the file is still 100% recoverable. Is the same true with SnapRAID (somewhere I got the impression it wasn't), or does it take time? And how much time (assuming this 'scrubbing' isn't what creates it)?
    Someone really needs to write a "What your average user needs to know about scrubbing, without a bunch of technical details your average user doesn't care about/can't understand" document, lol.
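
    Just so I can check my understanding in concrete terms (corrections welcome):

        # parity is only computed when a sync runs; files written since the last
        # sync are NOT protected yet
        snapraid sync
        # scrub re-reads a slice of data that was already synced and verifies it
        # against parity to catch silent corruption; it does not protect new files
        snapraid scrub -p 8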

    I had one disk fail a few years ago. SnapRAID allowed it to be fully recovered.

    Yep, I think we all will experience a failure. It's just a matter of time. Do you mind if I ask how long the "rebuild" took? How many drives were in the array, and how much data needed to be recovered? I have a really hard time with terms like "a long time", because a long time in the computer world can be 10 seconds... or 48 hours, depending on what one is doing.

    Yes, that's what I remember and read. But mergerfs doesn't have any redundancy built in, does it? All the redundancy would come from installing the separate, unrelated SnapRAID?

    And if that is correct, and a drive is lost that is "protected" with SnapRAID, is the data still fully accessible (like it is with RAID5, for example), where you can read/write to the array even with a faulty/missing drive in the exact same way you would read/write to a fully intact array? Or does the drive need to be replaced and "rebuilt" before the data can be accessed?

    Okay, I think I am starting to remember now. Using SnapRAID with mergerfs gives me a single mount point, with the backup/redundancy of a drive.

    So, if I understand these correctly, even using both SnapRAID and mergerfs, drive loss beyond the tolerance level only results in data loss on the specific drives lost?

    And then another related question: with mergerfs, can I, if I want to, still mount/access the points separately? For example, if I want to specifically select a drive to store some data on, can I just write it to that drive's mount point, and if I want mergerfs to handle where to put things, store it via the mergerfs mount point?
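
    Something like this is what I have in mind (paths are placeholders for my by-uuid mounts and pool):

        # write to one specific disk, bypassing the pool's placement policy
        cp movie.mkv /srv/dev-disk-by-uuid-DISK1/media/movies/
        # or let mergerfs pick a branch according to its create policy
        cp movie.mkv /srv/mergerfs/pool/media/movies/
        # as far as I understand, the file shows up when browsing the pool either way,
        # since the pool merges the branches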

    I am getting ready to add some storage space to my OMV, but it will just be a collection of drives: 1x20 TB, 1x14 TB, and 2x12 TB. If my memory serves, there is a filesystem that allows for this and allows for one redundant drive, so long as the largest drive is set as the checksum drive. What filesystem is it? And is there any reason not to use it? Is there a better choice?

    Although this is likely a hardware issue, I thought I would ask here as I am sure there are a lot of knowledgeable people here that may be able to help.

    I bought an LSI 9201-16E, 2 cables (only connecting a few drives), and a KTN-STL3. The interposers it uses are P/N 303-115-003D, and they have 204-115-603 marked on the board.

    When I put a SAS drive in it, OMV recognizes it. So I know I am using the correct port/bay combination. But when I put a SATA drive in it, OMV does not.

    So, #1. Could this be a problem with OMV? I am using 6.9.15-1 (Shaitan) with Linux 5.19.17-2-pve (just a system for testing). Although I am pretty sure it isn't OMV, I guess the first step is to ask: could it be?

    #2. What else should I check? Do the interposers/HBA or even the JBOD need to be flashed with a specific firmware that permits SATA AND SAS? The HBA should be in IT mode (it was purchased as being in IT mode).
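
    In case it helps with suggestions, these are the checks I can run from the shell (assuming smartmontools and the LSI sas2flash utility are installed, which they may not be by default; /dev/sdX is whatever device node shows up):

        # does the kernel/HBA see the disk at all, independent of OMV?
        lsblk -o NAME,SIZE,MODEL,TRAN
        dmesg | grep -iE 'mpt2sas|sas|sata'
        # list LSI controllers and firmware versions (LSI utility, may need installing separately)
        sas2flash -listall
        # query the drive directly if a device node appears
        smartctl -i /dev/sdX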


    Sorry if this question isn't appropriate here; I just thought people here may have experience with these.