Help selecting a file system (Is that what to call it?)
I am getting ready to add some storage space to my OMV, but it will just be a collection of drives: 1x20 TB, 1x14 TB, and 2x12 TB. If my memory serves, there is a filesystem that allows for this and allows for one redundant drive, so long as the largest drive is set as the checksum drive. What file system is it? And is there any reason not to use it? Is there a better choice?
-
It's not a filesystem. You are probably thinking of SnapRAID.
-
Okay, I think I am starting to remember now. Using SnapRAID with mergerfs gives me a single mount point, with the backup/redundancy of a drive.
So, if I understand these correctly, even using both SnapRAID and mergerfs, drive loss beyond the tolerance level only results in data loss on the specific drives lost?
And then another related question: with mergerfs, can I still mount/access the individual drives separately if I want to? For example, if I want some data to go on a specific drive, can I just write it to that drive's mount point, and if I want mergerfs to handle where things are stored, write to the mergerfs mount point instead?
-
mergerfs and SnapRAID are two separate things that are not related to each other and don't require each other.
mergerfs provides a mount point for one or more drives (or directories) in combination.
SnapRAID provides a backup/restore function where one lost data drive can be recovered for each available parity drive in the array.
mergerfs sits on top of the other filesystems. The individual drive mountpoints are still present and functional as before.
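In case a concrete picture helps, here is a minimal sketch of a mergerfs pool defined in /etc/fstab. The disk paths, pool name, and placement policy are assumptions for illustration; on OMV the mergerfs plugin would normally set this up for you through the web UI.
```
# /etc/fstab -- example only: pool three data drives into one mount point
# category.create=mfs means new files go to the branch with the most free space
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  allow_other,category.create=mfs,minfreespace=20G,fsname=pool  0 0
```
The individual /mnt/diskN mount points stay usable alongside /mnt/pool, which is what makes the "write to a specific drive when I want to" workflow possible.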
-
Yes, that's what I remember and read. But mergerfs doesn't have any redundancy built in, does it? All the redundancy would come from installing the separate, unrelated SnapRAID?
And if that is correct, when a drive that is "protected" with SnapRAID is lost, is the data still fully accessible (like it is with RAID 5, for example), where you can read/write to the array with a faulty/missing drive in exactly the same way you would read/write to a fully intact array? Or does the drive need to be replaced and "rebuilt" before it can be accessed?
-
mergerfs does not offer any redundancy.
SnapRAID offers redundancy, but not in real time. A recovery process must be initiated by the user, and a significant amount of time is required to recover an entire lost drive. This is unlike true RAID, which allows access to data on a "lost" drive in real time, albeit at considerably degraded performance.
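As a sketch of what that user-initiated recovery looks like, following the procedure in the SnapRAID manual (the disk name d1 and the log file name are just examples; use whatever names your snapraid.conf defines):
```
# replace the failed drive, mount the empty replacement at the old path, then:
snapraid -d d1 -l recovery.log fix    # rebuild everything that lived on data disk d1
snapraid -d d1 check                  # verify the recovered files afterwards
```
Until that fix run has finished, the files that were on the failed drive are simply not there; the rest of the pool keeps working, but there is no degraded-mode access to the lost drive's data the way there is with RAID 5.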
-
So is mergerfs/SnapRAID the only way to use different-sized drives with redundancy? Or is there a different/better alternative?
-
I use mergerfs and SnapRAID. But I do not have an answer to your question.
-
Fair enough. I shall too then.
General interest question. Ever need to rebuild?
-
I had one disk fail a few years ago. SnapRAID allowed it to be fully recovered.
-
Yep, I think we all will experience a failure. It's just a matter of time. Do you mind if I ask how long the "rebuild" took? How many drives were in the array, and how much data needed to be recovered? I have a really hard time with the term "a long time", because a long time in the computer world can be 10 seconds... or 48 hours, depending on what one is doing.
-
It was an 8TB disk and all the data on it was lost. I think there were about nine or ten drives in the array at the time. I don't have an exact time figure for the recovery but it was an overnight thing.
I should mention that SnapRAID will not save and recover metadata such as permissions, ownership, and extended attributes. In my case all the files and directories on the disks protected by SnapRAID have the same ownership and permissions so it is easy to reset them to the proper values after recovery.
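If it helps to picture that last step, the post-recovery cleanup can be as simple as something like this (the mount path, owner, and modes are purely illustrative assumptions, not anything SnapRAID dictates):
```
# example only: reset ownership and permissions on a recovered data disk
chown -R myuser:users /mnt/disk1
find /mnt/disk1 -type d -exec chmod 755 {} +   # directories
find /mnt/disk1 -type f -exec chmod 644 {} +   # files
```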
-
Naw, I don't need any permissions or anything. It's gonna be 95% media, I am basically the only one who uses the server, and all of my important/private files are on a ZFS filesystem with a fault tolerance of two drives. If I have to recover, doing a chmod -R 777 will work just fine for what I will be storing on it.
So, I guess the next (and maybe final, who knows) question: is the "redundancy" created with the parity drive done automatically and instantly? Or is that what this scrubbing is for? I have read no fewer than five documents detailing scrubbing and still don't understand it.
Basically, with RAID 5, if I lose a drive one second after I have written a file, the file is still 100% recoverable. Is the same true with SnapRAID (somewhere I got the impression it wasn't), or does it take time? And how much time (assuming this 'scrubbing' isn't what creates it)?
Someone really needs to write a "What your average user needs to know about scrubbing, without a bunch of technical details your average user doesn't care about/can't understand" document, lol.
-
All SnapRAID operations are initiated by the user, either via command line, scripts, or scheduled commands. A sync operation must be run for files to be protected. So, if you add any files after the last sync, they are not protected until the next sync.
If you haven't yet, I suggest you read the SnapRAID manual, FAQ, and other material available at
snapraid.it
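As one illustration of the "scheduled commands" approach, a pair of cron entries along these lines would keep parity reasonably current (the times and the scrub percentage are arbitrary examples; the OMV SnapRAID plugin can schedule equivalent jobs from the web UI):
```
# /etc/cron.d/snapraid -- example schedule only, adjust to taste
# 03:00 every night: update parity so files added during the day become protected
0 3 * * *  root  /usr/bin/snapraid sync
# 05:00 every Sunday: scrub about 10% of the array, checking existing data against parity
0 5 * * 0  root  /usr/bin/snapraid scrub -p 10
```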
-
So is mergerfs/SnapRAID the only way to use different-sized drives with redundancy? Or is there a different/better alternative?
At least two possibilities come to my mind:
BTRFS is able to use drives of different sizes and provide redundancy (see the rough sketch below).
The other possibility is:
If you are knowledgeable about partitioning, mdadm, and LVM, you could build yourself a sliced hybrid RAID. But that requires setting it up manually on the command line.
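For what it's worth, the BTRFS route mentioned above can be sketched roughly like this (device names are examples; the raid1 profile keeps two copies of every block on two different drives, so usable space is roughly half of the raw total):
```
# example only: one btrfs filesystem across four differently sized drives,
# with data and metadata mirrored (raid1 = two copies on two separate devices)
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mount /dev/sda /srv/pool           # any member device can be used to mount the pool
btrfs filesystem usage /srv/pool   # shows how much usable space the mix of sizes yields
```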
One remark on SnapRAID:
You only have redundancy after you have taken a snapshot (a sync) and as long as the data is unmodified. As soon as data gets modified, you lose the redundancy for it and regain it after the next snapshot.
That's because SnapRAID doesn't provide online redundancy.
If you use it, you should be aware of that.
-