Ok... I obviously don't understand RAID but..

  • Please help me understand what is going on here.


    I have 3 disks (I am using VirtualBox to test out openmediavault before I commit to it)


    1 @ 1.5TB, 1 @ 512GB and 1 @ 256GB = total of ~2.2TB.


    Using Windows Storage Spaces, I create a pool which is ~2.2TB in size and then create a drive with 2-way mirroring, which gives me 1.1TB of usable space.


    In OMV if I select RAID 5 with these same 3 disk sizes I get 512GB of usable space (and if I select mirroring I only get 255GB of usable space 8| ) .


    :/


    Why is my usable space using the same disks so much less in linux than in windows?

    • Official post

    You are doing two different things. On Windows, it sounds like you are creating a JBOD device and then creating two partitions that are mirrored. If any drive failed, you would lose all data, and the mirror wouldn't help anything.


    On Linux with RAID 5, mdadm uses the space of the smallest drive (256GB) on all three drives (256GB x 3) and subtracts one drive's worth from the array for parity. You could do what you are doing on Windows, but I wouldn't, and it would have to be done from the command line. You would be better off just creating a unionfilesystem (mergerfs) pool of all three disks for ~2.2TB. The mirror is pointless, as mentioned above.
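    The capacity math above can be sketched in a few lines (a minimal illustration; the disk sizes are the ones from the original post, expressed in GB):

    ```python
    # mdadm only uses the capacity of the smallest member on every disk.
    def raid5_capacity(disks_gb):
        # RAID 5: each of n disks contributes min-size, one disk's worth goes to parity.
        return min(disks_gb) * (len(disks_gb) - 1)

    def mirror_capacity(disks_gb):
        # RAID 1: every member holds a full copy, so usable space is the smallest disk.
        return min(disks_gb)

    disks = [1536, 512, 256]  # 1.5TB, 512GB, 256GB

    print(raid5_capacity(disks))   # 512 -- the 512GB that OMV reports for RAID 5
    print(mirror_capacity(disks))  # 256 -- the ~255GB that OMV reports for a mirror
    ```

    The 1TB of "extra" space on the 1.5TB disk simply goes unused by mdadm, which is exactly the difference the original poster is seeing.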

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I think Windows Storage Spaces is a bit more advanced than that. My current (but ageing) hardware, which I want to move on from, has 7 disks, each a different size. The mirrored volume can lose any one of these disks and keep going, and assuming the disk is replaced with one of a similar size or bigger, it can rebuild to get back to full redundancy. I was hoping for something similar so that I could re-use the same set of disks until they wear out.

    • Official post

    Maybe it is more advanced. I have no idea since it is Windows. And you said three disks. You could never have 1.1TB mirrored with the three disks you told me about and survive a disk failure.


    That said, if you have seven disks, you can create something with redundancy with different size disks. Lots of people are using unionfilesystem (mergerfs) and snapraid to accomplish what it sounds like you want. mergerfs pools the drives and snapraid provides redundancy.
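    As a rough sketch of how those two pieces fit together, a snapraid.conf for a pool like that might look something like the following (the disk labels and paths here are made-up examples, not from this thread; snapraid does require the parity disk to be at least as large as the largest data disk):

    ```ini
    # /etc/snapraid.conf -- hypothetical example layout
    parity  /srv/dev-disk-by-label-parity/snapraid.parity
    content /srv/dev-disk-by-label-data1/snapraid.content
    content /var/snapraid/snapraid.content
    data d1 /srv/dev-disk-by-label-data1
    data d2 /srv/dev-disk-by-label-data2
    data d3 /srv/dev-disk-by-label-data3
    ```

    mergerfs would then pool the data disks (not the parity disk) into one mount point, and a periodic `snapraid sync` updates the parity so any single data disk can fail without data loss.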


    Although do you really need redundancy? I hope you aren't using this setup as backup.


  • No, this would be my primary server. Not yet sure whether to use mirroring or simply go back to old-fashioned rsync overnight to a secondary array.


    I will dig a bit deeper into snapraid - I have seen it mentioned.


    Thanks

    • Official post

    mergerfs pools the drives and snapraid provides redundancy.

    I always thought snapraid’s main feature was protection against bit rot of large unchanging files. I know “raid” is part of the name but I don’t hear anyone referring to it as redundancy. I’m not trying to cause a stir (looks like you’ve had plenty of that lately :) ) I’m just trying to understand.

    • Official post

    Without redundancy there can be no bitrot protection, only bitrot detection at most. However, bitrot detection plus backups can provide (manual) bitrot protection.


    Snapraid provides both: bitrot detection (via checksums) and protection (via parity drive redundancy).

    • Official post

    To add to Adoby's comments, snapraid doesn't offer realtime redundancy. But it is still redundancy, because it allows a drive to fail without losing data (it isn't backup, though). And don't confuse redundancy with availability. Helpful link as well - https://www.snapraid.it/compare


    • Official post

    @Adoby & @ryecoaaron Thank you both for your replies. I understand backup. I implemented it a long time ago with the "Full Disk Mirroring / Backup with Rsync" from page 64 of crashtest's OMV Getting Started guide. It is an amazing thing because it is so simple, and it just works. I have swapped the data disk out for the rsync disk numerous times, simply by repointing the shared folders to the rsync disk, without even breaking a sweat. I have even figured out how to back up using Rsync and a second SBC using WastlJ's Rsync Two OMV Machines.



    When I hear talk of silent data corruption, the Snapraid solution sounds perfect: so easy to set up, and necessary if you have a growing media server as I do. But I can't visualize a path forward that combines the way I back up with Snapraid (and maybe unionfilesystem). So where does Snapraid fit? On the main machine with its data and rsync disks, on the second just-backup machine, or both? And how? Both machines have four SATA ports each. I don't need 24/7 RAID availability or realtime redundancy.

    • Official post

    I agree, there is a lack of tools here.


    You need checksums and redundancy to handle bitrot. But you also need backups. That can be wasteful.


    Assuming you just do mirroring (using ZFS or BTRFS) for redundancy and bitrot protection, you would need four drives. Two each for data and backups.


    If you used checksums manually to detect bitrot, you could use the backup copy to fix it, or fix bitrot in the backups the other way around. Just manually replace the rotten file as needed. This should be possible to automate. Then you would need just two drives: original and backup. And checksums...
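    That manual scheme can be automated roughly along these lines. This is only a sketch of the idea, not any existing tool (the function names and the manifest format are invented here): keep a manifest of known-good checksums, then on each run compare both copies against it and heal whichever side has rotted from the other.

    ```python
    import hashlib
    import shutil
    from pathlib import Path

    def sha256_of(path):
        """Stream a file through SHA-256 so large files don't need to fit in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def check_and_repair(original_dir, backup_dir, manifest):
        """Compare each file against its recorded checksum. If the original has
        rotted but the backup still matches, restore it from the backup -- and
        vice versa. Returns the list of repaired files."""
        repaired = []
        for rel, digest in manifest.items():
            orig = Path(original_dir) / rel
            back = Path(backup_dir) / rel
            orig_ok = sha256_of(orig) == digest
            back_ok = sha256_of(back) == digest
            if orig_ok and not back_ok:
                shutil.copy2(orig, back)
                repaired.append(rel)
            elif back_ok and not orig_ok:
                shutil.copy2(back, orig)
                repaired.append(rel)
            # If neither copy matches the manifest, only manual recovery remains.
        return repaired
    ```

    The manifest would be built (or updated) at backup time, when the files are known to be good; after that, any single rotten copy can be replaced from its twin, which is exactly the "backups as redundancy" idea described above.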


    I am working on a simple backup utility, a spare-time project. It works partially but is still very buggy. It works by creating backup snapshots similar to what rsync or rsnapshot can do (very similar to my rsync snapshot script), but it is much simpler (and possibly somewhat faster) than rsync because it only works on whole files and local filesystems, including NFS. Written in C++.


    During daily (or whatever) backups it checksums and backs up new and changed files, and creates new snapshots using hardlinks. But it also does a bitrot check of 5% (or whatever) of the old (supposedly) unchanged files, in both the backup copy and the original files. Since there are two copies of every file available, backup and original, there is a good chance that any bitrotten file can be replaced.


    5% means that all files are checked for bitrot every 20 days.


    This would work fine with drives in pairs. Either in the same NAS or in the LAN.


    BTRFS, ZFS and Snapraid can already handle this. Faster. Directly. But, for my needs, also wastefully.


    I would prefer to use my backups as redundancy for bitrot protection, without the need to introduce more redundancy. And I would be fine checking for bitrot only during backups.


    Edit: Replace "drive" with "filesystem" or "partition" as appropriate above.


    Perhaps I am inventing the wheel? Do you know of some similar way to semi-automatically provide bitrot protection using backup copies rather than a filesystem with builtin redundancy?

    • Official post

    Well, any backup by definition is wasteful but it sure comes in handy when your data disk spins down (another vexing topic for me) and then refuses to spin back up. I can live with waste as long as it provides real insurance against loss. We do it all the time with home-auto-life-health insurance. I believe I have disk failure covered. I’m concerned with data corruption now. I am perfectly willing to add disks to incorporate bit rot protection into my system. I just don’t know how to pull the trigger.


    @Adoby I have read your explanation of snapshots, checksums, etc. that you use and it is fascinating but I fear it is above my skill level. OMV’s GUI integration of Snapraid seems doable to me. It’s just a matter of making it work with my Rsync setup.

    • Official post

    Perhaps I am inventing the wheel? Do you know of some similar way to semi-automatically provide bitrot protection using backup copies rather than a filesystem with builtin redundancy?

    While it would probably be slow, rsync does have a --checksum flag that does this. It could be used with rsync, rsnapshot (pretty sure), or your script, though it's not as flexible as your self-written utility. Borgbackup does checksumming (and dedup and compression) but doesn't normally keep the files always accessible. You can, however, mount a borgbackup archive as a filesystem.


    Edited once, last by ryecoaaron ()
