SnapRAID - Multiple Pools?

  • I wanted to see if anyone has set up multiple SnapRAID pools, or whether there is a way to do so. The idea: I have 8 disks (4 x 2TB / 4 x 4TB). I know that for a pool the parity disk needs to be the largest. I'd like to create 2 separate pools: the 4x2TB in pool 1 and the 4x4TB in pool 2. The rationale is that if I put it all together and used 2 parity disks, I'd have to use 2 of the 4TB disks (to my understanding). Instead, I'd like to use 1 of the 4TB and 1 of the 2TB disks for parity and keep the pools separate among the similar-sized disks. I know 2TB might seem petty to worry about when we're talking about 16 vs 18 TB of usable disk - but 2TB is 2TB.


    If it helps for color: when I'm finished moving some data and setting up, the underlying filesystems will ALL be LUKS-encrypted disks, with 2 different SnapRAID pools, and then MergerFS on top of it all to present all 18TB of usable disk as a single mount point.

    • Official post

    I mentioned this in another post, but you can only have one pool with the plugin. I think snapraid itself will let you have more than one pool, since you can specify the config file location, but I haven't tried it. Allowing multiple pools would be a huge change to the plugin.
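For anyone curious about the CLI route: snapraid selects its config file with the `-c` option, so two independent pools could in principle be driven from two config files. A minimal sketch, assuming snapraid is installed; all paths and disk names below are hypothetical examples, not the plugin's actual layout:

```shell
# Hypothetical config for a first pool built from the 2TB disks.
# (Writing to /tmp purely for illustration; a real setup would keep
# these under /etc and point at real mount points.)
cat > /tmp/snapraid-2tb.conf <<'EOF'
parity /srv/disk-2tb-1/snapraid.parity
content /srv/disk-2tb-1/snapraid.content
content /srv/disk-2tb-2/snapraid.content
data d1 /srv/disk-2tb-2
data d2 /srv/disk-2tb-3
data d3 /srv/disk-2tb-4
EOF

# A second config (e.g. /tmp/snapraid-4tb.conf) would follow the same
# pattern for the 4TB disks. Each pool would then be synced on its own:
#   snapraid -c /tmp/snapraid-2tb.conf sync
#   snapraid -c /tmp/snapraid-4tb.conf sync
echo "wrote $(grep -c '^data ' /tmp/snapraid-2tb.conf) data disk entries"
```

Each invocation only knows about the disks in its own config, so the two pools never see each other.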

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I don't 100% understand the point of this setup.


    What you're proposing is not a 2-parity-disk setup, but rather 2 pools with a single parity disk each.
    Why is that different from a single pool with a single 4TB parity disk?


    Let's see: you have 4x2TB and 4x4TB.


    A single pool with 1x4TB parity gets you a 20TB protected config:
    (2+2+2+2) + (4+4+4) = 8 + 12 = 20
    Your setup yields 18TB and loses you 2TB:
    (2+2+2) = 6
    (4+4+4) = 12
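The capacity comparison above can be double-checked with a quick bit of shell arithmetic (sizes in TB):

```shell
# One pool: the single largest disk (4TB) is reserved for parity.
single_pool=$(( 2+2+2+2 + 4+4+4 ))     # 20 TB usable
# Two pools: each pool gives up its own largest disk to parity.
two_pools=$(( (2+2+2) + (4+4+4) ))     # 18 TB usable
echo "single pool: ${single_pool} TB, two pools: ${two_pools} TB"
echo "difference: $(( single_pool - two_pools )) TB"
```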


    And not only do you lose the 2TB in the process, you add complexity and limit your expansion capability,
    without gaining anything at all. You still have a single-parity setup, in a sense.
    And now, if you want to grow the 2TB pool, you will need to add a bigger drive to it as parity (losing that drive's capacity) plus a second drive to add capacity.
    Also, you do know that you can add parity and data disks at any time?


    ALSO, I think you misunderstand how SnapRAID works (OR I misunderstand your post somehow).
    Even when you configure SnapRAID, unless you use the pooling feature in SnapRAID itself, you end up with just a bunch of disks. SnapRAID, unlike mdadm and such, does not actually combine the disks into a single volume. It just keeps track of the disks and the data on them, and if something is amiss it acts on the issue. So you can mergerfs all of them however you want.
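For the mergerfs layer, a single fstab line is enough to present the SnapRAID data disks as one mount point. A sketch only; the mount points and the option set are illustrative, not taken from anyone's actual setup in this thread:

```
# Hypothetical /etc/fstab entry pooling three data disks under /srv/pool
/srv/disk1:/srv/disk2:/srv/disk3  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0
```

`category.create=mfs` writes new files to the branch with the most free space; the parity disk is simply left out of the branch list so parity data never appears in the pool.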

    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846(24 bay)| H8DME-2 |2x AMD Opteron Hex Core 2431 @ 2.4Ghz |49GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI |3xSAT2-MV8 PCI-X |4 NIC : 2x Realteck + 1 Intel Pro Dual port PCI-e card
    OS on 2×120 SSD in RAID-1 |
    DATA: 3x3T| 4x2T | 2x1T

  • @ryecoaaron - thanks, understood. I figured it likely would, but wanted to validate. I guess I'd have to look at doing some manual management at the CLI if I wanted to attempt it.


    @vl1969 - thanks for the input. I could be missing something, but my understanding was, when you add more Parity disks, you account for more fault tolerance. AKA - if I lose a disk with 1 parity disk, I'm ok. I need to rebuild that 1 disk, then I'm back to operating. If I lose 2 disks, I'm SOL period. However, if I was to have 2 parity disks, I now increase my fault tolerance to support 2 disks lost. So in theory I could lose one of my 4TB and one of my 2TB and still be in ok shape, just have to rebuild.


    I know that if I'm extremely concerned with the loss of more than 1 disk, I should potentially look at RAID. I'm not extremely concerned, but I like to err on the side of caution allowing myself a little bit of extra coverage if possible for a minimal cost. In this case, RAID setup would cost more either via hardware and/or via disk usage to get proper coverage. Not to mention the potential r/w penalties associated.


    One thing you do make me think of, though, is that if I go with 2 separate pools, then I'm effectively allowing for the loss of only 1 disk in EACH pool vs 2 disks across the larger pool. So it sounds like, in the optimal scenario of 2 disks' worth of fault tolerance, I would have to use 2x4TB disks as parity and leave the remaining 16TB to pool under MergerFS.

  • Well, that is not how SnapRAID actually works, you know :)


    The thing with SnapRAID is that when you lose more drives than can be recovered, you only lose the data on those drives. The rest is still there and usable.


    I.e., let's say you have an 8-disk setup:
    1 P(arity) + 7 D(ata)


    This protects you against a single data drive loss: if you lose one data disk, you can recover the data on it.
    If you lose 2 disks, then whatever data was on those disks is lost, but the data on the other 5 is still there and usable. So with a decent backup you only lose a percentage of the data temporarily, until you can restore from backup.


    So even if you do lose 2 disks, you only lose the data on those disks, period.
    If you lose 2x4TB, you lose 8TB of data.
    If you lose 1x2TB and 1x4TB, you lose 6TB of data; all other data is safe and available.
    As per the SnapRAID wiki, you can stay with single-disk parity for up to 4 data disks;
    after that you need to add an additional parity disk for every 7 data disks in the pool.


    Parity disks / level / data disks:
    1 / Single Parity / RAID5: 2 - 4
    2 / Double Parity / RAID6: 5 - 14
    3 / Triple Parity: 15 - 21
    4 / Quad Parity: 22 - 28
    5 / Penta Parity: 29 - 35
    6 / Hexa Parity: 36 - 42


  • @vl1969 - Sorry, I didn't mean to sound so crude in the way I mentioned being "SOL". I do realize that the only data lost is the data on the failed disks. What I was illustrating was that if I expand to 2 parity disks, I can actually recover 2 failed disks. I thought this understanding was correct: the more parity disks I have, the more failures I can handle. If that's incorrect, then I was missing something about how SnapRAID works.


    Based on the table you shared of how many parity disks I need, it does appear I need 2 parity disks, since I have a total of 8 disks. And even if you remove the 2 for parity, I'm still at 6 data disks, which is still in a category needing 2 parity drives.


    So in the end, it seems like unless I can setup 2 pools, I'd have to use 2 of my 4TB disks anyways to handle the parity at this point.

  • So does that mean SnapRAID can implement 2 pools, but we just cannot manage them from the OMV plugin?

    Will OMV create issues if we manage 2 independent SnapRAID pools via the CLI?

    OMV6 i5-based PC

    OMV6 on Raspberry Pi4

    OMV5 on ProLiant N54L (AMD CPU)
