Possible to grow linear pool?

  • I've been using 3 drives in linear configuration and am trying to add a fourth to the pool, but the "grow" button is grayed out. Is it not possible to grow a linear pool or am I doing something wrong?

    Case: U-NAS NSC-810
    Motherboard: ASRock - C236 WSI Mini ITX
    CPU: Core i7-6700
    Memory: 32GB Crucial DDR4-2133

  • Re,

    I have the "Test" pool selected, but am trying to grow the "Home" pool which is linear. I'm aware stripe can't be grown

    Then you have to select the other array ... but:
    - you can grow any RAID array made with md, even striped
    - your striped RAID1 array seems to be made of two fake-RAID drives (dm = device-mapper), so you can only alter that array in the fake-RAID environment (the controller's BIOS); md only "reads" it ...
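    If you're not sure which of your arrays is plain md and which one comes from the fake-RAID/device-mapper side, a quick look from the console usually tells you (names and output will differ on your box):

    Code
    cat /proc/mdstat   # lists the md arrays and their member disks
    lsblk              # shows the whole block device tree, so dm/fake-RAID devices stand out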


    Which physical disk do you want to add to "Home"?


    Sc0rp

    Linear is not a real RAID, but I can't tell you much more than that - I don't know much at all about RAID.
    https://en.wikipedia.org/wiki/RAID



    Another 8TB drive, but I was playing with the test array so that if I broke something, no data would be lost. Whichever pool I pick, though, I can't grow it.


  • Re,


    Maybe the best strategy for you isn't RAID0 ... have you heard about UnionFS (in OMV 3.x that means mergerfs)?
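    Just as an illustration (the paths below are made up - in OMV you'd set this up through the union filesystems plugin), a mergerfs pool over individually formatted disks is basically one fstab line:

    Code
    # /etc/fstab - pool three separate data disks under one mount point (example paths)
    /srv/disk1:/srv/disk2:/srv/disk3  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0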


    I don't know why OMV can't "grow" a RAID0 - and I'm missing screenshots of "Physical disks" and "File systems" - but you can grow your array via the console as well (SSH or local).
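    From the console it would be roughly this, assuming the linear array is /dev/md0 and the new disk is /dev/sdd (names are just examples, check yours first):

    Code
    # append the new disk to the linear array (its capacity gets added at the end)
    mdadm --grow /dev/md0 --add /dev/sdd
    # then enlarge the filesystem on top, e.g. for ext4:
    resize2fs /dev/md0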


    Sc0rp


  • I've heard of UFS but have no experience with it. I'm eventually planning on switching to RAID 6. How would I go about growing it then?







    From
    https://raid.wiki.kernel.org/index.php/RAID_setup#5._Grow



    "Grow, shrink or otherwise reshape an array in some way. Currently supported growth options including changing the active size of component devices in RAID level 1/4/5/6 and changing the number of active devices in RAID1."


    But I thought technically a linear pool wasn't considered a RAID?


  • Re,

    I've heard of UFS but have no experience with it. I'm eventually planning on switching to RAID 6. How would I go about growing it then?

    RAID6 is possible but old-fashioned - why not use ZFS instead? RAID-Z2 is the ZFS equivalent of RAID6 ... but it depends highly on the use case, because you could also set up a SnapRAID/mergerfs construct (SnapRAID handles redundancy with up to six parity drives too).
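    Just to show what a RAID-Z2 pool looks like (pool name and device names are invented, on a real box you'd use /dev/disk/by-id paths):

    Code
    # create a RAID-Z2 pool from six disks - any two may fail without data loss
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    zpool status tank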


    Sc0rp

  • Re,

    RAID6 is possible but old-fashioned - why not use ZFS instead? RAID-Z2 is the ZFS equivalent of RAID6 ... but it depends highly on the use case, because you could also set up a SnapRAID/mergerfs construct (SnapRAID handles redundancy with up to six parity drives too).
    Sc0rp


    Yes, but sometimes old-fashioned is good :)


    I could do the whole ZFS thing, but I have mixed feelings about the whole scrubbing feature. Part of me feels like it's a bad idea and will lead to premature wear and tear over time. I don't know anything about SnapRAID, unfortunately. Is there any reason not to use RAID6 in your opinion? I'm eventually going to have (8) 8TB drives.


  • I have mixed feelings about the whole scrubbing feature


    ...but want to use RAID (where OMV also scrubs regularly unless users decide to behave irresponsibly)?


    It feels really strange hearing of people wanting to waste two entire disks on something they try to sabotage at the same time (added complexity requires continuous testing; most of the "My array is gone! Where is all my data?" threads that show up here every other day come down to users blindly trusting technology instead of testing, testing and testing... and confusing availability with data protection way too often).

  • Re,

    I could do the whole ZFS thing, but I have mixed feelings about the whole scrubbing feature.

    Hmm, scrubbing an array (whatever the technology) is basic "array management". You need it to verify your redundancy and integrity - how would you achieve that without "scrubbing" (or some other ongoing measurement)?


    Scrubbing is part of the deal, that's a fact. If you want to avoid it, you can't use any array technology ...
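    And it's not much work either - a scrub is a one-liner on both stacks (array/pool names are just examples):

    Code
    # start a consistency check on an md array
    echo check > /sys/block/md0/md/sync_action
    # or scrub a ZFS pool
    zpool scrub tank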


    I'm eventually going to have (8) 8TB drives.

    Yeah, if it's mostly static content (a media archive), then SnapRAID is a good choice; if you're building a workgroup-grade NAS, you should go with ZFS. But you have to learn and understand both technologies (the same goes for RAID).
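    A minimal SnapRAID setup is just a small config file plus a periodic sync - the paths below are invented, only to show the idea:

    Code
    # /etc/snapraid.conf - one parity disk protecting three data disks (example paths)
    parity /srv/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    data d1 /srv/disk1
    data d2 /srv/disk2
    data d3 /srv/disk3
    # then run "snapraid sync" regularly, and "snapraid scrub" to verify the data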


    In the end, with a very good backup strategy you may not need redundancy at all :P


    Sc0rp

  • RAID0 is really a bad idea for a NAS. If one of your drives dies, then you lose everything on all the drives. I suggest you get rid of that setup ASAP!


    Why do you need RAID at all? Do you have customers that expect 100% uptime? Do you lose money if your data is unavailable for a day or two? If not, then you don't need RAID, you need BACKUP.

  • Re,

    RAID0 is really a bad idea for a NAS. If one of your drives dies, then you lose everything on all the drives. I suggest you get rid of that setup ASAP!

    That's not correct for linear mode! There you only lose the data on the failed drive, plus the (at most two) files that straddle the beginning and the end of the failed drive.


    Sc0rp

    It's the same for linear, one drive goes south and the array dies too

    Sure, can't be any different since with classic/anachronistic RAID there's a huge gap between the device and the filesystem layer. RAID builds a block device and the filesystem sitting on top knows nothing about the stuff happening on the layers below. So all the block device layer can do is to fail completely once a device is missing/broken.


    It takes modern approaches that overcome those outdated concepts from the last century to get the desired behaviour (data lost only on the devices that are physically missing or damaged):


    Code
    # data stored as single copies spread across all devices, metadata kept redundant as raid10
    mkfs.btrfs -d single -m raid10 /dev/sd?

    Btrfs, when used directly and NOT on top of mdraid, can bridge between the device and the filesystem layer (as expected, since it's something born in this century and not the last).
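    And growing such a pool later is trivial as well - mount point and device name below are only placeholders:

    Code
    # add another disk to the mounted btrfs filesystem and spread the data over it
    btrfs device add /dev/sde /srv/pool
    btrfs balance start -d /srv/pool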

  • Re,

    It's the same for linear, one drive goes south and the array dies too, it's on the kernel wiki.

    Nope, it may say that on the kernel wiki, but ages ago I had a RAID0 linear setup ... you can still read the remaining disks. It's not easy, of course, but it's possible ... anyway, today I would never do that again!


    Sc0rp
