Possible to grow linear pool?

    • OMV 3.x

    • Possible to grow linear pool?

      I've been using 3 drives in a linear configuration and am trying to add a fourth to the pool, but the "grow" button is grayed out. Is it not possible to grow a linear pool, or am I doing something wrong?
      Case: U-NAS NSC-810
      Motherboard: ASRock - C236 WSI Mini ITX
      CPU: Core i7-6700
      Memory: 32GB Crucial DDR4-2133
    • Linear is not a real RAID, but I can't tell you more than that - I don't know much about RAID.
      en.wikipedia.org/wiki/RAID
    • Re,

      elastic wrote:

      I have the "Test" pool selected, but am trying to grow the "Home" pool which is linear. I'm aware stripe can't be grown
      Then you have to select the other array ... but:
      - you can grow any RAID array made with md, even a striped one
      - your striped RAID1 array seems to be made of two fake-RAID drives (dm = device-mapper), so you can only alter that array in the fake-RAID environment (the controller's BIOS); md only "reads" it ...

      Which physical disk do you want to add to "Home"?

      Sc0rp
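
      For reference, a quick way to check what md actually reports for an array - /dev/md0 here is just a placeholder device name - is:

      Source Code

      1. cat /proc/mdstat          # lists all md arrays, their level (linear, raid0, ...) and member disks
      2. mdadm --detail /dev/md0   # shows size, level and state of one specific array

      A dm/fake-RAID set, by contrast, shows up under /dev/mapper and is managed from the controller BIOS rather than by mdadm.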
    • Wolf2000 wrote:

      Linear is not a real RAID, but I can't tell you more than that - I don't know much about RAID.
      en.wikipedia.org/wiki/RAID
      I don't know much at all about RAID

      Sc0rp wrote:

      Re,

      elastic wrote:

      I have the "Test" pool selected, but am trying to grow the "Home" pool which is linear. I'm aware stripe can't be grown
      Then you have to select the other array ... but:
      - you can grow any RAID array made with md, even a striped one
      - your striped RAID1 array seems to be made of two fake-RAID drives (dm = device-mapper), so you can only alter that array in the fake-RAID environment (the controller's BIOS); md only "reads" it ...

      Which physical disk do you want to add to "Home"?

      Sc0rp

      Another 8TB drive - but I was playing with the test array so that if I broke something, no data would be lost. Whichever option I pick, though, I cannot grow the RAID.
    • From
      raid.wiki.kernel.org/index.php/RAID_setup#5._Grow


      "Grow, shrink or otherwise reshape an array in some way. Currently supported growth options including changing the active size of component devices in RAID level 1/4/5/6 and changing the number of active devices in RAID1."
    • Sc0rp wrote:

      Re,

      Maybe the best strategy for you is not RAID0 ... have you heard about UnionFS (in OMV 3.x that's mergerfs)?

      I don't know why OMV can't "grow" a RAID0 - and I'm missing screenshots of the "Physical disks" and "File systems" pages - but you can grow your array from the console as well (SSH or local).

      Sc0rp

      I've heard of UFS but have no experience with it. I'm eventually planning on switching to RAID 6. How would I go about growing it then?

      subzero79 wrote:

      From
      raid.wiki.kernel.org/index.php/RAID_setup#5._Grow


      "Grow, shrink or otherwise reshape an array in some way. Currently supported growth options including changing the active size of component devices in RAID level 1/4/5/6 and changing the number of active devices in RAID1."

      But I thought technically a linear pool wasn't considered a RAID?
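
      For reference, growing a linear md array from the console, as Sc0rp suggests, could look roughly like this - assuming the array is /dev/md0, the new disk is /dev/sdd and the filesystem on it is ext4 (adjust the names to your setup):

      Source Code

      1. mdadm --grow /dev/md0 --add /dev/sdd    # append the new disk to the end of the linear array
      2. resize2fs /dev/md0                      # grow the ext4 filesystem into the new space

      For XFS you would run xfs_growfs on the mount point instead of resize2fs.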
    • Re,

      elastic wrote:

      I've heard of UFS but have no experience with it. I'm eventually planning on switching to RAID 6. How would I go about growing it then?
      RAID6 is possible but old fashioned - why not use ZFS instead? ZFS-Z2 (RAID-Z2) is the RAID6 equivalent ... but it depends highly on the use case, because you can also set up a SnapRAID/mergerfs construct (which handles redundancy too, with up to six parity drives).

      Sc0rp
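
      For orientation, the ZFS-Z2 mentioned above is a RAID-Z2 vdev; creating one with hypothetical disk names (eight disks, "tank" as a placeholder pool name) would look roughly like:

      Source Code

      1. zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

      In practice you would use stable /dev/disk/by-id paths rather than sdX names.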
    • Sc0rp wrote:

      Re,

      elastic wrote:

      I've heard of UFS but have no experience with it. I'm eventually planning on switching to RAID 6. How would I go about growing it then?
      RAID6 is possible but old fashioned - why not use ZFS instead? ZFS-Z2 (RAID-Z2) is the RAID6 equivalent ... but it depends highly on the use case, because you can also set up a SnapRAID/mergerfs construct (which handles redundancy too, with up to six parity drives).
      Sc0rp

      Yes, but sometimes old fashioned is good :)

      I could do the whole ZFS thing, but I have mixed feelings about the whole scrubbing feature. Part of me feels like it's a bad idea and will lead to premature wear and tear as time progresses. I don't know anything about SnapRAID, unfortunately. Is there any reason not to use RAID6, in your opinion? I'm eventually going to have (8) 8TB drives.
    • elastic wrote:

      I have mixed feelings about the whole scrubbing feature

      ...but you want to use RAID (where OMV also scrubs regularly, unless users decide to behave irresponsibly)?

      It feels really strange to hear of people wanting to waste two entire disks on something they try to sabotage at the same time (added complexity requires continuous testing). Most of the "My array is gone! Where is all my data?" threads that appear every other day here come from users blindly trusting the technology instead of testing, testing and testing again... and from confusing availability with data protection way too often.
    • Re,

      elastic wrote:

      I could do the whole ZFS thing, but I have mixed feelings about the whole scrubbing feature.
      Hmm, scrubbing an array (whatever the underlying technology) is basic "array management". You need it to verify your redundancy and integrity - how would you achieve that without scrubbing (or some other ongoing measurement)?

      Scrubbing is part of it, that's a fact. If you want to avoid it, you can't use any array technology ...

      elastic wrote:

      I'm eventually going to have (8) 8TB drives.
      Yeah, if it is mostly static content (a media archive), then SnapRAID is a good choice; if you are building a workgroup-grade NAS, you should go with ZFS. But you have to learn and understand both technologies (the same goes for RAID).

      And with a very good backup strategy you may not need redundancy at all :P

      Sc0rp
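
      For orientation, a SnapRAID/mergerfs setup like the one described above is driven by snapraid.conf; a minimal sketch with hypothetical mount points (one parity disk, three data disks) might look like:

      Source Code

      1. # /etc/snapraid.conf (hypothetical example)
      2. parity /mnt/parity1/snapraid.parity
      3. content /var/snapraid/snapraid.content
      4. content /mnt/data1/snapraid.content
      5. data d1 /mnt/data1/
      6. data d2 /mnt/data2/
      7. data d3 /mnt/data3/

      "snapraid sync" then computes the parity, and "snapraid scrub" performs the periodic checking discussed above.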
    • RAID0 is really a bad idea for a NAS. If one of your drives dies, then you lose everything on all the drives. I suggest you get rid of that setup ASAP!

      Why do you need RAID at all? Do you have customers that expect 100% uptime? Do you lose money if your data is unavailable for a day or two? If not, then you don't need RAID, you need BACKUP.
    • subzero79 wrote:

      It's the same for linear: one drive goes south and the array dies too.
      Sure, can't be any different since with classic/anachronistic RAID there's a huge gap between the device and the filesystem layer. RAID builds a block device and the filesystem sitting on top knows nothing about the stuff happening on the layers below. So all the block device layer can do is to fail completely once a device is missing/broken.

      You need modern approaches that overcome those outdated concepts from the last century to get the desired behaviour (data is lost only on the devices that are physically missing or damaged):

      Source Code

      1. mkfs.btrfs -d single -m raid10 /dev/sd?
      That works because btrfs, when used directly and NOT on top of mdraid, can bridge the device and filesystem layers (as expected, since it's something born in this century and not the last).
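
      For completeness, growing such a btrfs pool later is a matter of adding the new device and rebalancing - a rough sketch with a hypothetical mount point and disk name:

      Source Code

      1. btrfs device add /dev/sde /srv/pool    # add the new disk to the mounted filesystem
      2. btrfs balance start /srv/pool          # spread existing data and metadata across all devices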