RAID6 with 5 drives only allows me to add the new drive as a spare

  • Dear friends,


    I am running an OMV RAID build that started on OMV3 and was then switched to OMV4.
    The RAID consists of five 8 TB disk drives used as a RAID6 device. Then I decided
    to add another disk (same type, same vendor). The disk appears in the drive list,
    but when I went to the RAID menu, selected the RAID and pushed the Grow
    (vergrößern) button, the disk was only added to the RAID as a spare disk, so the
    capacity did not change.
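
    (For reference, and only as an assumption about what the Grow button does under the
    hood: the plain "add" step by itself only registers a disk as a spare; the array
    also has to be grown afterwards. The manual equivalent would look roughly like
    this, /dev/sda being the new disk:)
    # mdadm --add /dev/md127 /dev/sda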


    How can I undo this and use the new drive (shown as /dev/sda below) as an active
    device, extending the RAID capacity?


    # mdadm --detail /dev/md127
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /dev/md127:
    Version : 1.2
    Creation Time : Sat Jul 23 23:15:41 2016
    Raid Level : raid6
    Array Size : 23441685504 (22355.73 GiB 24004.29 GB)
    Used Dev Size : 7813895168 (7451.91 GiB 8001.43 GB)
    Raid Devices : 5
    Total Devices : 6
    Persistence : Superblock is persistent



    Intent Bitmap : Internal



    Update Time : Mon Jan 22 21:41:01 2018
    State : clean
    Active Devices : 5
    Working Devices : 6
    Failed Devices : 0
    Spare Devices : 1



    Layout : left-symmetric
    Chunk Size : 512K



    Name : woody:raiddev (local to host woody)
    UUID : e6001385:99d68308:36c78329:14720f1f
    Events : 14077



    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    1 8 32 1 active sync /dev/sdc
    2 8 48 2 active sync /dev/sdd
    3 8 64 3 active sync /dev/sde
    4 8 80 4 active sync /dev/sdf



    5 8 0 - spare /dev/sda
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



    Contents of /etc/mdadm/mdadm.conf:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md127 metadata=1.2 spares=1 name=woody:raiddev UUID=e6001385:99d68308:36c78329:14720f1f
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



    # cat /proc/partitions
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    major minor #blocks name


    259 0 244198584 nvme0n1
    259 1 235956224 nvme0n1p1
    259 2 1 nvme0n1p2
    259 3 8240128 nvme0n1p5
    8 0 7814026584 sda
    8 16 7814026584 sdb
    8 32 7814026584 sdc
    8 48 7814026584 sdd
    8 64 7814026584 sde
    8 80 7814026584 sdf
    9 127 23441685504 md127
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
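
    (A quick cross-check, not taken from the OMV output above: lsblk shows at a glance
    that the six data disks are used as whole devices, without partition tables, and
    which md device they belong to:)
    # lsblk -o NAME,SIZE,TYPE,MOUNTPOINT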



    [Update-00]
    I followed this howto: http://www.ewams.net/?date=201…a_RAID6_volume_with_mdadm
    It looks like the commands ran fine up to step 6 (see the result of mdadm --detail above).


    So step 7 was still outstanding:



    # mdadm -v --grow --raid-devices=5 /dev/md127


    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    mdadm: /dev/md127: no change requested
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
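
    (In hindsight this is expected: the array already reports "Raid Devices : 5", so
    asking for --raid-devices=5 is a no-op; the count has to be raised before the
    spare gets pulled in. A quick way to read the current count first, as a sketch:)
    # mdadm --detail /dev/md127 | grep -E 'Raid Devices|Total Devices'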



    The output looks strange to me. mdadm tells me that there is no change. Can it be
    that the process of enlarging takes some time?



    [Update-01]
    According to this page, it may take some time (about a day):
    https://ubuntuforums.org/showthread.php?t=2146170
    So I am going to post an update tomorrow.


    # echo check > /sys/block/md127/md/sync_action
    # cat /proc/mdstat
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid6 sda[5](S) sdf[4] sdc[1] sde[3] sdd[2] sdb[0]
    23441685504 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
    [>....................] check = 1.7% (138283584/7813895168) finish=690.6min speed=185228K/sec
    bitmap: 0/59 pages [0KB], 65536KB chunk
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
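
    (Side note, hedged: writing "check" to sync_action only kicks off a parity scrub of
    the existing array; it verifies the redundancy but does not reshape anything or
    promote the spare. The scrub can be followed with a generic watch loop, for
    example:)
    # watch -n 30 cat /proc/mdstat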


    [Update-02]
    Sad to say, this did not help at all. After remove, add and grow, it is the same situation.


    --> Help !!!

  • OK, I solved the issue.


    First of all, OMV may have made the same mistake, so I went on experimenting with the mdadm CLI.
    The documentation at http://www.ewams.net/?date=201…RAID6_volume_with_mdadm#b
    is not correct. The error is in step 7:


    7 Expand the RAID6 array to include data on the new disk.
    root@debian:~# mdadm -v --grow --raid-devices=7 /dev/md0


    The argument 7 of the parameter raid-devices=7 in that howto is wrong, it must be 8(!),
    because it denotes the total number of devices to handle in the RAID, not the Number
    of the new disk shown in:


    root@debian:~# mdadm --detail /dev/md0
    /dev/md0:
    Raid Level : raid6
    Array Size : 11721060352 (11178.07 GiB 12002.37 GB)
    Raid Devices : 6
    Total Devices : 7
    Persistence : Superblock is persistent
    State : clean
    Number Major Minor RaidDevice State
    0 8 32 0 active sync /dev/sdc
    1 8 48 1 active sync /dev/sdd
    2 8 80 2 active sync /dev/sdf
    4 8 96 3 active sync /dev/sdg
    5 8 112 4 active sync /dev/sdh
    6 8 16 5 active sync /dev/sdb
    7 8 0 - spare /dev/sda


    So the details of my RAID are:
    # mdadm --detail /dev/md127
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


    UUID : e6001385:99d68308:36c78329:14720f1f
    Events : 14119


    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    1 8 32 1 active sync /dev/sdc
    2 8 48 2 active sync /dev/sdd
    3 8 64 3 active sync /dev/sde
    4 8 80 4 active sync /dev/sdf


    5 8 0 - spare /dev/sda
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


    So there will be 6 disks, and therefore the mdadm grow command should look like this:
    # mdadm -v --grow --raid-devices=6 /dev/md127


    Now when I ask the RAID how it feels, I get:
    # mdadm --detail /dev/md127
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /dev/md127:
    Version : 1.2
    Creation Time : Sat Jul 23 23:15:41 2016
    Raid Level : raid6
    Array Size : 23441685504 (22355.73 GiB 24004.29 GB)
    Used Dev Size : 7813895168 (7451.91 GiB 8001.43 GB)
    Raid Devices : 6
    Total Devices : 6
    Persistence : Superblock is persistent



    Intent Bitmap : Internal



    Update Time : Thu Jan 25 08:26:35 2018
    State : clean, reshaping
    Active Devices : 6
    Working Devices : 6
    Failed Devices : 0
    Spare Devices : 0



    Layout : left-symmetric
    Chunk Size : 512K



    Reshape Status : 0% complete
    Delta Devices : 1, (5->6)



    Name : woody:raiddev (local to host woody)
    UUID : e6001385:99d68308:36c78329:14720f1f
    Events : 14147



    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    1 8 32 1 active sync /dev/sdc
    2 8 48 2 active sync /dev/sdd
    3 8 64 3 active sync /dev/sde
    4 8 80 4 active sync /dev/sdf
    5 8 0 5 active sync /dev/sda
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


    This is what I want, and:
    # cat /proc/mdstat
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid6 sda[5] sdc[1] sdd[2] sdb[0] sdf[4] sde[3]
    23441685504 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
    [>....................] reshape = 0.3% (28559472/7813895168) finish=3012.3min speed=43073K/sec
    bitmap: 0/59 pages [0KB], 65536KB chunk


    unused devices: <none>
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


    Yeah, the system is in the reshaping state; this is reasonable and will take about a day.
    Time to reshape myself and move to work too ;(
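
    (If the reshape speed reported in /proc/mdstat is capped by the md defaults rather
    than by the disks, the generic kernel speed limits can be inspected and raised; a
    sketch using the standard sysctls, the value here is only an example:)
    # sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
    # sysctl -w dev.raid.speed_limit_min=100000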


    I am going to leave a message for the author of the howto above.
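
    (One more housekeeping step, as an assumption based on the Debian-style config shown
    above: the ARRAY line in /etc/mdadm/mdadm.conf still says spares=1. After the
    reshape it can be regenerated from the live array and the initramfs refreshed:)
    # mdadm --detail --scan
    (compare the printed ARRAY line with /etc/mdadm/mdadm.conf and update it, then)
    # update-initramfs -u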

  • Hi,


    right, raid-devices must be the full number of all drives incl. the new one(s) ... so, if you upgrade from 5 to 6:
    mdadm --grow --raid-devices=6 /dev/mdX
    and if you upgrade from 5 to 8:
    mdadm --grow --raid-devices=8 /dev/mdX


    The -v switch just turns on verbose output ...


    After finishing the reshape, you have to "resize" your filesystem too.
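
    A hedged example of that last step; the right command depends on which filesystem
    sits on the array, and the thread does not say. For ext4 the md device can be grown
    online, for XFS the mount point is passed instead (the path below is only a
    placeholder):
    resize2fs /dev/mdX
    xfs_growfs /srv/yourmount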


    Sc0rp
