Dear friends,
I am running an OMV RAID build that started on OMV3 and then switched to OMV4.
The RAID consists of five 8 TB disk drives used as a RAID6 device. I then decided
to add another disk (same type, same vendor). The disk appears in the drive list,
but when I went to the RAID menu, selected the RAID, and pushed the grow
("vergrößern") button, the disk was added to the RAID as a spare disk,
so the capacity did not change.
How can I undo this setting and use the drive sda as an active device,
extending the RAID capacity?
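For reference, my understanding of the mdadm side of this: a disk that is only a spare holds no array data yet, so it can be removed directly, without failing it first. A minimal sketch, assuming the spare is /dev/sda as shown in the output below:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# a spare carries no array data, so it can be removed straight away
mdadm /dev/md127 --remove /dev/sda
# optionally wipe its superblock so it can be re-added later as a fresh disk
mdadm --zero-superblock /dev/sda
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~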
# mdadm --detail /dev/md127
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/dev/md127:
Version : 1.2
Creation Time : Sat Jul 23 23:15:41 2016
Raid Level : raid6
Array Size : 23441685504 (22355.73 GiB 24004.29 GB)
Used Dev Size : 7813895168 (7451.91 GiB 8001.43 GB)
Raid Devices : 5
Total Devices : 6
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Jan 22 21:41:01 2018
State : clean
Active Devices : 5
Working Devices : 6
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : woody:raiddev (local to host woody)
UUID : e6001385:99d68308:36c78329:14720f1f
Events : 14077
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
4 8 80 4 active sync /dev/sdf
5 8 0 - spare /dev/sda
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
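(As a cross-check, the per-device state can also be read straight from sysfs; a quick sketch, assuming the kernel exposes the spare under dev-sda:)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# per-device state as the kernel sees it ("in_sync" for active, "spare" for the new disk)
cat /sys/block/md127/md/dev-sda/state
# number of devices the array is currently configured for
cat /sys/block/md127/md/raid_disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~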
Contents of /etc/mdadm/mdadm.conf:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md127 metadata=1.2 spares=1 name=woody:raiddev UUID=e6001385:99d68308:36c78329:14720f1f
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
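As an aside, the spares=1 in the ARRAY line above was presumably captured by mdadm --detail --scan while the disk was a spare; once the array layout changes, the line can be regenerated the same way:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# print an up-to-date ARRAY line for /etc/mdadm/mdadm.conf
mdadm --detail --scan
# then rebuild the initramfs so the boot environment sees the new definition (Debian/OMV)
update-initramfs -u
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~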
# cat /proc/partitions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
major minor #blocks name
259 0 244198584 nvme0n1
259 1 235956224 nvme0n1p1
259 2 1 nvme0n1p2
259 3 8240128 nvme0n1p5
8 0 7814026584 sda
8 16 7814026584 sdb
8 32 7814026584 sdc
8 48 7814026584 sdd
8 64 7814026584 sde
8 80 7814026584 sdf
9 127 23441685504 md127
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
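(All six data drives report the same size, 7814026584 blocks, so a size mismatch should not be the problem; if in doubt, a single drive can be checked like this:)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# report the usable size of the new disk in bytes
blockdev --getsize64 /dev/sda
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~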
[Update-00]
I followed this howto: http://www.ewams.net/?date=201…a_RAID6_volume_with_mdadm
It looks like the commands ran fine up to step 6 (see the result of mdadm --detail above).
So step 7 was still outstanding:
# mdadm -v --grow --raid-devices=5 /dev/md127
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mdadm: /dev/md127: no change requested
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The output looks strange to me. mdadm told me that there is no change. Can it be that the process
of enlarging takes some time?
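Re-reading the mdadm --detail output above, I notice it already reports "Raid Devices : 5", so asking for --raid-devices=5 literally requests no change; if the spare is supposed to become an active device, the target count presumably has to be 6:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# grow from 5 to 6 active devices; mdadm pulls the spare in automatically
mdadm -v --grow --raid-devices=6 /dev/md127
# the reshape then shows up here and can run for many hours
cat /proc/mdstat
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~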
[Update-01]
According to this page, it may take some time (a day):
https://ubuntuforums.org/showthread.php?t=2146170
So I am going to let it run and post an update tomorrow.
# echo check > /sys/block/md127/md/sync_action
# cat /proc/mdstat
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active raid6 sda[5](S) sdf[4] sdc[1] sde[3] sdd[2] sdb[0]
23441685504 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/5] [UUUUU]
[>....................] check = 1.7% (138283584/7813895168) finish=690.6min speed=185228K/sec
bitmap: 0/59 pages [0KB], 65536KB chunk
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
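Note to self: the check line above is the scrub I triggered via sync_action, not a reshape; sda still shows as (S) (spare) and the array is still [5/5]. For reference, two ways to keep an eye on a long-running sync action:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# refresh the mdstat view once a minute
watch -n 60 cat /proc/mdstat
# or simply block until the current sync action is done
mdadm --wait /dev/md127
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~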
[Update-02]
Sad to say, this did not help at all. After removing, re-adding, and growing, I am in the same situation.
--> Help!!!