Hi,
I am running OMV6, with a RAID 5 volume of 4 2TB disks.
As my RAID volume is 80% full, I'd like to extend its capacity. My plan is to swap each 2TB disk for a new 4TB disk, one at a time. I expect the RAID to go into degraded mode, then recover once the new 4TB disk is added to the array; when the RAID has recovered, I swap the next 2TB disk, and so on. Ultimately, mdadm should be managing 4 disks of 4TB and I will enjoy a larger RAID5 volume.
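For reference, this is roughly the per-disk sequence I had in mind (the device names /dev/md127 and /dev/sdb are examples, to be adapted to the actual system, and everything runs as root):

```shell
# Per-disk replacement cycle (device names are examples; adapt to your system)
# 1. Mark the old 2TB disk as failed and remove it from the array
mdadm /dev/md127 --fail /dev/sdb
mdadm /dev/md127 --remove /dev/sdb
# 2. Physically swap the disk, then add the new 4TB disk back into the array
mdadm /dev/md127 --add /dev/sdb
# 3. Watch the rebuild; wait until it finishes before swapping the next disk
cat /proc/mdstat
# 4. After all four disks have been replaced, grow the array and the filesystem
mdadm --grow /dev/md127 --size=max
resize2fs /dev/md127
```

Please correct me if any step of that plan is wrong.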
Unfortunately, when I swap the first 2TB disk for the new 4TB disk, the RAID5 is not mounted in OMV (FYI, I formatted the new 4TB disk as EXT4 and then wiped it prior to this operation, so it's clean). Moreover, I have no way to recover the RAID5: the "Recover" button is greyed out in the OMV interface, under the "RAID" menu.
This is the status of my volume:
mdadm -vQD /dev/md127
/dev/md127:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 3
       Persistence : Superblock is persistent
             State : inactive
   Working Devices : 3
              Name : GemenosNAS:Gemenos
              UUID : 5e222c44:31a161c4:3899a442:92cbf54f
            Events : 2238

    Number   Major   Minor   RaidDevice
       -       8       32        -        /dev/sdc
       -       8       48        -        /dev/sdd
       -       8       16        -        /dev/sdb
We can see that 3 of the 4 disks of my RAID5 are still attached, yet the array is inactive, so I don't understand why I can't recover it by adding the new 4TB disk.
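In case it matters, from the command line I would have expected something like the following to bring the array back up degraded so the new disk can be added (this is a guess on my part, using the device names from the output above; /dev/sde stands for the new 4TB disk):

```shell
# Stop the inactive array, then force-assemble it from the three remaining members
mdadm --stop /dev/md127
mdadm --assemble --force /dev/md127 /dev/sdb /dev/sdc /dev/sdd
# Once the array is active (degraded), add the new 4TB disk to start the rebuild
mdadm /dev/md127 --add /dev/sde   # /dev/sde = example name of the new disk
cat /proc/mdstat
```

I haven't dared to run this yet; is this the right approach, or would it risk the data?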
When I reconnect my old 2TB disk, the RAID5 mounts properly.
I didn't find any similar issue in the forum.
Any help would be appreciated!
Regards