RAID 5 offline and not rebuilding with replacement drive

  • Looking for some expert advice on how to proceed please!


    An SSD for OMV, plus two RAID 5 arrays used as data storage: 3 x 3 TB and 3 x 1.5 TB. The 3 x 1.5 TB array, labelled 'ExternalDisk' (/dev/md126), suffered a drive failure. I had an identical cold spare that I used to replace the failed drive. When I then tried the 'Recover' option in OMV to add the new drive to the failed array, I got this message:


    Failed to execute command 'export LANG=C; mdadm --manage '/dev/md126' --add /dev/sdh 2>&1': mdadm: cannot load array metadata from /dev/md126



    Degraded Array Info from Support Information Menu:


    Personalities : [raid6] [raid5] [raid4]
    md126 : active raid5 sdd[0](F) sdf[2](F) sde[1](F)
    2930014208 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/0] [___]


    md127 : active raid5 sdb[0] sdg[2] sdc[1]
    5860530176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    [===================>.] check = 99.7% (2921787740/2930265088) finish=1.7min speed=82042K/sec


    unused devices: <none>
    /dev/sda1: UUID="945edb3f-adb2-4f97-aaa4-672051d9de49" TYPE="ext4"
    /dev/sda5: UUID="41cecf67-457a-4600-8189-596325aee2ce" TYPE="swap"
    /dev/sdb: UUID="308e44c1-5e9a-6c22-3949-34a900e335e1" UUID_SUB="ae78e9e4-a100-c804-77a4-a1c4b4462fa0" LABEL="OpenMediaVault:Storage" TYPE="linux_raid_member"
    /dev/md127: LABEL="Disk" UUID="fd5d3309-9138-4ee0-8426-194937c3dd59" TYPE="ext4"
    /dev/sdc: UUID="308e44c1-5e9a-6c22-3949-34a900e335e1" UUID_SUB="9bc7e9e5-1a50-77e4-b81d-f01f01215a10" LABEL="OpenMediaVault:Storage" TYPE="linux_raid_member"
    /dev/sdg: UUID="308e44c1-5e9a-6c22-3949-34a900e335e1" UUID_SUB="f73c7900-3683-a068-b7e7-588abfba158a" LABEL="OpenMediaVault:Storage" TYPE="linux_raid_member"
    /dev/md126: LABEL="ExternalDisk" UUID="7f39497e-1305-437f-9628-2b4137e21845" TYPE="ext4"
    /dev/sdi: UUID="538d7098-e88a-fb5b-0611-ed878e2ee3c1" UUID_SUB="7a8a8436-ffce-c3dd-27fc-6a886db765b9" LABEL="Castleserver:ExternalDisk" TYPE="linux_raid_member"
    /dev/sdj: UUID="538d7098-e88a-fb5b-0611-ed878e2ee3c1" UUID_SUB="3aca517c-b646-f969-f8a0-95c296b5a379" LABEL="Castleserver:ExternalDisk" TYPE="linux_raid_member"


    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Device Boot Start End Blocks Id System
    /dev/sdb1 1 1565565871 782782935+ ee GPT
    Partition 1 does not start on physical sector boundary.


    Disk /dev/sda: 180.0 GB, 180045766656 bytes
    255 heads, 63 sectors/track, 21889 cylinders, total 351651888 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00063bde


    Device Boot Start End Blocks Id System
    /dev/sda1 * 2048 337332223 168665088 83 Linux
    /dev/sda2 337334270 351649791 7157761 5 Extended
    /dev/sda5 337334272 351649791 7157760 82 Linux swap / Solaris


    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes
    255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0xcabacaba


    Disk /dev/md127: 6001.2 GB, 6001182900224 bytes
    2 heads, 4 sectors/track, 1465132544 cylinders, total 11721060352 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
    Disk identifier: 0x00000000


    Disk /dev/md126: 3000.3 GB, 3000334548992 bytes
    2 heads, 4 sectors/track, 732503552 cylinders, total 5860028416 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdh: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdi: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdj: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
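
    The mdstat output above shows all three members of md126 marked (F) and the array state as [3/0] [___], so mdadm considers the array dead rather than merely degraded; that is why --add fails with "cannot load array metadata". A common recovery path in this situation (a sketch only, assuming the member disks /dev/sdd, /dev/sde, /dev/sdf are physically healthy and dropped out due to a transient cable/controller event) is to stop the array and force-reassemble it from its surviving members before re-adding the spare:

    ```shell
    # Check each member's superblock first to confirm the metadata is intact
    # and compare event counts (devices are assumptions from the mdstat above):
    mdadm --examine /dev/sdd /dev/sde /dev/sdf

    # Stop the dead array so mdadm releases its stale state:
    mdadm --stop /dev/md126

    # Re-assemble from the original members; --force accepts members whose
    # event counts have drifted slightly apart:
    mdadm --assemble --force /dev/md126 /dev/sdd /dev/sde /dev/sdf

    # Only once the array is running again can the replacement disk be added:
    mdadm --manage /dev/md126 --add /dev/sdh

    # Watch the rebuild progress:
    cat /proc/mdstat
    ```

    Note that --force can mask a real double-disk failure, so checking the --examine output (and ideally backing up first) before reassembling is worthwhile.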

  • Pretty much the same here:
    RAID 1 (mirror), and after removing disk 2 from the array in the web console I cannot recover any more:


    Failed to execute command 'export LANG=C; mdadm --manage '/dev/md0' --add /dev/sdb 2>&1': mdadm: ARRAY line /dev/md0 has no identity information. mdadm: cannot load array metadata from /dev/md0


    Error #4000:
    exception 'OMVException' with message 'Failed to execute command 'export LANG=C; mdadm --manage '/dev/md0' --add /dev/sdb 2>&1': mdadm: ARRAY line /dev/md0 has no identity information.
    mdadm: cannot load array metadata from /dev/md0' in /usr/share/openmediavault/engined/rpc/raidmgmt.inc:400
    Stack trace:
    #0 [internal function]: OMVRpcServiceRaidMgmt->add(Array, Array)
    #1 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
    #2 /usr/share/php/openmediavault/rpc.inc(79): OMVRpcServiceAbstract->callMethod('add', Array, Array)
    #3 /usr/sbin/omv-engined(500): OMVRpc::exec('RaidMgmt', 'add', Array, Array, 1)
    #4 {main}


    It would be nice if someone could comment ... recovering a RAID is one of the most essential features; if that doesn't work reliably, I don't need a NAS with a RAID behind it!
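
    For what it's worth, the extra message here, "ARRAY line /dev/md0 has no identity information", usually points at a bare `ARRAY /dev/md0` entry in /etc/mdadm/mdadm.conf with no UUID. A sketch of one way to repair it (the UUID shown in the comment is illustrative, not from this system):

    ```shell
    # Print correct ARRAY lines for the currently running arrays; a valid
    # line looks like:
    #   ARRAY /dev/md0 metadata=1.2 UUID=538d7098:e88afb5b:0611ed87:8e2ee3c1
    mdadm --detail --scan

    # After removing the bare "ARRAY /dev/md0" line from the config, append
    # the generated lines and rebuild the initramfs so boot picks them up:
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    ```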

  • OK guys, for me it was really simple:
    Just rebooting the Pi where OMV is installed, via the OMV web interface, solved the problem: both disks are fully present in the RAID again :)
    What I still don't know is what would have happened if it had been a real disk replacement, i.e. with a new empty disk... well, I'll probably try that soon.
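
    In case it helps anyone trying that: a real replacement on a two-disk RAID 1 typically looks like the sketch below (device names are assumptions; here /dev/sdb is the new blank disk going into /dev/md0):

    ```shell
    # If the old member is still listed as failed, clear its slot first:
    mdadm --manage /dev/md0 --remove failed

    # Add the blank replacement disk; the resync onto it starts automatically:
    mdadm --manage /dev/md0 --add /dev/sdb

    # Monitor the resync until it reaches [UU]:
    watch cat /proc/mdstat
    ```

    If the replacement disk was previously part of another array, wiping its old superblock with `mdadm --zero-superblock /dev/sdb` before adding it avoids confusing mdadm.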
