RAID5 missing after adding disk


    • Hello everyone,


      My RAID5 array went missing after I added a new disk to it.

      Before growing: 3 x 10TB disks
      After growing: 4 x 10TB disks


      I added the new disk with
      mdadm --add /dev/md0 /dev/sda

      --> somehow the new disk showed up as sda!?
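
      Before adding, it's worth double-checking which device node really is the new, empty disk, since names like sda can shuffle between boots. An illustrative check:

      Shell-Script

      # list whole disks with size, model and serial number to identify the new one
      lsblk -d -o NAME,SIZE,MODEL,SERIAL
      # stable names that survive reboots
      ls -l /dev/disk/by-id/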


      and grew the RAID with
      mdadm --grow /dev/md0 --raid-devices=4

      Everything went fine. My RAID grew and the new storage space showed up in the WebGUI.
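
      (On disks this size the reshape itself takes many hours; its progress can be followed with, for example:)

      Shell-Script

      # refresh the reshape/recovery status every 10 seconds
      watch -n 10 cat /proc/mdstat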

      I copied some files to it and shut it down.

      After rebooting, my RAID was missing. I have no idea what went wrong.

      I hope you can help me. Thank you very much!


      Release: 4.1.19-1
      Codename: Arrakis




      Output results:



      1. cat /proc/mdstat

      Source Code

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : inactive sdb[0](S) sdd[2](S) sdc[1](S)
            29298921984 blocks super 1.2
      unused devices: <none>



      2. blkid

      Source Code

      /dev/sdb: UUID="430dadae-5f1c-40c9-c3a5-a8d515749f6c" UUID_SUB="945f5477-7c7f-97b5-cf6d-60701af938fb" LABEL="openmediavault:SimonsNAS" TYPE="linux_raid_member"
      /dev/sdd: UUID="430dadae-5f1c-40c9-c3a5-a8d515749f6c" UUID_SUB="177204c9-dfb9-f78c-725b-b9861fe857b6" LABEL="openmediavault:SimonsNAS" TYPE="linux_raid_member"
      /dev/sdc: UUID="430dadae-5f1c-40c9-c3a5-a8d515749f6c" UUID_SUB="921d4d34-4ee0-dc0a-1ef0-6c014035953c" LABEL="openmediavault:SimonsNAS" TYPE="linux_raid_member"
      /dev/sde1: UUID="4bc0abc4-ac0f-4814-a4f4-e51213828b15" TYPE="ext4" PARTUUID="0286303e-01"
      /dev/sde5: UUID="02284066-1848-411d-9a57-7d952e67d8a7" TYPE="swap" PARTUUID="0286303e-05"
      /dev/sda1: PARTUUID="69221752-7171-483e-a28f-ae7d83f81caf"

      3. fdisk -l | grep "Disk "

      Source Code

      Disk /dev/sdb: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
      Disk /dev/sdd: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
      Disk /dev/sda: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
      Disk identifier: 460B81D3-EA0D-4E21-ABD4-92B84661D883
      Disk /dev/sdc: 9,1 TiB, 10000831348736 bytes, 19532873728 sectors
      Disk /dev/sde: 59,6 GiB, 64023257088 bytes, 125045424 sectors
      Disk identifier: 0x0286303e

      4. cat /etc/mdadm/mdadm.conf

      Source Code

      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=openmediavault:SimonsNAS UUID=430dadae:5f1c40c9:c3a5a8d5:15749f6c
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR *****_******@web.de

      5. mdadm --detail --scan --verbose

      Source Code

      INACTIVE-ARRAY /dev/md0 num-devices=3 metadata=1.2 name=openmediavault:SimonsNAS UUID=430dadae:5f1c40c9:c3a5a8d5:15749f6c
         devices=/dev/sdb,/dev/sdc,/dev/sdd


    • Have you tried

      Shell-Script

      $ omv-mkconf mdadm
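
      omv-mkconf mdadm rewrites /etc/mdadm/mdadm.conf from OMV's settings. If the array should also assemble correctly at boot, refreshing the initramfs afterwards is usually worthwhile so the boot image picks up the new config (a standard Debian step, not OMV-specific):

      Shell-Script

      # rebuild the initramfs so the updated mdadm.conf is used at boot
      $ update-initramfs -u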
    • Zerstoerer_ wrote:

      md0 : inactive sdb[0](S) sdd[2](S) sdc[1](S)
      The RAID is inactive, and it shows only 3 drives associated, so /dev/sda is not part of the array, as the blkid output also shows. Try the following from the CLI:

      mdadm --assemble --verbose --force /dev/md0 /dev/sd[bcd]

      If you get a busy error, stop the array with mdadm --stop /dev/md0, then run it again.

      This should bring the array back up, then feed back.
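
      Before forcing an assemble it can also be worth comparing the event counters on the members; if they differ a lot, --force may roll back recent writes. An illustrative check (the grep pattern is only for readability):

      Shell-Script

      # show the event counter stored in each member's superblock
      mdadm --examine /dev/sd[bcd] | grep -E '/dev/sd|Events'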
    • It worked, it's back up with 3 drives...

      Source Code

      mdadm: Unknown keyword INACTIVE-ARRAY
      mdadm: looking for devices for /dev/md0
      mdadm: Merging with already-assembled /dev/md/SimonsNAS
      mdadm: /dev/sdb is identified as a member of /dev/md/SimonsNAS, slot 0.
      mdadm: /dev/sdc is identified as a member of /dev/md/SimonsNAS, slot 1.
      mdadm: /dev/sdd is identified as a member of /dev/md/SimonsNAS, slot 2.
      mdadm: /dev/sdc is already in /dev/md/SimonsNAS as 1
      mdadm: added /dev/sdd to /dev/md/SimonsNAS as 2
      mdadm: no uptodate device for slot 3 of /dev/md/SimonsNAS
      mdadm: /dev/sdb is already in /dev/md/SimonsNAS as 0
      mdadm: /dev/md/SimonsNAS has been started with 3 drives (out of 4).
    • :thumbup:

      In the GUI, go to Storage -> Disks, select your new drive, then choose Wipe from the menu bar.

      Then under RAID Management, select the RAID array and choose Grow from the menu bar, select the new drive in the popup box and click OK. The drive should then add itself and grow the array. That's the theory :)
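
      For reference, a rough CLI equivalent of those two steps might be (device names taken from the outputs above; double-check them first, since wipefs is destructive):

      Shell-Script

      # remove the leftover partition table/signatures from the new disk
      wipefs -a /dev/sda
      # add it to the degraded array; md will start rebuilding onto it
      mdadm --add /dev/md127 /dev/sda
      # follow the recovery
      cat /proc/mdstat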
    • I can't remember if I wiped the new drive. :D


      1. cat /proc/mdstat

      Source Code

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid5 sdd[2] sdc[1] sdb[0]
            29298917376 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
            bitmap: 5/73 pages [20KB], 65536KB chunk

      2. blkid

      Source Code

      /dev/sdb: UUID="430dadae-5f1c-40c9-c3a5-a8d515749f6c" UUID_SUB="945f5477-7c7f-97b5-cf6d-60701af938fb" LABEL="openmediavault:SimonsNAS" TYPE="linux_raid_member"
      /dev/sdd: UUID="430dadae-5f1c-40c9-c3a5-a8d515749f6c" UUID_SUB="177204c9-dfb9-f78c-725b-b9861fe857b6" LABEL="openmediavault:SimonsNAS" TYPE="linux_raid_member"
      /dev/sdc: UUID="430dadae-5f1c-40c9-c3a5-a8d515749f6c" UUID_SUB="921d4d34-4ee0-dc0a-1ef0-6c014035953c" LABEL="openmediavault:SimonsNAS" TYPE="linux_raid_member"
      /dev/sde1: UUID="4bc0abc4-ac0f-4814-a4f4-e51213828b15" TYPE="ext4" PARTUUID="0286303e-01"
      /dev/sde5: UUID="02284066-1848-411d-9a57-7d952e67d8a7" TYPE="swap" PARTUUID="0286303e-05"
      /dev/md127: LABEL="Raid5" UUID="4ba52b0b-c03f-4854-964f-a18ca8bcebe4" TYPE="ext4"
      /dev/sda1: PARTUUID="69221752-7171-483e-a28f-ae7d83f81caf"
    • :D

      Source Code

      root@openmediavault:~# mdadm --detail /dev/md127
      mdadm: Unknown keyword INACTIVE-ARRAY
      /dev/md127:
              Version : 1.2
        Creation Time : Mon Oct 8 21:07:22 2018
           Raid Level : raid5
           Array Size : 29298917376 (27941.63 GiB 30002.09 GB)
        Used Dev Size : 9766305792 (9313.88 GiB 10000.70 GB)
         Raid Devices : 4
        Total Devices : 3
          Persistence : Superblock is persistent

        Intent Bitmap : Internal

          Update Time : Mon Mar 4 16:10:38 2019
                State : clean, degraded
       Active Devices : 3
      Working Devices : 3
       Failed Devices : 0
        Spare Devices : 0

               Layout : left-symmetric
           Chunk Size : 512K

                 Name : openmediavault:SimonsNAS (local to host openmediavault)
                 UUID : 430dadae:5f1c40c9:c3a5a8d5:15749f6c
               Events : 36685

          Number   Major   Minor   RaidDevice   State
             0       8      16         0        active sync   /dev/sdb
             1       8      32         1        active sync   /dev/sdc
             2       8      48         2        active sync   /dev/sdd
             -       0       0         3        removed
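
      Slot 3 shows as removed, which matches the degraded [UUU_] state above. Once the new disk is wiped and re-added, that slot should switch to rebuilding; progress can be followed with something like (grep pattern illustrative):

      Shell-Script

      # rebuild percentage as reported by mdadm
      mdadm --detail /dev/md127 | grep -E 'State|Rebuild Status'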