RAID5 disk failure.

    • OMV 4.x
    • OK, I unplugged a drive while powered down and tried the command that ness1602 suggested, changing it a bit to include only /dev/sdb, /dev/sdc, and /dev/sdd, as those are the remaining 3 drives in the array.

      Source Code

      root@openmediavault4:~# mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdc is busy - skipping
      mdadm: /dev/sdd is busy - skipping
      No joy there.
      I then went to the GUI under file systems where the RAID array shows up as a device and unmounted it. It changed to unmounted.
      I tried the command again with the same results.
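      A quick way to see why the members report busy, using only standard mdadm and /proc queries (not part of the original posts; device names as above):

      Source Code

      # An inactive md0 still claims sdb/sdc/sdd, which is why --assemble reports them busy
      cat /proc/mdstat
      # Inspect a member's superblock; the array UUID and event counts should match across members
      mdadm --examine /dev/sdb
      # Release the members so a forced assemble can proceed (this is what is suggested below)
      mdadm --stop /dev/md0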
    • JimT wrote:

      Actually clicking on a folder to open it throws an error message that says "The share is inaccessible because a device has been removed"
      It would, because as far as Windows is concerned it's still there on the network.

      JimT wrote:

      No joy there.
      I then went to the GUI under file systems where the RAID array shows up as a device and unmounted it. It changed to unmounted.
      I tried the command again with the same results.
      Interesting that you were able to unmount it from the GUI; that must be possible because the raid is inactive. Do you get a 'save configuration' prompt?

      What happens if you do mdadm --stop /dev/md0 and then mdadm --assemble --force /dev/md0 /dev/sd[bcd]? If that works, what's the output of cat /proc/mdstat?
      Raid is not a backup! Would you go skydiving without a parachute?
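      For reference, a minimal sketch of that sequence as it would be typed (commands only, run as root; same device names as in this thread, and --force should only be used once you know which member is stale):

      Source Code

      # Stop the inactive array so it releases its member disks
      mdadm --stop /dev/md0
      # Force-assemble from the three remaining members
      mdadm --assemble --force /dev/md0 /dev/sd[bcd]
      # Verify: a degraded 4-disk RAID5 should show [4/3] with one missing slot
      cat /proc/mdstat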
    • ness1602 wrote:

      When you fail one disk (mdadm based), the RAID should be active/degraded. It shouldn't be inactive at any point.
      Yes and no. If you fail a drive using mdadm, the output would be clean/degraded; if you pull a drive whilst the machine is powered down, it will come up as inactive, as @ryecoaaron confirmed yesterday. Simply stopping the raid and reassembling will bring it back up as clean/degraded.
      This can also occur if there is a power outage: one drive would fail to initialise, and this 'could' also result in the array being inactive rather than clean/degraded. I've tested one and experienced the other.
      Raid is not a backup! Would you go skydiving without a parachute?
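      A quick way to check which of those states an array is actually in (standard mdadm queries, not from the original posts):

      Source Code

      # An 'inactive' line with every member flagged (S) means the array has not been started
      cat /proc/mdstat
      # The 'State :' line here shows clean, clean/degraded, or inactive
      mdadm --detail /dev/md0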
    • geaves wrote:

      ness1602 wrote:

      When you fail one disk (mdadm based), the RAID should be active/degraded. It shouldn't be inactive at any point.
      Yes and no. If you fail a drive using mdadm, the output would be clean/degraded; if you pull a drive whilst the machine is powered down, it will come up as inactive, as @ryecoaaron confirmed yesterday. Simply stopping the raid and reassembling will bring it back up as clean/degraded. This can also occur if there is a power outage: one drive would fail to initialise, and this 'could' also result in the array being inactive rather than clean/degraded. I've tested one and experienced the other.
      This is consistent with what I'm seeing happen.

      geaves wrote:

      Interesting that you were able to unmount it from the GUI; that must be possible because the raid is inactive. Do you get a 'save configuration' prompt?
      What happens if you do mdadm --stop /dev/md0 and then mdadm --assemble --force /dev/md0 /dev/sd[bcd]? If that works, what's the output of cat /proc/mdstat?
      Yes, I got a save configuration prompt.
      I found it really weird, though, that even after unmounting, powering down and plugging the unplugged drive back in returned everything to normal, with the Raid clean and all drives included.
      I intend to try your suggested commands this evening.


    • Powered down the machine and unplugged a drive.
      On boot, the entire raid array was gone from the GUI, as had been the case before.
      I ran the commands below that geaves and ness1602 helped me with.
      It appears that after a forced stop command and then an assemble command, I'm good to go.
      The array showed back up in the GUI in a degraded state, which allowed me to add another drive and recover from there (a rebuild sketch follows the output below).

      Source Code

      root@openmediavault4:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : inactive sdb[0](S) sdd[3](S) sdc[2](S)
      556247409 blocks super 1.2
      unused devices: <none>
      root@openmediavault4:~# mdadm --stop /dev/md0
      mdadm: stopped /dev/md0
      root@openmediavault4:~# mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd
      mdadm: /dev/md0 has been started with 3 drives (out of 4).
      root@openmediavault4:~#
      This is a great learning experience and I thank you gentlemen greatly.
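      For completeness, a minimal sketch of the rebuild step mentioned above. The replacement disk is assumed to appear as /dev/sde (a hypothetical name; check with lsblk first), and note that --zero-superblock destroys any old mdadm metadata on that disk:

      Source Code

      # Identify the replacement disk
      lsblk
      # If the disk was ever part of another array, clear its old mdadm metadata first (hypothetical /dev/sde)
      mdadm --zero-superblock /dev/sde
      # Add it to the degraded array; the rebuild starts automatically
      mdadm --add /dev/md0 /dev/sde
      # Watch the recovery progress
      watch cat /proc/mdstat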
    • Hello folks, I'm asking for your help.


      First of all, I'm not a Linux expert; I use this system just for myself.
      I had the same problem with RAID5 and performed the steps described above.
      Over PuTTY I get the following output:

      Source Code

      root@zuhause:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md0 : active raid5 sdb[0] sdd[4] sdc[3]
      3906766848 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk
      unused devices: <none>
      root@zuhause:~# blkid
      /dev/sda1: UUID="fbc65d7c-174c-4fc8-82d2-1287df163194" TYPE="ext4" PARTUUID="bbec484c-01"
      /dev/sda5: UUID="c1dc2f85-6a94-42f8-b46e-12c14488cbf8" TYPE="swap" PARTUUID="bbec484c-05"
      /dev/sdb: UUID="90ac7b52-dad7-3430-43e2-3631dd5d0bab" UUID_SUB="7a54d402-6c6f-6365-b131-cdb3d827d153" LABEL="zuhause:Raid" TYPE="linux_raid_member"
      /dev/sdd: UUID="90ac7b52-dad7-3430-43e2-3631dd5d0bab" UUID_SUB="6a4b34c6-bba7-abab-49aa-455ae7ff817a" LABEL="zuhause:Raid" TYPE="linux_raid_member"
      /dev/md0: UUID="B3qaTb-2Tbj-Lmny-jE54-ibhh-ypeB-0b4Gcf" TYPE="LVM2_member"
      /dev/sdc: UUID="90ac7b52-dad7-3430-43e2-3631dd5d0bab" UUID_SUB="51ab9d20-db8b-e22a-639d-03049a3397d4" LABEL="zuhause:Raid" TYPE="linux_raid_member"
      /dev/mapper/speicher-Raidspeicher: LABEL="Speicher" UUID="8b4ff320-223e-4aaf-821a-0792b4ec3378" UUID_SUB="6c587569-eade-4152-8683-814ea3dc4eae" TYPE="btrfs"
      root@zuhause:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 119,2 GiB, 128035676160 bytes, 250069680 sectors
      Disk identifier: 0xbbec484c
      Partition 2 does not start on physical sector boundary.
      Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/md0: 3,7 TiB, 4000529252352 bytes, 7813533696 sectors
      Disk /dev/mapper/speicher-Raidspeicher: 2 TiB, 2182107627520 bytes, 4261928960 sectors
      root@zuhause:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 spares=1 name=zuhause:Raid UUID=90ac7b52:dad73430:43e23631:dd5d0bab
      root@zuhause:~# mdadm --detail --scan --verbose
      ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=zuhause:Raid UUID=90ac7b52:dad73430:43e23631:dd5d0bab
      devices=/dev/sdb,/dev/sdc,/dev/sdd


      Via the WebUI I get the following error when mounting:

      Source Code

      Error #0:
      OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mount -v --source '/dev/disk/by-label/Speicher' 2>&1' with exit code '32': mount: wrong fs type, bad option, bad superblock on /dev/mapper/speicher-Raidspeicher,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try
      dmesg | tail or so. in /usr/share/php/openmediavault/system/process.inc:182
      Stack trace:
      #0 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(720): OMV\System\Process->execute()
      #1 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(912): OMV\System\Filesystem\Filesystem->mount()
      #2 [internal function]: OMVRpcServiceFileSystemMgmt->mount(Array, Array)
      #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('mount', Array, Array)
      #5 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('FileSystemMgmt', 'mount', Array, Array, 1)
      #6 {main}
      Something went wrong with my backup; not all folders were copied.

      Is there still a way to restore the data?
      Many thanks for any help.

      Best regards
      Gelo
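      The array itself looks healthy in that output (active, [3/3] [UUU]); the mount failure is a layer higher, where a btrfs filesystem sits on an LVM logical volume on top of /dev/md0. A minimal sketch of how to inspect those layers (generic LVM commands, not from this thread; the volume group name 'speicher' is inferred from /dev/mapper/speicher-Raidspeicher):

      Source Code

      # Check the kernel log right after the failed mount, as the error message itself suggests
      dmesg | tail
      # Confirm the LVM physical volume on md0 and the logical volume are visible
      pvs
      lvs
      # Make sure the volume group is activated so /dev/mapper/speicher-Raidspeicher exists
      vgchange -ay speicher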
    • gelo wrote:

      Many thanks for any help.
      I've read this a couple of times, and I'm not sure I can help as I have not dealt with a raid set up the way yours is, but I can see the cause of the error.

      You have set up the Raid 5 with 3 disks, and you have set it up using LVM -> /dev/md0: UUID="B3qaTb-2Tbj-Lmny-jE54-ibhh-ypeB-0b4Gcf" TYPE="LVM2_member"

      The output of your mdadm.conf -> ARRAY /dev/md0 metadata=1.2 spares=1 name=zuhause:Raid UUID=90ac7b52:dad73430:43e23631:dd5d0bab shows there is a spare. ->?

      You might be better off running mdadm --detail /dev/md0; it might give more information.

      The error you are seeing, and the reason it will not mount, I think is this -> /dev/mapper/speicher-Raidspeicher: LABEL="Speicher" UUID="8b4ff320-223e-4aaf-821a-0792b4ec3378" UUID_SUB="6c587569-eade-4152-8683-814ea3dc4eae" TYPE="btrfs"

      That drive obviously has something to do with your Raid but it's formatted with btrfs hence this error -> mount -v --source '/dev/disk/by-label/Speicher' 2>&1' with exit code '32': mount: wrong fs type, bad option, bad superblock on /dev/mapper/speicher-Raidspeicher,

      As I said, I can 'see' the cause but have not had any experience with this setup; the above should give you a starting point.
      Raid is not a backup! Would you go skydiving without a parachute?
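      A minimal sketch of how one might look at that btrfs volume before attempting any repair (generic btrfs commands, not from this thread; both steps are read-only, and usebackuproot only helps when the current tree roots are damaged):

      Source Code

      # Read-only consistency check of the filesystem on the logical volume
      btrfs check --readonly /dev/mapper/speicher-Raidspeicher
      # If a normal mount fails, try a read-only mount using a backup tree root
      mount -o ro,usebackuproot /dev/mapper/speicher-Raidspeicher /mnt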