RAID5 : State : clean, degraded

    • RAID5 : State : clean, degraded

      Hi,

      I have a problem with my RAID5 array on OMV 0.5.60 (HP N54L / ESXi 5).


      Source Code

      Version : 1.2
      Creation Time : Sat Aug 30 14:58:34 2014
      Raid Level : raid5
      Array Size : 1950348288 (1860.00 GiB 1997.16 GB)
      Used Dev Size : 975174144 (930.00 GiB 998.58 GB)
      Raid Devices : 3
      Total Devices : 2
      Persistence : Superblock is persistent
      Update Time : Mon Mar 2 19:14:45 2015
      State : clean, degraded
      Active Devices : 2
      Working Devices : 2
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : OMVHSERVER25:data
      UUID : aaf6aaca:be2119a7:93438121:4db1540c
      Events : 55904

      Number   Major   Minor   RaidDevice   State
         0       8      16         0        active sync   /dev/sdb
         1       0       0         1        removed
         2       8      48         2        active sync   /dev/sdd
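
      Since device 1 shows as removed and /dev/sdc is absent from the member list, a first check is whether that disk still carries valid RAID metadata (a sketch; /dev/sdc being the dropped member is an assumption, since sdb and sdd are the two active ones):

      Source Code

      # Inspect the superblock of the disk presumed to have dropped out
      # (assumption: /dev/sdc, as sdb and sdd are the two active members)
      mdadm --examine /dev/sdc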


      So I tried to repair it:

      Source Code

      root@Rex:/# mdadm --misc --scan --detail /dev/md127
      /dev/md127:
      Version : 1.2
      Creation Time : Sat Aug 30 14:58:34 2014
      Raid Level : raid5
      Array Size : 1950348288 (1860.00 GiB 1997.16 GB)
      Used Dev Size : 975174144 (930.00 GiB 998.58 GB)
      Raid Devices : 3
      Total Devices : 2
      Persistence : Superblock is persistent
      Update Time : Mon Mar 2 18:44:33 2015
      State : clean, degraded
      Active Devices : 2
      Working Devices : 2
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : OMVHSERVER25:data
      UUID : aaf6aaca:be2119a7:93438121:4db1540c
      Events : 55216

      Number   Major   Minor   RaidDevice   State
         0       8      16         0        active sync   /dev/sdb
         1       0       0         1        removed
         2       8      48         2        active sync   /dev/sdd

      root@Rex:/# blkid
      /dev/sda1: UUID="bbf397f1-f164-45c4-9aef-afa6a2c71a74" TYPE="ext4"
      /dev/sda5: UUID="17ae30d3-8fae-4eb9-a0bf-275a9d5408db" TYPE="swap"
      /dev/sdb: UUID="aaf6aaca-be21-19a7-9343-81214db1540c" LABEL="OMVHSERVER25:data" TYPE="linux_raid_member"
      /dev/sdd: UUID="aaf6aaca-be21-19a7-9343-81214db1540c" LABEL="OMVHSERVER25:data" TYPE="linux_raid_member"
      /dev/sdc: UUID="aaf6aaca-be21-19a7-9343-81214db1540c" LABEL="OMVHSERVER25:data" TYPE="linux_raid_member"
      /dev/md127: LABEL="data" UUID="bc81081d-5761-4dc2-a141-47252e46da15" TYPE="ext4"

      root@Rex:/# mdadm --stop --force /dev/md127
      mdadm: failed to stop array /dev/md127: Device or resource busy
      Perhaps a running process, mounted filesystem or active volume group?
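
      I guess --stop fails because the ext4 filesystem on /dev/md127 is still mounted. From what I have read, stopping shouldn't be necessary anyway: the dropped disk can be added back to the running, degraded array, which should start a rebuild (a minimal sketch, again assuming /dev/sdc is the member that fell out):

      Source Code

      # Hot-add the dropped member to the running array; md then starts
      # a resync onto it (no unmount or --stop needed)
      mdadm --manage /dev/md127 --add /dev/sdc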


      Could you help me?

      Best regards,

      p.
    • Recovery is in progress after running:

      Source Code

      mdadm --incremental --run /dev/sdc
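
      As I understand it, --incremental matches the device to its array via the superblock UUID, and --run lets the array run even while incomplete. A quick way to confirm the member was accepted (standard mdadm, nothing OMV-specific):

      Source Code

      # "Rebuild Status" only appears in the output while a resync is running
      mdadm --detail /dev/md127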


      Source Code

      root@Rex:/# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid5 sdc[1] sdb[0] sdd[2]
            1950348288 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
            [>....................] recovery = 0.8% (8744448/975174144) finish=471.0min speed=34193K/sec
      unused devices: <none>
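
      Once the rebuild finishes, the array should report [3/3] [UUU] again. A simple way to follow the progress in the meantime:

      Source Code

      # Refresh the rebuild status every 10 seconds
      watch -n 10 cat /proc/mdstat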