RAID1 degraded, drive is accessible but appears as removed

    • OMV 4.x
    • Resolved


    • RAID1 degraded, drive is accessible but appears as removed

      Hello everyone,
      I have a RAID 1 set up which ran fine for a few months. Then I suddenly received a notification that the state of the array is degraded. Here is the output that I think is relevant.

      OMV is installed on a USB stick; the RAID 1 consists of two disks.

      Source Code

      root@openmediavault:~# blkid
      /dev/sda: UUID="048b0b5e-91bc-492d-09ca-49e6cb4bd491" UUID_SUB="1e4153ae-28a7-dfbc-8db0-44e28153bd97" LABEL="openmediavault:RAID1" TYPE="linux_raid_member"
      /dev/md127: LABEL="FS1" UUID="cd495a14-bd07-4478-a512-87b540a6fbda" TYPE="ext4"
      /dev/sdb: UUID="048b0b5e-91bc-492d-09ca-49e6cb4bd491" UUID_SUB="fcbbf020-e39c-322b-cc97-bac6d351f1af" LABEL="openmediavault:RAID1" TYPE="linux_raid_member"
      /dev/sdc1: UUID="0aad0df6-e507-4772-b0f1-05a5ca989740" TYPE="ext4" PARTUUID="ae49762b-01"
      /dev/sdc5: UUID="c7973780-fbfc-4890-a9ff-ead879d91e16" TYPE="swap" PARTUUID="ae49762b-05"





      Source Code

      root@openmediavault:~# cat /proc/mdstat
      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid1 sda[0]
            1953383488 blocks super 1.2 [2/1] [U_]
            bitmap: 6/15 pages [24KB], 65536KB chunk

      unused devices: <none>


      This tells me the RAID 1 currently consists of only one disk ([2/1] [U_]: two slots in the array, but only one active member).


      Mounted devices:

      Source Code

      root@openmediavault:~# cat /proc/mounts
      sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
      proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
      udev /dev devtmpfs rw,nosuid,relatime,size=1993032k,nr_inodes=498258,mode=755 0 0
      devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
      tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=402428k,mode=755 0 0
      /dev/sdc1 / ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
      tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
      tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
      tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
      cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
      pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
      cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
      cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
      cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
      cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
      cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
      cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
      cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
      cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
      cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
      systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=41,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=11842 0 0
      debugfs /sys/kernel/debug debugfs rw,relatime 0 0
      hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
      mqueue /dev/mqueue mqueue rw,relatime 0 0
      sunrpc /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
      nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
      tmpfs /tmp tmpfs rw,relatime 0 0
      /dev/md127 /srv/dev-disk-by-label-FS1 ext4 rw,noexec,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0
      ...snipp
      ...snapp
      /dev/sdc1 /var/folder2ram/var/log ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/log tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      /dev/sdc1 /var/folder2ram/var/tmp ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/tmp tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      /dev/sdc1 /var/folder2ram/var/lib/openmediavault/rrd ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/lib/openmediavault/rrd tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      /dev/sdc1 /var/folder2ram/var/spool ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/spool tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      /dev/sdc1 /var/folder2ram/var/lib/rrdcached ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/lib/rrdcached tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      /dev/sdc1 /var/folder2ram/var/lib/monit ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/lib/monit tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      /dev/sdc1 /var/folder2ram/var/lib/php ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/lib/php tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      /dev/sdc1 /var/folder2ram/var/lib/netatalk/CNID ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/lib/netatalk/CNID tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      /dev/sdc1 /var/folder2ram/var/cache/samba ext4 rw,noatime,nodiratime,errors=remount-ro 0 0
      folder2ram /var/cache/samba tmpfs rw,nosuid,nodev,noexec,relatime 0 0
      The array is mounted and accessible.

      Details of the array:

      Source Code

      root@openmediavault:~# mdadm --detail /dev/md127
      /dev/md127:
      Version : 1.2
      Creation Time : Wed Jul 4 20:44:09 2018
      Raid Level : raid1
      Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
      Total Devices : 1
      Persistence : Superblock is persistent
      Intent Bitmap : Internal
      Update Time : Thu Apr 4 20:32:29 2019
      State : clean, degraded
      Active Devices : 1
      Working Devices : 1
      Failed Devices : 0
      Spare Devices : 0
      Name : openmediavault:RAID1 (local to host openmediavault)
      UUID : 048b0b5e:91bc492d:09ca49e6:cb4bd491
      Events : 4281

      Number   Major   Minor   RaidDevice   State
           0       8       0            0   active sync   /dev/sda
           -       0       0            1   removed
      There should be two drives, but one appears as removed.

      First drive...

      Source Code

      root@openmediavault:~# mdadm --examine /dev/sda
      /dev/sda:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x1
      Array UUID : 048b0b5e:91bc492d:09ca49e6:cb4bd491
      Name : openmediavault:RAID1 (local to host openmediavault)
      Creation Time : Wed Jul 4 20:44:09 2018
      Raid Level : raid1
      Raid Devices : 2
      Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
      Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 3906766976 (1862.89 GiB 2000.26 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=48 sectors
      State : clean
      Device UUID : 1e4153ae:28a7dfbc:8db044e2:8153bd97
      Internal Bitmap : 8 sectors from superblock
      Update Time : Thu Apr 4 20:32:29 2019
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : ec2528a9 - correct
      Events : 4281
      Device Role : Active device 0
      Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
      ... is active, and the second drive ...

      Source Code

      root@openmediavault:~# mdadm --examine /dev/sdb
      /dev/sdb:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x1
      Array UUID : 048b0b5e:91bc492d:09ca49e6:cb4bd491
      Name : openmediavault:RAID1 (local to host openmediavault)
      Creation Time : Wed Jul 4 20:44:09 2018
      Raid Level : raid1
      Raid Devices : 2
      Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
      Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 3906766976 (1862.89 GiB 2000.26 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=48 sectors
      State : active
      Device UUID : fcbbf020:e39c322b:cc97bac6:d351f1af
      Internal Bitmap : 8 sectors from superblock
      Update Time : Thu Mar 28 19:24:41 2019
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : c9b8403e - correct
      Events : 3705
      Device Role : Active device 1
      Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
      ... also reports State: active in its superblock, but with an older Update Time (Mar 28 vs. Apr 4) and a lower event count (3705 vs. 4281), so it is the member that fell out of the array.
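
      A quick way to compare the relevant superblock fields of both members side by side is to filter the --examine output; this only reads the metadata and does not change anything on the array:

      Source Code

      root@openmediavault:~# mdadm --examine /dev/sd[ab] | grep -E 'Update Time|Events|Array State'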


      Source Code

      root@openmediavault:~# mdadm --detail --scan --verbose
      ARRAY /dev/md/openmediavault:RAID1 level=raid1 num-devices=2 metadata=1.2 name=openmediavault:RAID1 UUID=048b0b5e:91bc492d:09ca49e6:cb4bd491
         devices=/dev/sda

      mdadm.conf:

      Source Code

      root@openmediavault:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/openmediavault:RAID1 metadata=1.2 name=openmediavault:RAID1 UUID=048b0b5e:91bc492d:09ca49e6:cb4bd491
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR whoever@whatever.de
      MAILFROM root


      My question now: how can I restore the array? The SMART parameters of both disks look good, I can access both disks and see the same content, and both drives appear in the UEFI and are seen by fdisk. I assume there was possibly an issue with the cable/power of the disks. But again, I do not have any experience with a faulty OMV array and don't want to mess anything up.

      When I check the GUI, it only "sees" one drive under RAID Management, even though two disks are attached, and I cannot add the missing drive via the GUI.

      Your input is appreciated :)


    • Problem solved. The disk was not part of the array, for whatever reason; the error was reported a few hours after running an update.

      I re-added the device with

      Source Code

      root@openmediavault:~# mdadm --manage /dev/md127 -a /dev/sdb
      mdadm: re-added /dev/sdb
      Now I can see it is rebuilding:

      Source Code

      root@openmediavault:~# cat /proc/mdstat
      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid1 sdb[1] sda[0]
            1953383488 blocks super 1.2 [2/1] [U_]
            [===>.................] recovery = 19.9% (390130816/1953383488) finish=500.1min speed=52096K/sec
            bitmap: 5/15 pages [20KB], 65536KB chunk

      unused devices: <none>
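
      At that speed the rebuild will take several hours (finish=500.1min). To keep an eye on it, the usual approach is to watch /proc/mdstat, or to let mdadm block until the recovery has finished and then check the detail output again; these are standard commands, nothing OMV-specific:

      Source Code

      # live view of the recovery progress (Ctrl+C to exit)
      root@openmediavault:~# watch cat /proc/mdstat
      # or block until the resync/recovery has finished, then verify the result
      root@openmediavault:~# mdadm --wait /dev/md127
      root@openmediavault:~# mdadm --detail /dev/md127

      Once the recovery completes, /proc/mdstat should show [2/2] [UU] and mdadm --detail should report the array as clean with two active devices.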