RAID gone from Webinterface after Updates/Adding new drive

    • RAID gone from Webinterface after Updates/Adding new drive

      Hi all,

      Today I wanted to put a drive caddy and a new HDD into my OMV server. Before I did that, I installed all the pending updates through the web interface. Unfortunately, after rebooting the server, the RAID 5 was gone; the hard drives are still correctly recognized under "physical drives", though.

      I have one SSD where OMV is installed.
      My RAID was built from two WD Reds and one WD Green, 3 TB each. The new HDD is a 3 TB WD Red.

      Here is the information that I gathered:

      Source Code

      root@openmediavault:/dev# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      unused devices: <none>

      Source Code

      root@openmediavault:/dev# blkid
      /dev/sdb: UUID="ae425e46-0637-9bf6-2b61-e46547f1d65e" UUID_SUB="3d20bb2d-d43d-87a6-fab5-a87adb6eebce" LABEL="openmediavault:StorageRAID5" TYPE="linux_raid_member"
      /dev/sdc: UUID="ae425e46-0637-9bf6-2b61-e46547f1d65e" UUID_SUB="95471e2b-c84e-0e81-88b5-b28a0d38cc73" LABEL="openmediavault:StorageRAID5" TYPE="linux_raid_member"
      /dev/sdd: UUID="ae425e46-0637-9bf6-2b61-e46547f1d65e" UUID_SUB="35756774-f77e-e7a3-6f40-6d9de926ca9a" LABEL="openmediavault:StorageRAID5" TYPE="linux_raid_member"
      /dev/sda1: UUID="c59536d0-c7f0-4b5a-9874-701f2ac544b4" TYPE="ext4" PARTUUID="000921da-01"
      /dev/sda5: UUID="2ad89433-176e-46b4-a1bd-644c2218ccf3" TYPE="swap" PARTUUID="000921da-05"

      Source Code

      root@openmediavault:/dev# fdisk -l | grep "Disk"
      Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
      Disklabel type: dos
      Disk identifier: 0x000921da

      Source Code

      root@openmediavault:/dev# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 spares=0 name=openmediavault:StorageRAID5 UUID=ae425e46:06379bf6:2b61e465:47f1d65e
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR xxxxxxxxxx@gmail.com
      MAILFROM root

      Source Code

      root@openmediavault:/dev# mdadm --detail --scan --verbose
      root@openmediavault:/dev#

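      When `--detail --scan` returns nothing like this, it can help to inspect each member's superblock with `mdadm --examine` and compare event counters before attempting any forced assembly. A minimal sketch, using the device names from this thread (adjust for your system); it is wrapped in a function so nothing runs until you call it as root:

```shell
# Sketch: print the superblock fields that matter when deciding how to
# reassemble a broken md array. Device names are assumptions taken from this
# thread. Function only -- nothing touches the disks until you call it.
examine_members() {
  local disk
  for disk in "$@"; do
    echo "== $disk =="
    # "Events" shows how far behind a member is; "Array State" shows which
    # slots the member last saw as active.
    mdadm --examine "$disk" | grep -E 'Events|Array State|Update Time'
  done
}
# Usage (as root): examine_members /dev/sdb /dev/sdc /dev/sdd
```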
      Any help is greatly appreciated!

      Thanks!
    • Done that; here is the output:

      Source Code

      root@openmediavault:/dev# mdadm --assemble --verbose --force /dev/md0 /dev/sd[bdc]
      mdadm: looking for devices for /dev/md0
      mdadm: /dev/sdb is identified as a member of /dev/md0, slot 1.
      mdadm: /dev/sdc is identified as a member of /dev/md0, slot 0.
      mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.
      mdadm: Marking array /dev/md0 as 'clean'
      mdadm: added /dev/sdb to /dev/md0 as 1 (possibly out of date)
      mdadm: added /dev/sdd to /dev/md0 as 2
      mdadm: added /dev/sdc to /dev/md0 as 0
      mdadm: /dev/md0 has been started with 2 drives (out of 3).


      I can see the RAID again in the OMV web interface, which is good, but the message about sdb being "possibly out of date" doesn't look right. Is the RAID currently treating the sdb drive as failed?
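      One quick way to answer that question from the shell is the member map in /proc/mdstat: a degraded array shows something like "[3/2] [U_U]", where "_" marks the missing slot. A small sketch (the array name md0 is taken from this thread); function only, so it does nothing until called:

```shell
# Sketch: report whether an md array is degraded by looking for a "_" in the
# [U_U]-style member map of /proc/mdstat. "md0" is the array name from this
# thread. Function only -- call it yourself on the server.
check_degraded() {
  local array="${1:-md0}"
  if grep -A1 "^$array" /proc/mdstat | grep -Eo '\[[U_]+\]' | grep -q '_'; then
    echo "$array is degraded (at least one member slot is down)"
  else
    echo "$array looks complete"
  fi
}
# Usage: check_degraded md0
```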


      Source Code

      root@openmediavault:/dev# mdadm --detail /dev/md0
      /dev/md0:
        Version : 1.2
        Creation Time : Sat Jan 21 12:36:04 2017
        Raid Level : raid5
        Array Size : 5860270080 (5588.79 GiB 6000.92 GB)
        Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
        Raid Devices : 3
        Total Devices : 2
        Persistence : Superblock is persistent
        Update Time : Sun Feb 10 21:47:55 2019
        State : clean, degraded
        Active Devices : 2
        Working Devices : 2
        Failed Devices : 0
        Spare Devices : 0
        Layout : left-symmetric
        Chunk Size : 512K
        Name : openmediavault:StorageRAID5 (local to host openmediavault)
        UUID : ae425e46:06379bf6:2b61e465:47f1d65e
        Events : 18900
        Number  Major  Minor  RaidDevice  State
           0      8      32       0       active sync   /dev/sdc
           2      0       0       2       removed
           2      8      48       2       active sync   /dev/sdd


    • So you have two drives out of three; at least the data is still there. Do you have a backup? If you haven't, I suggest you make one before executing the following:

      mdadm --zero-superblock /dev/sdb

      mdadm --add /dev/md0 /dev/sdb

      BTW, you don't have to be in the /dev folder for any of the above; just cd back to root's home directory first.
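      Once the disk has been zeroed and re-added, the recovery can be followed in /proc/mdstat until it completes. A hedged sketch of that polling loop (array name md0 assumed from this thread); defined as a function so nothing runs until you invoke it on the server:

```shell
# Sketch: poll /proc/mdstat until the md recovery of the re-added member
# finishes. "md0" is the array name from this thread. Function only --
# call it yourself while the rebuild is running.
wait_for_rebuild() {
  local array="${1:-md0}"
  # The word "recovery" appears in /proc/mdstat while md rebuilds a member.
  while grep -A2 "^$array" /proc/mdstat | grep -q recovery; do
    grep -A2 "^$array" /proc/mdstat | grep recovery  # show progress line
    sleep 60
  done
  echo "$array: no rebuild in progress"
}
# Usage: wait_for_rebuild md0
```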
      Raid is not a backup! Would you go skydiving without a parachute?
    • teitan wrote:

      Will the superblock automatically be recovered after adding the disk back to the array?
      Yes.

      teitan wrote:

      What's the risk with running those commands?
      Technically none (but a backup covers my arse :) ) as it should restore the array back to 3 drives.

      teitan wrote:

      This is only affecting the disk sdb which is not in the array at the moment anyways right?
      Yes, because of the "(possibly out of date)" message when you did the assemble.

      I take it from your questions you don't have a backup :rolleyes:
    • You're right, I don't have a backup of this machine. I'm using OMV to back up my main machine and for Plex, so my movies are technically not backed up anywhere, but losing them wouldn't be the end of the world.

      I was also asking out of interest. I think I'm going to run the commands now and see what happens. Thanks a lot for your help so far :)


      EDIT:

      The RAID is now rebuilding. I hope everything will be fine when I grow it tomorrow with the new disk.
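      For the grow step planned here, the usual sequence is: add the new disk, raise the device count so mdadm reshapes onto it, then enlarge the filesystem once the reshape is done. A hedged sketch; /dev/sde as the new disk and ext4 on /dev/md0 are assumptions not confirmed in this thread, so verify both first. Again a function, so nothing runs until you call it as root after the current rebuild has finished:

```shell
# Sketch: grow a 3-disk RAID5 to 4 disks. /dev/sde (the new WD Red) and an
# ext4 filesystem on the array are ASSUMPTIONS -- check blkid/mount output
# before running. Function only; call it yourself as root.
grow_raid5() {
  local array="$1" newdisk="$2" devices="$3"
  mdadm --add "$array" "$newdisk"                  # new disk joins as a spare
  mdadm --grow "$array" --raid-devices="$devices"  # reshape onto all members
  # Wait for the reshape to finish (watch /proc/mdstat), then enlarge the
  # filesystem to use the new capacity:
  resize2fs "$array"                               # ext2/3/4 only; assumption
}
# Usage (as root): grow_raid5 /dev/md0 /dev/sde 4
```

      Note that the reshape itself can take many hours on 3 TB disks, and the array stays usable (though slower) while it runs.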

      Source Code

      root@openmediavault:/# mdadm --detail /dev/md0
      /dev/md0:
        Version : 1.2
        Creation Time : Sat Jan 21 12:36:04 2017
        Raid Level : raid5
        Array Size : 5860270080 (5588.79 GiB 6000.92 GB)
        Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
        Raid Devices : 3
        Total Devices : 3
        Persistence : Superblock is persistent
        Update Time : Sun Feb 10 22:31:33 2019
        State : clean, degraded, recovering
        Active Devices : 2
        Working Devices : 3
        Failed Devices : 0
        Spare Devices : 1
        Layout : left-symmetric
        Chunk Size : 512K
        Rebuild Status : 0% complete
        Name : openmediavault:StorageRAID5 (local to host openmediavault)
        UUID : ae425e46:06379bf6:2b61e465:47f1d65e
        Events : 18916
        Number  Major  Minor  RaidDevice  State
           0      8      32       0       active sync        /dev/sdc
           3      8      16       1       spare rebuilding   /dev/sdb
           2      8      48       2       active sync        /dev/sdd