raid5 missing after power failure... but all physical disks are present...!

    • OMV 2.x
    • Resolved


    • raid5 missing after power failure... but all physical disks are present...!

      Hello,

      I have been using OMV for 6 years now, and it is a very stable NAS OS!

      But yesterday we had a power failure, and now the array no longer appears in RAID Management. So strange!

      All the disks are present, and of course the file system is missing too (see my screenshots for more detail).

      I don't want to make a mistake and lose my backup data.

      So do you have an idea how to fix this problem?

      Thank you very much
      Images
      • 2017-09-07_14h49_15.jpg
      • 2017-09-07_14h49_24.jpg
      • 2017-09-07_14h49_36.jpg
      • 2017-09-07_14h49_52.jpg
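      For reference, the usual first step in a case like this is to gather read-only diagnostics before touching the array; the next post shows the output of exactly these commands:

      Source Code

      # none of these commands modify anything, they only report the current state
      cat /proc/mdstat                    # kernel view of the md arrays
      blkid                               # RAID-member signatures on each disk
      fdisk -l | grep "Disk "             # disk sizes and identifiers
      cat /etc/mdadm/mdadm.conf           # arrays defined in the config file
      mdadm --detail --scan --verbose     # arrays as mdadm currently sees them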
    • Thanks.
      Here is the info:

      Source Code

      root@OMV-NAS:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md0 : inactive sdb[0] sdi[8] sdh[6] sdg[5] sdf[4] sde[3] sdd[1]
            13673684584 blocks super 1.2
      unused devices: <none>

      Source Code

      root@OMV-NAS:~# blkid
      /dev/sda1: UUID="b857a071-05e1-4376-8c3f-1fd7480bd4d5" TYPE="ext4"
      /dev/sda5: UUID="00a7bd9b-645f-4073-bd2e-d2589bd9e3e3" TYPE="swap"
      /dev/sdb: UUID="118b93c2-b4c2-e708-751d-95ca7a668fc9" UUID_SUB="d97c84fa-a9f0-d979-8012-2bbe4218b6f0" LABEL="OMV-NAS:RAID5" TYPE="linux_raid_member"
      /dev/sdc: UUID="118b93c2-b4c2-e708-751d-95ca7a668fc9" UUID_SUB="fff6cdbe-f915-5f9c-7718-d4b1e1e2e5a5" LABEL="OMV-NAS:RAID5" TYPE="linux_raid_member"
      /dev/sde: UUID="118b93c2-b4c2-e708-751d-95ca7a668fc9" UUID_SUB="f9cd04f1-34e8-8625-f74d-30b9b51573f0" LABEL="OMV-NAS:RAID5" TYPE="linux_raid_member"
      /dev/sdd: UUID="118b93c2-b4c2-e708-751d-95ca7a668fc9" UUID_SUB="f9adb76c-7c04-572b-9ec1-dc6bef2715a3" LABEL="OMV-NAS:RAID5" TYPE="linux_raid_member"
      /dev/sdf: UUID="118b93c2-b4c2-e708-751d-95ca7a668fc9" UUID_SUB="ab4a3ab4-ed67-ed77-0579-794d719699a8" LABEL="OMV-NAS:RAID5" TYPE="linux_raid_member"
      /dev/sdg: UUID="118b93c2-b4c2-e708-751d-95ca7a668fc9" UUID_SUB="5302d604-ad26-d6b5-2883-de91203e41a1" LABEL="OMV-NAS:RAID5" TYPE="linux_raid_member"
      /dev/sdh: UUID="118b93c2-b4c2-e708-751d-95ca7a668fc9" UUID_SUB="ffe43aa5-0976-999a-4448-b0a995ff26e2" LABEL="OMV-NAS:RAID5" TYPE="linux_raid_member"
      /dev/sdi: UUID="118b93c2-b4c2-e708-751d-95ca7a668fc9" UUID_SUB="8191886e-7dec-fbd3-c873-9457dc269265" LABEL="OMV-NAS:RAID5" TYPE="linux_raid_member"

      Source Code

      root@OMV-NAS:~# fdisk -l | grep "Disk "
      Disk /dev/sdi doesn't contain a valid partition table
      Disk /dev/sda: 80.0 GB, 80026361856 bytes
      Disk identifier: 0x0002872c
      Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x890f2ef3
      Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x890f2ef1
      Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x890f2f0f
      Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x890f2f0e
      Disk /dev/sdg: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x890f2f0d
      Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x890f2ef0
      Disk /dev/sdh: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x890f2f0c
      Disk /dev/sdi: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x00000000


      Source Code

      root@OMV-NAS:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 spares=1 name=OMV-NAS:RAID5 UUID=118b93c2:b4c2e708:751d95ca:7a668fc9
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR pdi@cire.be
      MAILFROM root


      Source Code

      root@OMV-NAS:~# mdadm --detail --scan --verbose
      ARRAY /dev/md0 level=raid5 num-devices=8 metadata=1.2 name=OMV-NAS:RAID5 UUID=118b93c2:b4c2e708:751d95ca:7a668fc9
         devices=/dev/sdb,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi


      I hope it'll help.

      Thank you.

      Regards.
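      The mdstat output above shows md0 as inactive with only seven of its eight members (sdc is not listed), even though blkid still sees a valid linux_raid_member signature on /dev/sdc. Judging from the later posts, the fix that brought the array back was to stop the inactive array and force-assemble it from the recognised members, roughly:

      Source Code

      # stop the partially-assembled, inactive array, then force-assemble it
      # from the seven members the kernel still recognises (sdc is dealt with later)
      mdadm --stop /dev/md0
      mdadm --assemble --force --verbose /dev/md0 /dev/sd[bihgfed]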
    • Thank you very very much.

      Now I can see the Raid 5.
      Just one last thing: the RAID 5 is degraded.
      So I want to add the missing disk, sdc, but I can't see it in the "Add hot spares / recover RAID device" window...!

      But I can see it here, in the "Physical disks" window.

      How can I force adding sdc to the RAID...? If you have an idea...

      Thank you.
      Regards.


      PS: and I'm a little afraid, because the "fdisk -l" command returns this strange result: "Disk /dev/sdi doesn't contain a valid partition table"

      Source Code

      /dev/sdh1 63 3907027119 1953513528+ 42 SFS
      Disk /dev/sdi: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdi doesn't contain a valid partition table
      Disk /dev/md0: 14001.8 GB, 14001848713216 bytes
      2 heads, 4 sectors/track, -876547200 cylinders, total 27347360768 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 524288 bytes / 3670016 bytes
      Disk identifier: 0x00000000
      Disk /dev/md0 doesn't contain a valid partition table
      Images
      • 2017-09-08_19h54_17.jpg
      • 2017-09-08_19h54_55.jpg
    • piet wrote:

      So I want to add the missing disk, sdc, but I can't see it in the "Add hot spares / recover RAID device" window...!
      You aren't ready to start using the web interface yet. So, don't do anything there.

      What is the output of: cat /proc/mdstat

      piet wrote:

      and I'm a little afraid because the "fdisk -l" command return this strange result : "Disk /dev/sdi doesn't contain a valid partition table"
      That is fine. The arrays that OMV creates don't use partitions and don't need a partition table.
    • Here is the result of the command:

      Regards.

      Source Code

      root@OMV-NAS:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md0 : active (auto-read-only) raid5 sdb[0] sdi[8] sdh[6] sdg[5] sdf[4] sde[3] sdd[1]
            13673680384 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/7] [UU_UUUUU]
      unused devices: <none>


      and I see it with the "fdisk -l" command:

      Source Code

      Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x890f2ef1
      Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1            63  3907027119  1953513528+  42  SFS
      Partition 1 does not start on physical sector boundary.


    • Thank you.

      I just did this, but the RAID 5 is still degraded and I can't see sdc in the "Add hot spares / recover RAID device" window.

      Maybe force-add the disk, or format the drive...?

      Regards.

      PS: or maybe add the "c" at the end of this command...

      Source Code

      mdadm --assemble --force --verbose /dev/md0 /dev/sd[bihgfed]
    • piet wrote:

      I can't see the sdc in the "Add hot spares / recover RAID device" window.
      Again, you really don't want to use the web interface yet.

      piet wrote:

      Hello, if someone has an idea to fix this, it'll be great.
      Patience... Reinstalling OMV wouldn't help this issue.

      piet wrote:

      or maybe add the "c" at the end of this command...
      Yes, add the 'c'.

      mdadm --stop /dev/md0
      mdadm --assemble --force --verbose /dev/md0 /dev/sd[bihgfedc]
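      If the forced assemble succeeds, a quick way to verify the result, using the same status commands that appear later in the thread, is:

      Source Code

      # [8/8] [UUUUUUUU] would mean all members are present;
      # [8/7] [UU_UUUUU] means one slot (slot 2, sdc) is still missing
      cat /proc/mdstat
      mdadm --detail /dev/md0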
    • Hello,

      thanks for the help.

      There is a problem with the first command, so I haven't run the second command yet.
      Maybe a restart of OMV...

      Source Code

      root@OMV-NAS:~# mdadm --stop /dev/md0
      mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?

      Source Code

      root@OMV-NAS:~# mount
      sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
      proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
      udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=488794,mode=755)
      devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
      tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=392756k,mode=755)
      /dev/disk/by-uuid/b857a071-05e1-4376-8c3f-1fd7480bd4d5 on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
      tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
      tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1427960k)
      /dev/md0 on /media/c34fbc0c-ed92-4f6e-b19c-a672ce926cb3 type ext4 (rw,noexec,relatime,user_xattr,barrier=1,stripe=256,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group,_netdev)
      /dev/md0 on /export/backup-vm type ext4 (rw,noexec,relatime,user_xattr,barrier=1,stripe=256,data=ordered,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
      rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
      nfsd on /proc/fs/nfsd type nfsd (rw,relatime)

      Source Code

      root@OMV-NAS:~# umount -f /dev/md0
      umount2: Device or resource busy
      umount: /export/backup-vm: device is busy.
              (In some cases useful info about processes that use
               the device is found by lsof(8) or fuser(1))
      umount2: Device or resource busy
      root@OMV-NAS:~# umount -f /media/c34fbc0c-ed92-4f6e-b19c-a672ce926cb3
      umount2: Invalid argument
      umount: /media/c34fbc0c-ed92-4f6e-b19c-a672ce926cb3: not mounted


      PS: restarting OMV didn't change anything.


      OK, with umount -l /dev/md0 it is unmounted, but I still get "mdadm: Cannot get exclusive access to /dev/md0: Perhaps a running process, mounted filesystem or active volume group?"

      So the status now:

      Source Code

      root@OMV-NAS:~# mdadm --detail /dev/md0
      /dev/md0:
      Version : 1.2
      Creation Time : Wed Sep 7 16:56:39 2016
      Raid Level : raid5
      Array Size : 13673680384 (13040.24 GiB 14001.85 GB)
      Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
      Raid Devices : 8
      Total Devices : 7
      Persistence : Superblock is persistent
      Update Time : Tue Sep 12 21:50:28 2017
      State : clean, degraded
      Active Devices : 7
      Working Devices : 7
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : OMV-NAS:RAID5 (local to host OMV-NAS)
      UUID : 118b93c2:b4c2e708:751d95ca:7a668fc9
      Events : 16335
      Number Major Minor RaidDevice State
      0 8 16 0 active sync /dev/sdb
      1 8 48 1 active sync /dev/sdd
      2 0 0 2 removed
      3 8 64 3 active sync /dev/sde
      4 8 80 4 active sync /dev/sdf
      5 8 96 5 active sync /dev/sdg
      6 8 112 6 active sync /dev/sdh
      8 8 128 7 active sync /dev/sdi
      Ah, I have read a lot of things, but do you have an idea how to move on?

      Regards.

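      For context: the "Cannot get exclusive access" error is expected here, because the mount output above shows /dev/md0 still mounted at /media/c34fbc0c-ed92-4f6e-b19c-a672ce926cb3 and exported over NFS at /export/backup-vm. Had the stop/force-assemble route really been needed, the device would first have to be freed, along these lines (the NFS service name is the standard Debian one and is assumed here, not taken from the thread):

      Source Code

      # stop whatever is using the filesystem, unmount it, then stop the array
      service nfs-kernel-server stop                        # NFS export seen in the mount output
      umount /export/backup-vm                              # the NFS export bind mount
      umount /media/c34fbc0c-ed92-4f6e-b19c-a672ce926cb3    # the ext4 filesystem on md0
      mdadm --stop /dev/md0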

    • And if I try the "assemble" command:

      Source Code

      root@OMV-NAS:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[bihgfedc]
      mdadm: looking for devices for /dev/md0
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdd is busy - skipping
      mdadm: /dev/sde is busy - skipping
      mdadm: /dev/sdf is busy - skipping
      mdadm: /dev/sdg is busy - skipping
      mdadm: /dev/sdh is busy - skipping
      mdadm: /dev/sdi is busy - skipping
      mdadm: /dev/md0 is already in use.
      root@OMV-NAS:~# mdadm --assemble --force --verbose /dev/md0 /dev/sdc
      mdadm: looking for devices for /dev/md0
      mdadm: /dev/md0 is already in use.
    • So the problem was /dev/sdc.

      After reading a lot of information, I ran:


      # mdadm --zero-superblock /dev/sdc
      # mdadm --manage /dev/md0 --add /dev/sdc


      So now the status is rebuilding.

      Source Code

      root@OMV-NAS:~# mdadm --detail /dev/md0
      /dev/md0:
      Version : 1.2
      Creation Time : Wed Sep 7 16:56:39 2016
      Raid Level : raid5
      Array Size : 13673680384 (13040.24 GiB 14001.85 GB)
      Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
      Raid Devices : 8
      Total Devices : 8
      Persistence : Superblock is persistent
      Update Time : Tue Sep 12 22:20:16 2017
      State : clean, degraded, recovering
      Active Devices : 7
      Working Devices : 8
      Failed Devices : 0
      Spare Devices : 1
      Layout : left-symmetric
      Chunk Size : 512K
      Rebuild Status : 1% complete
      Name : OMV-NAS:RAID5 (local to host OMV-NAS)
      UUID : 118b93c2:b4c2e708:751d95ca:7a668fc9
      Events : 16367
      Number Major Minor RaidDevice State
      0 8 16 0 active sync /dev/sdb
      1 8 48 1 active sync /dev/sdd
      9 8 32 2 spare rebuilding /dev/sdc
      3 8 64 3 active sync /dev/sde
      4 8 80 4 active sync /dev/sdf
      5 8 96 5 active sync /dev/sdg
      6 8 112 6 active sync /dev/sdh
      8 8 128 7 active sync /dev/sdi

      links
      ubuntuforums.org/showthread.php?t=884556
      askubuntu.com/questions/304672…moved-hard-drive-in-raid5
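      A cautionary note on --zero-superblock: it wipes the md metadata from the target disk, so it should only be run on the disk being re-added, and only after checking that it really is the stale member. A way to check first and then follow the rebuild (standard mdadm/watch commands, not taken from this thread):

      Source Code

      # inspect the existing superblock on the disk before wiping it;
      # the Array UUID should match the one in mdadm.conf / mdadm --detail
      mdadm --examine /dev/sdc

      # after the disk has been re-added, follow the rebuild progress
      watch cat /proc/mdstat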
    • And now the RAID 5 is OK.

      Source Code

      root@OMV-NAS:~# mdadm --detail /dev/md0
      /dev/md0:
      Version : 1.2
      Creation Time : Wed Sep 7 16:56:39 2016
      Raid Level : raid5
      Array Size : 13673680384 (13040.24 GiB 14001.85 GB)
      Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
      Raid Devices : 8
      Total Devices : 8
      Persistence : Superblock is persistent
      Update Time : Wed Sep 13 11:32:05 2017
      State : clean
      Active Devices : 8
      Working Devices : 8
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : OMV-NAS:RAID5 (local to host OMV-NAS)
      UUID : 118b93c2:b4c2e708:751d95ca:7a668fc9
      Events : 16922
      Number Major Minor RaidDevice State
      0 8 16 0 active sync /dev/sdb
      1 8 48 1 active sync /dev/sdd
      9 8 32 2 active sync /dev/sdc
      3 8 64 3 active sync /dev/sde
      4 8 80 4 active sync /dev/sdf
      5 8 96 5 active sync /dev/sdg
      6 8 112 6 active sync /dev/sdh
      8 8 128 7 active sync /dev/sdi