How to mount single active disk from a failed Raid1

    • OMV 0.5
    • Resolved


    • How to mount single active disk from a failed Raid1

      Hey guys, one of my Raid1 1TB disks failed and I lost the ability to access the files on the working drive. I don't plan to replace the broken disk because I intend to move the files from the working drive to my Raid5 setup. I simply cannot access the files in this "clean, degraded" state. I looked through the folders under /media/ but found nothing, and my SMB share won't access it either. I figured I'd remove the Raid under Raid Management and mount the disk as a single drive; since the two disks were mirrored it should have all the files. The problem now is that the drive shows up under neither "File systems" nor "Raid Management". How do I get OMV to see the drive again? Under File systems it only shows "missing" next to my Raid5 array.

      Edit: Updated outputs posted in reply

      The post was edited 5 times, last by vulcan4d ().

      Can you post the output of blkid and also cat /etc/mdadm/mdadm.conf?
      Was your raid1 array composed of sdd and sde?
      With the info you provided from /proc/mdstat I can't even see the degraded array. Even as a [1/2] degraded array you should be able to see it and mount it.
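
      Something along these lines should show whether the kernel and mdadm still see a superblock on the surviving disk (just a sketch; the device letters come from your earlier output and may have changed after a reboot):

      Source Code

      cat /proc/mdstat                    # does the kernel see the raid1 at all?
      blkid                               # which disks still carry a linux_raid_member signature?
      cat /etc/mdadm/mdadm.conf           # is the raid1 array defined here?
      mdadm --examine /dev/sdd /dev/sde   # dump the md superblock of the suspected members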
      chat support at #openmediavault@freenode IRC | Spanish & English | GMT+10
      telegram.me/openmediavault broadcast channel
      openmediavault discord server
      Thank you for the help, subzero. I'll give you fresh outputs, including the ones you requested, because after a restart my drives reshuffled for some reason. My raid arrays are simply named Raid1 (the failed one) and Raid3.
      Raid1 consists of sda & sde, with the output below.

      The GUI shows:
      Physical disks: both drives present for Raid1
      Raid Management: Raid1 is not here, and when I pick Create there are no disks to choose from
      File system: N/A missing


      blkid

      Source Code

      /dev/sdb: UUID="67b71032-b48c-50a6-dc81-7b2eefd4e939" UUID_SUB="2a18be27-9a95-6828-83c2-5ee01a1539bc" LABEL="nas2:raid3" TYPE="linux_raid_member"
      /dev/md0: LABEL="RaidArray3" UUID="84808b75-2b74-405f-b7e2-77c33f918a04" TYPE="ext4"
      /dev/sdc: UUID="67b71032-b48c-50a6-dc81-7b2eefd4e939" UUID_SUB="75c609cb-c646-324f-0f73-e30630d14260" LABEL="nas2:raid3" TYPE="linux_raid_member"
      /dev/sdf1: UUID="275e2463-bb97-472c-a0e8-6962e0538355" TYPE="ext4"
      /dev/sdf5: UUID="e45fde5b-7be1-45cf-a124-02e16fda8d82" TYPE="swap"
      /dev/sda: UUID="bc267fc1-4a05-c856-c378-2278946ff688" UUID_SUB="e979f3a4-81a0-ca08-e124-066261c6d3ff" LABEL="nas2:raid1" TYPE="linux_raid_member"
      /dev/sdd: UUID="67b71032-b48c-50a6-dc81-7b2eefd4e939" UUID_SUB="6a783c49-41b1-00c1-9287-d6b9f8e05019" LABEL="nas2:raid3" TYPE="linux_raid_member"


      cat /etc/mdadm/mdadm.conf

      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=nas2:raid3 UUID=67b71032:b48c50a6:dc817b2e:efd4e939


      cat /proc/mdstat

      Source Code

      Personalities : [raid6] [raid5] [raid4]
      md0 : active raid5 sdb[0] sdd[3] sdc[1]
            5860530176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

      unused devices: <none>


      fdisk -l

      Source Code

      Disk /dev/sda: 1000.2 GB, 1000203804160 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953523055 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sda doesn't contain a valid partition table

      Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdd doesn't contain a valid partition table

      Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sde doesn't contain a valid partition table

      Disk /dev/sdf: 60.0 GB, 60011642880 bytes
      255 heads, 63 sectors/track, 7296 cylinders, total 117210240 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x8be41c7d

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdf1   *        2048    37380095    18689024   83  Linux
      /dev/sdf2        37380096   117210239    39915072    5  Extended
      /dev/sdf5        37382144   117209087    39913472   82  Linux swap / Solaris

      Disk /dev/md0: 6001.2 GB, 6001182900224 bytes
      2 heads, 4 sectors/track, 1465132544 cylinders, total 11721060352 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
      Disk identifier: 0x00000000

      Disk /dev/md0 doesn't contain a valid partition table


      cat /etc/fstab

      Source Code

      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc defaults 0 0
      # / was on /dev/sda1 during installation
      UUID=275e2463-bb97-472c-a0e8-6962e0538355 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=e45fde5b-7be1-45cf-a124-02e16fda8d82 none swap sw 0 0
      /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      UUID=84808b75-2b74-405f-b7e2-77c33f918a04 /media/84808b75-2b74-405f-b7e2-77c33f918a04 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
      UUID=9618ef3a-743a-4fd9-be3d-fd4843868a37 /media/9618ef3a-743a-4fd9-be3d-fd4843868a37 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
      UUID=0b0e0388-79e1-463d-8b09-a0a084babe12 /media/0b0e0388-79e1-463d-8b09-a0a084babe12 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
      # <<< [openmediavault]
    • Just to confirm... your raid1 array (the name) is actually a RAID1 array, i.e. a mirror?

      The OMV UI doesn't allow building a degraded array, but you can use the mdadm tools to rebuild your raid1 as a degraded array.
      You can construct the mirror with one disk missing (sda available, sde missing); after that OMV should be able to see and mount the array, and from there you can move your files to md0.
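      Roughly like this (just a sketch; double-check the device letter first, and note that assembling the existing array is non-destructive while --create writes a new superblock):

      Source Code

      # try assembling the existing mirror from the single surviving member first
      mdadm --assemble --run /dev/md1 /dev/sda

      # if that fails because the superblock is unusable, re-create the mirror
      # in degraded mode, leaving the second slot as "missing"
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda missing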

      I am pretty sure I saw sde in your initial post; did you remove it?
      chat support at #openmediavault@freenode IRC | Spanish & English | GMT+10
      telegram.me/openmediavault broadcast channel
      openmediavault discord server
    • You are correct, dumb naming but it works :). The Raid1 mirror was 1TB and consisted of sda (a 1TB drive) & sde (a 1.5TB drive). It wasn't sde initially; that is why I pulled the earlier output after realizing it had changed, so the updated information wouldn't confuse anyone.

      That is good advice; I didn't know the UI couldn't help me if a drive failed in a mirrored raid. I'll go ahead and read up on the mdadm tools since I am unfamiliar with them. I'll report back shortly. Thank you.
    • Wow, it worked. I read up on what you said about creating a degraded Raid1 array and I stumbled upon this article:


      For future reference, anyone who has this problem can simply re-create the raid array. I was worried this would overwrite my data, because the command warned there was already an existing array on the drive, but poof, it worked:

      mdadm --create /dev/md1 -l 1 -n 2 /dev/sda missing

      md1 is the name of the new array (I already had md0).
      sda is the single drive I needed to access.
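
      In case it helps anyone else, the rest was just mounting the degraded mirror and copying the data over to the Raid5 array. Roughly like this (a sketch; the mount point and destination folder are only examples, and OMV's File systems page can do the mount instead):

      Source Code

      mdadm --detail /dev/md1        # confirm the mirror is up, degraded with 1 of 2 devices
      mkdir -p /mnt/oldraid1         # example mount point
      mount /dev/md1 /mnt/oldraid1   # the old ext4 filesystem is still there
      rsync -a /mnt/oldraid1/ /media/84808b75-2b74-405f-b7e2-77c33f918a04/restore/   # example destination on the Raid5 array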

      Thank you for the great help!
    • How to mount single active disk from a failed Raid1

      Also take a look at the backup plugin for offline system operations on disks and raid arrays. For training in mdadm operations I would recommend the OMV virtual appliance for VirtualBox. It comes ready with 9 disks to play around with: create, destroy, rebuild, whatever you feel like.
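
      If you want a throwaway setup on the machine you already have, a couple of loop devices over sparse files also works for practice (just a sketch, nothing OMV-specific; assumes loop0 and loop1 are free):

      Source Code

      truncate -s 1G disk0.img disk1.img            # two sparse backing files
      losetup /dev/loop0 disk0.img                  # attach them to free loop devices
      losetup /dev/loop1 disk1.img
      mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
      mdadm /dev/md9 --fail /dev/loop1              # simulate a dead disk
      mdadm /dev/md9 --remove /dev/loop1
      mdadm --stop /dev/md9                         # tear down when finished
      losetup -d /dev/loop0
      losetup -d /dev/loop1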
      chat support at #openmediavault@freenode IRC | Spanish & English | GMT+10
      telegram.me/openmediavault broadcast channel
      openmediavault discord server