Existing RAID 1 Disk Not Detected

  • Hello,


    My OMV V2 server's OS drive died over the weekend, so I have now installed OMV V4. My old OMV server had two RAID arrays, a RAID 1 array and a RAID 5 array. Both were working fine and the S.M.A.R.T. status was green.


    Once the install completed, both RAID volumes showed up in the RAID Management section (/dev/md126 & /dev/md127). When I go to the File Systems section, however, OMV has only picked up the RAID 5 volume (/dev/md126, which I successfully mounted) and does not list the RAID 1 volume (/dev/md127).


    If I run cat /proc/mdstat, I get the following:


    Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
    md126 : active raid5 sde[1] sdf[2] sdd[0]
    1953262592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]


    md127 : active (auto-read-only) raid1 sdc[2] sdb[3]
    1953383360 blocks super 1.2 [2/2] [UU]


    unused devices: <none>


    Any tips on how I can get the system to see the RAID 1 volume in the File Systems section?

  • Thanks for the quick response :thumbup:


    OK, here are the results:


    1.
    Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
    md126 : active raid5 sdd[0] sdf[2] sde[1]
    1953262592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]


    md127 : active (auto-read-only) raid1 sdc[2] sdb[3]
    1953383360 blocks super 1.2 [2/2] [UU]


    unused devices: <none>


    2.
    /dev/sda1: UUID="9243c6a0-1dd7-45aa-b50a-463f05a4c8d0" TYPE="ext4" PARTUUID="6f1e632c-01"
    /dev/sda5: UUID="54c99ffc-66e7-4e51-8231-ea0b17561bf4" TYPE="swap" PARTUUID="6f1e632c-05"
    /dev/sdb: UUID="8a0b7b88-3c40-e913-5c44-4a7b18bfb17e" UUID_SUB="5c7c922e-afb6-afac-9658-23638f375eff" LABEL="TheVault:Data1" TYPE="linux_raid_member"
    /dev/sdc: UUID="8a0b7b88-3c40-e913-5c44-4a7b18bfb17e" UUID_SUB="9cd59c9d-f00f-ed82-b6d7-9626a07b7ee0" LABEL="TheVault:Data1" TYPE="linux_raid_member"
    /dev/sdd: UUID="4662c924-68c8-8bc2-ae2d-30a996b8441a" UUID_SUB="e9d4d097-9579-7080-ede4-5e1948321f19" LABEL="TheVault:Data5" TYPE="linux_raid_member"
    /dev/sde: UUID="4662c924-68c8-8bc2-ae2d-30a996b8441a" UUID_SUB="776f8945-af4a-32b0-7c18-4c585e48df3f" LABEL="TheVault:Data5" TYPE="linux_raid_member"
    /dev/sdf: UUID="4662c924-68c8-8bc2-ae2d-30a996b8441a" UUID_SUB="0296f82a-7b3c-c5a5-808d-56c60dbae199" LABEL="TheVault:Data5" TYPE="linux_raid_member"
    /dev/md126: LABEL="Media5" UUID="37ab6a88-e9c9-4e0b-86a1-cc85718c6c85" TYPE="ext4"


    3.
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md127 metadata=1.2 name=TheVault:Data1 UUID=8a0b7b88:3c40e913:5c444a7b:18bfb17e
    ARRAY /dev/md126 metadata=1.2 name=TheVault:Data5 UUID=4662c924:68c88bc2:ae2d30a9:96b8441a
    MAILADDR root


    4.
    ARRAY /dev/md127 level=raid1 num-devices=2 metadata=1.2 name=TheVault:Data1 UUID=8a0b7b88:3c40e913:5c444a7b:18bfb17e
    devices=/dev/sdb,/dev/sdc
    ARRAY /dev/md126 level=raid5 num-devices=3 metadata=1.2 name=TheVault:Data5 UUID=4662c924:68c88bc2:ae2d30a9:96b8441a
    devices=/dev/sdd,/dev/sde,/dev/sdf


    5.
    2 x 2TB Seagate Barracuda ST2000DM006


    6.
    offset type
    ----------------------------------------------------------------
    0x1d1b90bf000 zfs_member [filesystem]


    0x438 ext4 [filesystem]
    LABEL: Media1
    UUID: e6886cdd-7b87-493b-a912-d8a85a1c9cb0
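
    For reference, outputs 1-6 above look like the results of roughly the following commands; the exact invocations are an assumption inferred from the output formats (geaves confirms number 6 below).

    # Likely commands behind outputs 1-6 (inferred, not quoted from the thread)
    cat /proc/mdstat                    # 1. current array status
    blkid                               # 2. filesystem type and UUID per device
    cat /etc/mdadm/mdadm.conf           # 3. the mdadm configuration file
    mdadm --detail --scan --verbose     # 4. arrays as mdadm currently sees them
    # 5. appears to be the drive models, presumably supplied by hand
    wipefs --no-act /dev/md127          # 6. signatures present on the RAID 1 array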

    • Official Post

    I am guessing that 6 is the output from wipefs --no-act /dev/md127. If it is, can you do me a favour and post the full output, including the command, inside </> code tags? It makes it easier to read and confirms what you have posted.


    But it looks as if there is a zfs signature on there.
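
    Presumably the advice that followed (the next post acts on it) was to erase just that stray zfs signature at the offset reported in output 6, leaving the ext4 signature alone. A sketch:

    wipefs --offset 0x1d1b90bf000 /dev/md127   # erase only the signature at this offset
    wipefs --no-act /dev/md127                 # list what remains, without changing anything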

  • Hi Geaves,


    The full output is:


    [tt]
    root@TheVault:~# wipefs --no-act /dev/md127
    offset type
    ----------------------------------------------------------------
    0x1d1b90bf000 zfs_member [filesystem]


    0x438 ext4 [filesystem]
    LABEL: Media1
    UUID: e6886cdd-7b87-493b-a912-d8a85a1c9cb0


    root@TheVault:~#
    [/tt]

  • OK, when I ran wipefs --offset 0x1d1b90bf000 /dev/md127, it gave me the following:


    root@TheVault:~# wipefs --offset 0x1d1b90bf000 /dev/md127
    /dev/md127: 8 bytes were erased at offset 0x1d1b90bf000 (zfs_member): 0c b1 ba 00 00 00 00 00
    root@TheVault:~#


    Running wipefs --no-act /dev/md127 now gives me:



    root@TheVault:~# wipefs --no-act /dev/md127
    offset type
    ----------------------------------------------------------------
    0x1d1b90be000 zfs_member [filesystem]

    0x438 ext4 [filesystem]
    LABEL: Media1
    UUID: e6886cdd-7b87-493b-a912-d8a85a1c9cb0

    root@TheVault:~#

  • Righto, after a few iterations, I now have:


    wipefs --no-act /dev/md127
    offset type
    ----------------------------------------------------------------
    0x438 ext4 [filesystem]
      LABEL: Media1
      UUID: e6886cdd-7b87-493b-a912-d8a85a1c9cb0
    root@TheVault:~#
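
    Since zfs keeps more than one label copy near the end of the device, the signature had to be removed several times, each pass exposing the next one at a slightly different offset. A minimal sketch of that repetition as a shell loop (an illustration, not what was literally run):

    # Repeat until wipefs no longer reports a zfs_member signature on the array
    while wipefs --no-act /dev/md127 | grep -q zfs_member; do
        offset=$(wipefs --no-act /dev/md127 | awk '/zfs_member/ {print $1; exit}')
        wipefs --offset "$offset" /dev/md127
    done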

  • Hmm, looks promising.


    root@TheVault:~# cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
    md126 : active raid5 sde[1] sdf[2] sdd[0]
      1953262592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    md127 : active raid1 sdb[3] sdc[2]
     1953383360 blocks super 1.2 [2/2] [UU]
    unused devices: <none>
    root@TheVault:~#
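
    As a side note (not something that was needed here): an array that stays stuck in active (auto-read-only) normally switches to read-write on the first write, and it can also be switched manually:

    mdadm --readwrite /dev/md127    # clear the auto-read-only state on the array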

  • BINGO!! It is now showing up.


    You are a life saver!


    I am interested to understand what happened. Are you able to give me a rundown of what I just did, or point me at a website, so I can better understand the possible causes of this issue?

  • They were on my old OMV V2 install, which crashed (only the system drive). I bought a new drive and installed OMV V4.


    I was hoping the RAID arrays would automatically come back up, which was the case for the RAID 5 volume.

    • Official Post

    The zfs signature would suggest that the drives were previously used with zfs; zfs adds a signature at both the beginning and the end of a drive. When you prepare a drive for use you wipe it, and most users use the quick option, which until recently only removed the signature at the beginning of the drive. The drives could still be used with that zfs signature left at the end, hence you were able to set up your RAID in v2.
    Due to changes between v2 and v4, it detected the residual zfs signature and consequently started the array as active (auto-read-only). Once you removed the zfs signature, the RAID came back up as clean.
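
    A minimal sketch of how to avoid this with future disks, assuming the disk is being repurposed and does not currently belong to an array (/dev/sdX is a placeholder):

    wipefs --no-act /dev/sdX    # list every signature, including those at the end of the disk
    wipefs --all /dev/sdX       # destructive: erase all of them before reusing the disk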
