How to mount a single active disk from a failed RAID1

  • Hey guys, one of my RAID1 1TB disks failed and I lost the ability to access the files on the working drive. I don't plan to replace the broken disk because I want to move the files from the working drive to my RAID5 setup, but I simply cannot access the files in this "clean, degraded" state. I looked through the folders under /media/ but found nothing, and my SMB share won't access it either. I figured I'd remove the RAID under Raid Management and mount the disk as a single drive; since the two disks were mirrored, it should have all the files. The problem now is that the drive shows up neither under "File system" nor under "Raid Management". How do I get OMV to see the drive again? Under "File system" it only shows "missing" besides my RAID5 array.
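
    For anyone hitting the same wall, these shell commands show whether the kernel still knows about the array (a sketch; replace sda with the device from your own blkid output):

    Code
    # list the arrays the kernel is currently running
    cat /proc/mdstat
    # inspect the raid superblock on the surviving disk
    mdadm --examine /dev/sda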


    Edit: Updated outputs posted in reply

  • Thank you for the help, subzero. I'll give you fresh outputs, including the ones you requested, because after a restart my drives reshuffled for some reason. My raid arrays are simply named Raid1 (failed) and Raid3.
    Raid1 consists of sda & sde, with the outputs below.


    The GUI shows:
    Physical disks: both drives present for Raid1
    Raid Management: Raid1 is not here, and when I pick Create there are no disks to choose from
    File system: N/A missing



    blkid

    Code
    /dev/sdb: UUID="67b71032-b48c-50a6-dc81-7b2eefd4e939" UUID_SUB="2a18be27-9a95-6828-83c2-5ee01a1539bc" LABEL="nas2:raid3" TYPE="linux_raid_member"
    /dev/md0: LABEL="RaidArray3" UUID="84808b75-2b74-405f-b7e2-77c33f918a04" TYPE="ext4"
    /dev/sdc: UUID="67b71032-b48c-50a6-dc81-7b2eefd4e939" UUID_SUB="75c609cb-c646-324f-0f73-e30630d14260" LABEL="nas2:raid3" TYPE="linux_raid_member"
    /dev/sdf1: UUID="275e2463-bb97-472c-a0e8-6962e0538355" TYPE="ext4"
    /dev/sdf5: UUID="e45fde5b-7be1-45cf-a124-02e16fda8d82" TYPE="swap"
    /dev/sda: UUID="bc267fc1-4a05-c856-c378-2278946ff688" UUID_SUB="e979f3a4-81a0-ca08-e124-066261c6d3ff" LABEL="nas2:raid1" TYPE="linux_raid_member"
    /dev/sdd: UUID="67b71032-b48c-50a6-dc81-7b2eefd4e939" UUID_SUB="6a783c49-41b1-00c1-9287-d6b9f8e05019" LABEL="nas2:raid3" TYPE="linux_raid_member"


    cat /etc/mdadm/mdadm.conf


    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdb[0] sdd[3] sdc[1]
          5860530176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    
    
    unused devices: <none>


    fdisk -l


    cat /etc/fstab

    • Official post

    Just to confirm... your Raid1 array (the name) is actually a RAID1 array? I mean, a mirror?


    The OMV UI doesn't allow building a degraded array. However, you can use the mdadm tools to re-build your Raid1 as a degraded array; see the sketch below.
    You can construct a mirror with one disk missing (sda available, sde missing). After that, OMV should be able to see and mount the array, and from there you can move your files to md0.
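
    Something along these lines should do it (a sketch, assuming sda is the surviving member and md1 is a free device name; adjust to your system):

    Code
    # non-destructive first attempt: --run starts the array even though a member is missing
    mdadm --assemble --run /dev/md1 /dev/sda
    # if the assemble refuses, re-create the mirror with the second slot left empty
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda missing
    # confirm the degraded mirror is running
    cat /proc/mdstat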


    I am pretty sure I saw sde in your initial post, did you pull it off?

  • You are correct, dumb naming, but it works :). The Raid1 mirror was 1TB and consisted of sda (a 1TB drive) & sde (a 1.5TB drive). It wasn't sde initially; that is why I pulled it from the post after realizing the letter changed. I didn't want the outdated information to confuse anyone.


    That is good advice; I didn't know the UI couldn't help me when a drive fails in a mirror raid. I'll go ahead and read up on the mdadm tools since I'm unfamiliar with them, and report back shortly. Thank you.

  • Wow, it worked. I read up on what you said about creating a degraded Raid1 array and stumbled upon this article:

    Quote


    For future reference: anyone who has this problem can simply re-create the raid array. I was worried this would overwrite my data, since the command warned there was already an existing array on the drive, but poof, it worked.


    Quote

    mdadm --create /dev/md1 -l 1 -n 2 /dev/sda missing


    md1 is the device name of the new array (I already had md0).
    sda is the single surviving drive I needed to access.
    -l 1 sets the RAID level to 1 (mirror), -n 2 declares two slots, and "missing" leaves the second slot empty, so the array comes up degraded without touching another disk.
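
    For anyone following along, you can double-check before moving data, and mount the array by hand if the OMV UI doesn't list the filesystem yet (a sketch; the mount point name is just an example):

    Code
    # should report Raid Level : raid1 and State : clean, degraded
    mdadm --detail /dev/md1
    # mount manually and verify the files are intact
    mkdir -p /mnt/raid1-rescue
    mount /dev/md1 /mnt/raid1-rescue
    ls /mnt/raid1-rescue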


    Thank you for the great help!
