Disk bad. File system MISSING - How to proceed to recover RAID 5 Array/File System?

  • So one of my drives started clicking, and a visit to the command line for OMV 4 showed errors such as DRDY SENSE ERROR, failed command: READ DMA EXT, print_req_error: I/O error, dev sdd, sector 7814036992, Buffer I/O error on dev sdd, logical block 976754624, async page read, etc. A visit to the OMV GUI indicated that Drive sdd has been removed and the File System is now missing.


    I know I have to replace drive sdd. So after I shut down the system and replace it, how do I recover the array? In the past, when a disk failed, my file system continued in degraded mode. In this case, the array is not mounted and the file system is missing.


    I consider myself a Linux and OMV novice, but I can follow instructions and realize at this point I need help.


    I have attached a photo of the command line traffic and a couple of print-to-pdf screen captures of the OMV GUI.


    Anyone?


    Thanks

    • Official Post

    A visit to the OMV GUI indicated that Drive sdd has been removed and the File System is now missing.

    Then there may be more than just a missing drive, as the array should be in a clean/degraded state. What's the output of cat /proc/mdstat? Post the output in a code box using </> on the menu.
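
    For reference, these are read-only commands to gather the state being asked about here (all three appear later in this thread, and none of them modify the array):

    Code
    cat /proc/mdstat              # every md array the kernel knows about, with members and sync status
    mdadm --detail /dev/md127     # detailed state of the array in question (md127, per the later posts)
    blkid                         # block devices and which ones carry the linux_raid_member signature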

    • Official Post

    I don't have PuTTY installed on the computer I am using to access the OMV array.

    You can install the openmediavault-wetty plugin; it will do the same.

  • I hesitate to take the next step without advice. I assume I need to add the new disk that replaced the failed "sdd" and then force the "md127" array to rebuild. Can you confirm the next steps? Detail of the MISSING array follows:


    • Official Post

    You can install the openmediavault-wetty plugin; it will do the same

    You can also use the W10 command prompt, which is what I do.
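
    Recent Windows 10 builds ship an OpenSSH client, so from cmd or PowerShell something like the following works (the host name is illustrative, taken from the shell prompt shown further down):

    Code
    ssh root@BasementNAS    # built-in W10 OpenSSH client; substitute the NAS's IP address or host name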

    I need to add the new disk that replaced the failed "sdd" and then force the "md127" array to rebuild

    According to the two outputs there are four drives in that array, so:


    mdadm --stop /dev/md127


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[abce]


    The above should reassemble the array; you can check the state by running cat /proc/mdstat. Once the array has finished, post the output of mdadm --detail /dev/md127 and cat /proc/mdstat.
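
    Put together as one hedged sequence (the drive letters sd[abce] come from the command above; double-check them against blkid before running anything):

    Code
    mdadm --stop /dev/md127                                        # stop the inactive array so it can be reassembled
    mdadm --assemble --force --verbose /dev/md127 /dev/sd[abce]    # force-assemble from the four surviving members
    cat /proc/mdstat                                                # check the assembly/resync state
    mdadm --detail /dev/md127                                       # per-member detail once it is assembled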

  • Geaves,



    Thanks for your response.


    The issue is that there were originally five drives in the array. OMV is on a sixth drive. When drive "sdd" failed, it was removed by the OMV OS and the array of five drives (i.e., md127) went MISSING. I was expecting the file system to remain in a degraded state when "sdd" failed.


    Should I assemble the broken array without drive "sdd" as listed above, or should I add "sdd" to the assembly instructions

    (i.e., mdadm --assemble --force --verbose /dev/md127 /dev/sd[abcde])?


    I don't want to destroy the data when I reassemble the array.


    Respectfully,


    fotafm :/

    • Official Post

    it was removed by the OMV OS and the array of five drives (i.e., md127) went MISSING.

    OMV doesn't fail or remove anything; RAID arrays are set up and maintained using mdadm. What might have happened is a hardware failure.

    I was expecting the file system to remain in a degraded state when "sdd" failed.

    That is expected only if mdadm had failed the drive.

    Should I assemble the broken array without drive "sdd" as listed above, or should I add "sdd" to the assembly instructions

    Yes, assemble it without "sdd"; /dev/sdd is not showing in any of your outputs.

    I don't want to destroy the data when I reassemble the array.

    :) that alone suggests you don't have a backup
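
    As a hedged sketch of the step that follows a successful forced assembly: the replacement disk is added to the degraded array so the RAID 5 can rebuild onto it (this assumes the new drive comes up as /dev/sdd again and carries no old partitions or signatures):

    Code
    mdadm --manage /dev/md127 --add /dev/sdd    # add the replacement drive; mdadm starts the rebuild
    cat /proc/mdstat                            # follow the recovery progress until the array is clean again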

  • Geaves,



    When following your instructions I received:


  • Code
    root@BasementNAS:~# blkid
    /dev/sda: UUID="61598b79-6d66-6f4c-03d9-b5eefec09b0e" UUID_SUB="aeeacb67-8662-1df1-3de6-e6f06aa78657" LABEL="BasmentNAS:Array1" TYPE="linux_raid_member"
    /dev/sdc: UUID="61598b79-6d66-6f4c-03d9-b5eefec09b0e" UUID_SUB="39145358-8976-4eb4-1751-428a5fac8421" LABEL="BasmentNAS:Array1" TYPE="linux_raid_member"
    /dev/sdd: UUID="61598b79-6d66-6f4c-03d9-b5eefec09b0e" UUID_SUB="0c36c235-86ef-5fd7-b3ac-9d6632db5585" LABEL="BasmentNAS:Array1" TYPE="linux_raid_member"
    /dev/sde: UUID="61598b79-6d66-6f4c-03d9-b5eefec09b0e" UUID_SUB="ae9d49f5-39c3-64da-e7a1-a83532df5656" LABEL="BasmentNAS:Array1" TYPE="linux_raid_member"
    /dev/sdf1: UUID="9487c516-0658-4cbd-9885-79a58a5580c3" TYPE="ext4" PARTUUID="f7168621-01"
    /dev/sdf5: UUID="ea64b221-ffe6-472b-bc8f-721b262756b2" TYPE="swap" PARTUUID="f7168621-05"
    /dev/sr0: UUID="2018-04-09-14-52-01-00" LABEL="Shielding" TYPE="udf"
    /dev/sdb: UUID="61598b79-6d66-6f4c-03d9-b5eefec09b0e" UUID_SUB="25080086-c149-d0ca-c423-0cd4ce470b8e" LABEL="BasmentNAS:Array1" TYPE="linux_raid_member"
    /dev/md127: LABEL="RAID5" UUID="1fbeac12-1222-4707-bce6-bc3bd826a845" TYPE="ext4"
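
    Once blkid again lists five linux_raid_member devices plus /dev/md127 with its ext4 filesystem, as above, the array state can be confirmed before remounting the filesystem in the OMV GUI:

    Code
    mdadm --detail /dev/md127   # all five members should show as active sync, with no rebuild pending
    cat /proc/mdstat
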
  • fotafm

    Added the "Solved" label.
