RAID/Filesystem missing after drive removal for swapping bad drive

  • Basically I have a small OMV NAS with 2 Seagate IronWolf 4TB drives in RAID 1 on OMV. One of these was failing according to SMART, so I decided to remove it and send it back as it's still new. Once I booted back up my RAID was gone. I tried to put the drive back in and it's still the same. If I try to make the RAID again it does not show the 2 drives, not even the working one, but they do show up in the Disks section. Trying to make a new file system would erase the good drive, and that seems like the only way to make it come back online. Any advice please?

    • Official Post

    Basically I have a small OMV NAS with 2 Seagate IronWolf 4TB drives in RAID 1 on OMV

    What hardware?


    so I decided to remove it and send it back as it's still new

    I take it you physically removed it?


    Once I booted back up my RAID was gone

    Normal mdadm behaviour; the RAID comes back up as inactive.
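    (A minimal sketch of what that normally looks like; the device names /dev/md0 and /dev/sdb below are placeholders, not taken from your system:)

    cat /proc/mdstat
    # md0 : inactive sdb[0](S)

    mdadm --stop /dev/md0
    mdadm --assemble --run /dev/md0 /dev/sdb   # --run starts the array even though it is degraded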


    I tried to put the drive back in and it's still the same

    That doesn't make sense unless the system is not seeing the replaced drive


    If I try to make the RAID again it does not show the 2 drives, not even the working one

    That's because the drives are 'in use' from the previous raid setup
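    (If you want to see that for yourself, a read-only check such as this will list any leftover RAID metadata; /dev/sdX is a placeholder for each data drive:)

    mdadm --examine /dev/sdX    # prints the old mdadm superblock, if one is still present
    wipefs -n /dev/sdX          # --no-act: only lists the signatures it finds, changes nothing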


    Any advice please?

    I'm assuming this is a new setup, so starting over is not an issue, but to do that the drives have to be wiped, including the new ones.
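    (Something along these lines would do the wipe; it is destructive, so only run it on drives whose data you are giving up, and /dev/sdX is again a placeholder:)

    mdadm --zero-superblock /dev/sdX   # remove the old mdadm metadata
    wipefs -a /dev/sdX                 # remove any remaining filesystem/partition-table signatures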

  • It's a 1-year-old setup, I had it all up and running, and I removed the bad drive by taking it out. Normal behaviour would be that the RAID still shows but as degraded, not that it disappears. The NAS is a custom-built PC with 4 drives on SATA ports: 2 SSDs and 2 HDDs (one of which is the SMART-failed one).


    Is there any way I can remount the 1 good drive and make it work and show as a degraded RAID or as a single normal drive... or possibly plug it into another PC and recover the data?

    • Official Post

    Normal behaviour would be that the RAID still shows but as degraded, not that it disappears

    No, that's not normal behaviour for mdadm; for a hardware RAID, yes. mdadm is software RAID, so like any piece of software it requires input/information from the user.


    Is there any way I can remount the 1 good drive and make it work and show as a degraded RAID or as a single normal drive

    It should be possible. Post the output of items 1 to 5 from here.
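    (For reference, going by the outputs quoted further down in this thread, items 1 to 5 appear to be the following commands; item 3 is an assumption, a filtered fdisk listing:)

    1. cat /proc/mdstat
    2. blkid
    3. fdisk -l | grep "Disk "
    4. cat /etc/mdadm/mdadm.conf
    5. mdadm --detail --scan --verbose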

  • OK, at the moment I have only the 1 good drive in the system, apart from the SSDs for cache and other stuff.


    1


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    unused devices: <none>


    2 - Please note I tried to plug it into my Windows machine to see if I could access the files through an OMV VM or Linux VM - that's /dev/sda1


    /dev/sdb1: UUID="ec8d339d-b214-4adb-ae74-bcc1e4f38f73" TYPE="ext4" PARTUUID="369fb5b8-01"
    /dev/sdb5: UUID="a09029cf-c270-4582-a28e-2e611da0fa1f" TYPE="swap" PARTUUID="369fb5b8-05"
    /dev/sdc1: LABEL="SSDStorage" UUID="2dcf4355-8d2c-4cc8-aa8e-d5062f8cc118" TYPE="ext4" PARTUUID="a6eb7aee-6a18-43b0-ad82-0a437da10e44"
    /dev/sda1: PARTLABEL="Microsoft reserved partition" PARTUUID="ee21cbc9-ccd6-4ea7-bec6-b056bbd43073"


    3


    Partition 1 does not start on physical sector boundary.
    Disk /dev/sdb: 59.6 GiB, 64023257088 bytes, 125045424 sectors
    Disk identifier: 0x369fb5b8
    Disk /dev/sdc: 119.2 GiB, 128035676160 bytes, 250069680 sectors
    Disk identifier: F38DD706-EA60-4B96-ADDA-737642914B47
    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk identifier: 657CD890-819D-4604-906F-250F366D07E1


    4



    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    INACTIVE-ARRAY /dev/md0 metadata=1.2 name=AlphaNAS:Storage UUID=9e5f4055:1fabb273:b708769f:fbe03130



    5


    mdadm: Unknown keyword INACTIVE-ARRAY

    • Official Post

    OK:


    1. cat /proc/mdstat
    This shows no arrays on your system that mdadm can recognise.


    2. blkid
    Shows no drives referenced as being used as a Linux RAID member, and it also does not display any /dev/md0 information.
    /dev/sda1: PARTLABEL="Microsoft reserved partition" PARTUUID="ee21cbc9-ccd6-4ea7-bec6-b056bbd43073"
    The above clearly shows that if /dev/sda, 3.7TB (from option 3), is supposed to be part of the array, it was not wiped, as that is a leftover partition from Windows use.
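    (For comparison, a drive that still carried mdadm metadata would normally appear in blkid with TYPE="linux_raid_member"; the line below is hypothetical, with a placeholder UUID:)

    /dev/sdX: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" LABEL="AlphaNAS:Storage" TYPE="linux_raid_member"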


    4. cat /etc/mdadm/mdadm.conf
    This is mdadm's config file; it's aware there should be an array.


    5. mdadm --detail --scan --verbose
    This output should display more information than option 1; it does not, it only pulls the information from the config file.
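    (A note on the "Unknown keyword INACTIVE-ARRAY" message in item 5: INACTIVE-ARRAY is what mdadm --detail --scan prints for an array that is not running, but it is not a valid mdadm.conf keyword, so mdadm complains when it reads the config back. Presumably that line was captured into the config while the array was inactive. For a running array the entry would normally look like this instead:)

    ARRAY /dev/md0 metadata=1.2 name=AlphaNAS:Storage UUID=9e5f4055:1fabb273:b708769f:fbe03130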


    At this moment in time you could try booting with a SystemRescueCD with /dev/sda connected and see if there is any data on that drive, but from the above there is no option to recover using OMV itself, let alone mounting the RAID in a degraded state.
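    (If you try the rescue-CD route, a minimal sketch of what to check, assuming the 4TB disk shows up there as /dev/sda; mount read-only so nothing gets written:)

    mdadm --examine /dev/sda                    # any mdadm superblock left on the whole disk?
    mdadm --examine /dev/sda1                   # ...or on a partition?
    mdadm --assemble --run /dev/md0 /dev/sda    # if a superblock was found, try to start the array degraded
    mount -o ro /dev/md0 /mnt                   # mount read-only and inspect the data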
