Another missing RAID post

  • Hey

    So it seems I might have had a power outage overnight, as I had to start the machine this morning.
    No file system is loading; I didn't realise until trying to troubleshoot a Docker issue.


    I have 5 x 3TB WD Reds; SMART reports them all as good.


    I see the INACTIVE-ARRAY,
    but it also seems to be missing a disk.

    I suspect I can run the following to recover:


    mdadm --stop /dev/md0


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[acdef]


    Trying to figure out whether it's a drive issue or not.
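Before forcing the assembly, it can help to confirm what the kernel and md currently see. A minimal check, assuming the same /dev/md0 and /dev/sd[acdef] device names used above:

```shell
# Current array state as the kernel sees it (shows the inactive array and member count)
cat /proc/mdstat

# Per-array detail: RAID level, state, which members are present or failed
mdadm --detail /dev/md0

# Per-disk superblock info: array UUID, device role, event count for each member
mdadm --examine /dev/sd[acdef]
```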

  • ryecoaaron

    Approved the thread.
  • In theory those are the right commands. But what puzzles me is how drive /dev/sdc is listed by fdisk but not found under blkid. SMART data isn't the whole story; have you checked your logs for the current and last boot?
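One way to do that log check is with systemd's journal, assuming the box keeps a persistent journal (otherwise the previous boot's entries won't be available):

```shell
# Kernel messages from the current boot, filtered for disk/ATA errors
journalctl -k -b 0 | grep -iE "ata|sdc|error"

# Same for the previous boot (the one that ended with the outage)
journalctl -k -b -1 | grep -iE "ata|sdc|error"

# dmesg alternative, current boot only
dmesg | grep -iE "sdc|I/O error"
```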

    Thanks for the tip.


    Seems read errors are afoot.


    Guess it's either a new drive, or running fsck to check for disk errors.

    Could I remove said disk, wipe it, and stick it back in to get it rebuilt?
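If the drive itself turns out to be healthy, the wipe-and-re-add described above might be sketched like this (a sketch only, assuming the array is already assembled and running degraded as /dev/md0, and that the disk still appears as /dev/sdc; device letters can change between boots, so double-check first):

```shell
# Clear the stale md superblock so the disk no longer claims array membership
mdadm --zero-superblock /dev/sdc

# Add the blanked disk back; md treats it as a fresh spare and starts a rebuild
mdadm --manage /dev/md0 --add /dev/sdc

# Watch the rebuild progress
cat /proc/mdstat
```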

  • AFAIK, you can assemble your (RAID 5?) array now whether or not you first pull the drive currently lettered as /dev/sdc. If it's left in the system, I'd expect the array to assemble with 4 out of 5 drives. Even if the drive /dev/sdc were not otherwise faulty, it would likely be "out of date", with a different event count and update time. Something like mdadm -E /dev/sd[a-z] | egrep "Events|sd" and mdadm -E /dev/sdc would show whether the drive is still recognised, and its state. If it's faulty and not recognised, it will just be missing.


    But being safety-minded, and to prevent any possibility of filesystem damage, I'd be inclined to pull /dev/sdc first, then assemble the array. A secure wipe of the drive may or may not put it in a usable state. If it is usable, you can then add it back to the now active and re-synced array via the WebUI using the recover option. I don't see a way to avoid two re-syncs.
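The event-count check named above, written out as runnable commands (same device names as in the thread):

```shell
# Compare event counts across all members; an out-of-date disk lags behind the rest
mdadm -E /dev/sd[a-z] | egrep "Events|sd"

# Full superblock dump for the suspect disk: device role, state, update time
mdadm -E /dev/sdc
```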

  • Thanks Krisbee

    Yeah, I did just this:
    pulled the drive.

    Assembled the array with the 4 disks.
    Tried many a GParted / disk-wipe tool.
    The disk reported a bad or missing superblock.

    Put the drive back and used secure erase on it via the GUI tool as well.
    This morning, used the recover function to add the sdc disk back in, and it's at ~88% rebuilt.

    SMART is still reporting some errors.

    Code
    Error 182 [13] occurred at disk power-on lifetime: 56254 hours (2343 days + 22 hours)
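To pull the full error log that entry came from, assuming smartmontools is installed and the drive is still /dev/sdc:

```shell
# Summary health verdict
smartctl -H /dev/sdc

# Full error log, including numbered entries like "Error 182"
smartctl -l error /dev/sdc

# Reallocated / pending sector counts worth watching during the rebuild
smartctl -A /dev/sdc | grep -iE "Realloc|Pending|Uncorrect"
```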


    So I'm gonna be acquiring some new disks in the next few weeks and replacing them all one by one.
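For that one-by-one swap, a common mdadm pattern looks like the following (a sketch, assuming /dev/md0, same-size or larger replacements, and that each new disk comes up under the old device letter; always let each resync finish before touching the next drive):

```shell
# Mark the old disk failed, then remove it from the array
mdadm --manage /dev/md0 --fail /dev/sdc
mdadm --manage /dev/md0 --remove /dev/sdc

# Physically swap the drive, then add the replacement
mdadm --manage /dev/md0 --add /dev/sdc

# Confirm the resync has completed before moving on to the next disk
watch cat /proc/mdstat
```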

    Appreciate the knowledge transfer.

  • macom

    Added the "solved" label.
