Disk about to go bad. File system MISSING after rebooting server

  • I had a memory stick go bad on the OMV server and had to shut it down to take it out and replace it. Upon restart, OMV lost the filesystem.

    cat /proc/mdstat gives me this setup

    Code
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdf[0](S) sde[5](S) sdc[1](S) sdb[3](S) sdh[8](S) sda[4](S) sdd[2](S)
    13674594920 blocks super 1.2


    Should I reassemble it with

    mdadm --assemble --force --verbose /dev/md127 /dev/sd[fecbhad] ? Or does the order of the drives not matter? (A quick superblock check is sketched below.)


    Thanks
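
    For what it's worth, my understanding is that mdadm matches members by the UUID stored in their superblocks rather than by command-line order, but I'd like to confirm. This is the check I had in mind (just a sketch with my device letters):

    Code
    # each member of the same array should show the same Array UUID,
    # and "Device Role" shows which slot it occupies
    mdadm --examine /dev/sd[abcdefh] | grep -E "^/dev|Array UUID|Device Role"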

  • I did, the array is stopped.


    This is from the logs. Why would OMV lose its filesystem and not mount it?

    Code
    monit[2196]: 'mountpoint_srv_dev-disk-by-uuid-cf0b0c12-0d28-46e5-8271-aaab5e22ebb4' status failed (1) -- /srv/dev-disk-by-uuid-cf0b0c12-0d28-46e5-8271-aaab5e22ebb4 is not a mountpoint
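
    I guess I could check whether the filesystem device is even visible yet (if the array isn't running, the filesystem on it won't be either), something like:

    Code
    # does anything report the filesystem with that UUID?
    blkid | grep cf0b0c12
    # and is the expected mountpoint known to the system?
    findmnt /srv/dev-disk-by-uuid-cf0b0c12-0d28-46e5-8271-aaab5e22ebb4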
  • blkid -o full gives the disks. I believe sdg1 and sdg2 are the partitions on the disk that went bad. How can I find which physical disk it is?
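
    Maybe something like this would map the device name to a serial number I can match against the sticker on the physical drive (assuming smartmontools is installed; sdg is just my guess at the bad disk):

    Code
    # serial number and model of the suspect disk
    smartctl -i /dev/sdg
    # or map kernel names to the by-id links, which embed the serial number
    ls -l /dev/disk/by-id/ | grep sdg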

    • Official post

    I'm sorry, I'm lost. In #1 you asked if running mdadm --assemble would start the array; I said no, not unless you stop the array first, and that command is in #2.


    Then in #3 there is a reference in a log file to a failed mount point.


    The blkid output for /dev/sdg1 and /dev/sdg2 makes no sense in relation to the array, as the output clearly states TYPE="ntfs".


    So, post the output of:


    cat /proc/mdstat


    mdadm --detail /dev/md127
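
    If it helps, genuine array members should report TYPE="linux_raid_member" in blkid; anything showing TYPE="ntfs" is not part of the array. A rough way to filter that (just a sketch):

    Code
    # list only the devices that carry an md member signature
    blkid | grep linux_raid_member
    # and what mdadm itself can detect
    mdadm --examine --scan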

  • OK, I found the dead disk and pulled it out.

    cat /proc/mdstat gives


    Code
    root@nas:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sde[0](S) sdf[5](S) sdd[1](S) sdg[8](S) sdc[2](S) sdb[4](S) sda[3](S)
    13674594920 blocks super 1.2

    mdadm --detail /dev/md127


    I don't know why it shows raid0; it should be raid6.
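
    From what I've read, an inactive array can show the wrong level in --detail because it hasn't been assembled, while the level recorded in the member superblocks should still be correct. Something like this (sdb is just an example member) could confirm it:

    Code
    # the superblock on any member should still report the real level
    mdadm --examine /dev/sdb | grep -E "Raid Level|Raid Devices|Array UUID"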

    • Official post

    So the array is still inactive; until it's active and in sync it's not going to mount.


    mdadm --stop /dev/md127


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[abcdefg]


    This should assemble and sync the array; then reboot and the array should mount, but it must assemble and sync first.
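
    While it syncs, the progress can be watched, for example:

    Code
    # mdstat shows a progress bar and ETA while the array resyncs
    watch -n 5 cat /proc/mdstat
    # or query the array state directly
    mdadm --detail /dev/md127 | grep -E "State|Rebuild Status"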

  • I ran mdadm --assemble but got this:



    Code
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is busy - skipping
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is busy - skipping
    mdadm: /dev/sde is busy - skipping
    mdadm: /dev/sdf is busy - skipping
    mdadm: /dev/sdh is busy - skipping
  • Same output - skipping. Is it skipping because they are already in md127?


    Code
    root@nas:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[abcdefh]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is busy - skipping
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is busy - skipping
    mdadm: /dev/sde is busy - skipping
    mdadm: /dev/sdf is busy - skipping
    mdadm: /dev/sdh is busy - skipping
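
    If I understand it right, "busy" probably just means the inactive md127 is still holding the disks, so they can't be assembled again until it's stopped. Something like this should show whether that's the case before retrying the stop/assemble above:

    Code
    # check whether md127 still claims the disks
    cat /proc/mdstat
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT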
  • This is the output of mdadm --detail. Is this because I removed the bad drive?


    • Official post

    Is this because I removed the bad drive

    Yes, probably. A drive has to be failed, then removed from an array,


    e.g. mdadm --fail /dev/md127 /dev/sdX then mdadm --remove /dev/md127 /dev/sdX, where X is the drive reference letter.
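
    With the drive letters in this thread that would look something like this (assuming sdg really is the failed member; adjust the letters to whatever --detail reports):

    Code
    # mark the dead member as failed, then remove it from the array
    mdadm /dev/md127 --fail /dev/sdg
    mdadm /dev/md127 --remove /dev/sdg
    # once a replacement disk is installed it can be added back (its letter may differ)
    mdadm /dev/md127 --add /dev/sdX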

  • sdg is the drive that is bad, as shown. I've attached it back and it's recognized as Unknown, but the array won't start.


  • OK, so I got it to start by removing both "sdg" and "sdf". This is the output of --detail.


    Does this mean I have 2 bad drives?
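
    Before assuming both drives are dead I could check their SMART health individually, something like (assuming smartmontools is available):

    Code
    # overall health verdict and the usual failure indicators for each suspect drive
    smartctl -H /dev/sdf
    smartctl -A /dev/sdf | grep -E "Reallocated_Sector|Current_Pending|Offline_Uncorrectable"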
