RAID Array File System DRDY (4 of 5 drives). Network won't start.

  • I am trying to repair an OMV array. I assume I have to remove the malfunctioning device from the RAID 5 array in OMV before pulling the failing 2 TB HDD from the server. The problem is that I can't get OMV to boot: it sees a dirty file system and gets stuck on repair before raising the network (i.e., "A start job is running for LSB: Raise network interfaces."). In the logs I see "kicking non-fresh sda from array!" and "raid level 5 active with 4 of 5 devices algorithm 2". I am also seeing references to "0xe frozen" (I assume this is the malfunctioning drive), and fsck failed with error code 4.
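
    For reference, fsck exit status 4 means file-system errors were left uncorrected, and "kicking non-fresh sda" means md dropped that member during assembly because its event count lags behind the rest of the array. A quick way to confirm the array and member state from a recovery shell is roughly the following (a sketch only; the array device /dev/md127 is taken from the later post and the member names from the logs, so they may differ on another setup):

    # show which arrays the kernel sees and how many members are active
    cat /proc/mdstat
    # detailed state of the degraded array
    sudo mdadm --detail /dev/md127
    # compare event counters across the members; the "non-fresh" disk will lag behind
    sudo mdadm --examine /dev/sd[abcde] | grep -E '/dev/sd|Events'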


    I am attaching some screen captures.


    Questions:


    1. Is a reinstall of OMV required?


    2. Can I just pull the malfunctioning drive, replace it, and assume the errors will resolve, so that I can add the new drive to the array and rebuild?


    Hoping someone can help. I backed up the array before I lost access to the server; I just want to avoid rebuilding it if I can.



    Fotafm

  • It turned out I had to reassemble the array from a bash command prompt using:


    sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/se --verbose --force
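
    For what it's worth, if you do end up swapping out the bad disk (Question 2 in the first post), the usual sequence is roughly the following (assuming the failing member is /dev/sda and the replacement comes up as /dev/sdf; adjust to your own device names):

    # mark the bad member as failed and pull it out of the array
    sudo mdadm --manage /dev/md127 --fail /dev/sda
    sudo mdadm --manage /dev/md127 --remove /dev/sda
    # after physically swapping the disk, add the new one and let RAID 5 rebuild
    sudo mdadm --manage /dev/md127 --add /dev/sdf
    # watch the resync/rebuild progress
    watch cat /proc/mdstat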


    After restarting the array to see if all was well, I found the 512 MB vfat boot partition marked read-only because it had a bad cluster. Unfortunately it was the first cluster (or sector) of the partition, and nothing I could do in Linux recovery mode would repair it. I also tried GParted from a separate disk.
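
    For reference, the repair attempts boil down to something like the commands below (the partition name is just an example); with the bad cluster right at the start of the partition, they did not help in this case:

    # non-destructive check of the vfat boot partition
    sudo fsck.vfat -n -v /dev/sdf1
    # attempt an automatic repair, marking unreadable clusters as bad
    sudo fsck.vfat -a -t /dev/sdf1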


    I replaced the failed SSD and reinstalled OMV 0.4. All is well except for the size of the array now that it has new, larger disks. I have started a separate thread in the hope that someone can help me grow my array/file system :S
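
    In case it is useful to anyone finding this thread, the usual sequence for growing an md RAID 5 after all members have been swapped for larger disks is roughly the following (a sketch only, assuming the array is /dev/md127 with an ext4 file system directly on it and no LVM in between):

    # once every member has been replaced with a larger disk, let md use the extra space
    sudo mdadm --grow /dev/md127 --size=max
    # wait for the resync to finish before touching the file system
    cat /proc/mdstat
    # then grow the file system to fill the device (assumes ext4 directly on /dev/md127)
    sudo resize2fs /dev/md127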
