Posts by fotafm

    I replaced one failed and one failing 2TB drive with two new 4TB drives, one at a time, and grew the OMV array, but it had no effect.


    cat /proc/mdstat produces


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sdb[6] sdc[1] sdd[3] sde[4] sda[5]
    7813537792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
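
    For completeness, the array and per-member details can also be checked with mdadm itself (md127 and sdb are the names from the mdstat output above):

    sudo mdadm --detail /dev/md127    # overall array size, RAID level, and member list
    sudo mdadm --examine /dev/sdb     # per-disk superblock info; repeat for each member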


    After reboot, the array went missing in OMV. I used the OMV 0.4 installation media in recovery mode to try putting everything back together using bash commands:
    sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --verbose --force
    I then found two drives were busy and I had to stop them. A retry successfully rebuilt the array with 4 of 5 drives.
    I added the 5th drive in OMV and rebuilt the array, but the size remained the same (rough shell equivalents of these steps are sketched below).
    I replaced another drive in the same fashion, as it was about to fail (according to OMV). Again the array size remained the same.
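
    I believe the shell equivalents of those steps are roughly the following; I am assuming the busy drives were held by a partially assembled array, and /dev/sde is simply an example of how the re-added disk might be named:

    sudo mdadm --stop /dev/md127                      # release the busy drives held by the partial array
    sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --verbose --force
    sudo mdadm --manage /dev/md127 --add /dev/sde     # re-add the missing fifth member so md rebuilds onto it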


    I now have four 4TB drives and one 2TB drive in the array, which is marked "clean."


    OMV is running on a separate 60GB SSD.


    The array only recognizes a file system of 7.28TB. Is there a way to grow the array and file system to take advantage of the extra disk space? ?(
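
    For reference, the sequence I understand is normally used is something like the following (assuming the file system is ext4 and sits directly on /dev/md127; note that md sizes a RAID5 set by its smallest member, so the remaining 2TB disk may be what is holding the capacity back):

    sudo mdadm --grow /dev/md127 --size=max   # let md use the full capacity of every member
    cat /proc/mdstat                          # wait for any resync to finish
    sudo resize2fs /dev/md127                 # then grow the ext4 file system to fill the array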


    fotafm

    I had my RAID5 array recently disappear after replacing a faulty drive. My issue remains growing the array: I have 18TB of raw disk now but still only a little over 7TB of usable space, i.e., the same size the five original 2TB drives provided. (I created a separate thread to get help.)


    Are you certain you used the proper command to reassemble the array? I believe I had to boot Linux (i.e., OMV) in recovery mode so the drives were not mounted, or so I could unmount or stop them if busy, and then I assembled using the following command for a 5-disk array:


    sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --verbose --force


    Then my array magically reappeared. Commands I used to expand it to take advantage of the larger disks I installed proved unsuccessful, however. X(


    Just a thought...

    It turned out I had to reassemble the array from a bash command prompt using:


    sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --verbose --force


    After restarting the array to see if all was well, I found the 512 vfat boot partition marked "read only" as it had a bad cluster. Unfortunately it was the first cluster (or sector) of the partition, and nothing I could do in Linux recovery mode would repair it. I tried GParted from a separate disk as well.
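
    For anyone hitting the same thing, that sort of repair would normally be attempted with dosfstools along these lines (/dev/sdf1 is only a placeholder for the SSD's boot partition):

    sudo fsck.vfat -t -a -v /dev/sdf1   # -t tests for bad clusters, -a attempts automatic repair, -v is verbose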


    I replaced the failed SSD and reinstalled OMV 0.4. All is well except for the size of the array with the new larger disks. I have started a separate thread in hopes that someone can help me grow my array/file system :S

    I am trying to repair an OMV array, and I assume I have to remove the malfunctioning device from the RAID 5 array in OMV before pulling the malfunctioning 2TB HDD from the server. The problem is, I can't get OMV to load: it sees a dirty file system and is stuck on repair before raising the network (i.e., "A start job is running for LSB: Raise network interfaces.").


    In the logs I see "kicking non-fresh sda from array!" and "raid level 5 active with 4 of 5 devices algorithm 2". I am also seeing references to 0xe frozen (I assume this is the malfunctioning drive), and fsck failed with error code 4.
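
    From what I understand, removing the failing member from the shell before pulling it would look something like this (using md127 as the array name from my other posts, and sda as the disk the log complains about; treat both names as assumptions for your own setup):

    sudo mdadm --manage /dev/md127 --fail /dev/sda     # mark the failing member as faulty
    sudo mdadm --manage /dev/md127 --remove /dev/sda   # then remove it so the physical disk can be pulled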


    I am attaching some screen captures.


    Questions:


    1. Is a reinstall of OMV required? ?(


    2. Can I just pull the malfunctioning drive, replace it, and assume the errors will resolve so I can add the new drive to the array and rebuild? ?(


    Hoping someone can help. I backed up the array before I lost access to the server; I just wish to avoid rebuilding it if I can.



    Fotafm