Search Results

Search results 1-5 of 5.

  • OK, sounds reasonable. You guys are the experts and I only pick at Linux when I break it... To clarify: remove the only remaining 2TB disk in the array, replace it with a 4TB drive, and rebuild the array. I can then grow the array and take advantage of five 4TB disks (20TB raw, with some loss for parity information storage). Please confirm...
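
    A rough sketch of that replace-and-grow sequence, assuming the array is /dev/md127 and the disk being swapped shows up as /dev/sdX (both placeholders; check the real names with mdadm --detail before running anything):

      # mark the last 2TB member as failed and remove it from the array
      sudo mdadm --manage /dev/md127 --fail /dev/sdX --remove /dev/sdX
      # add the new 4TB drive and let the rebuild finish (watch /proc/mdstat)
      sudo mdadm --manage /dev/md127 --add /dev/sdX
      cat /proc/mdstat
      # once every member is 4TB, expand the array onto the extra space
      sudo mdadm --grow /dev/md127 --size=max
      # then grow the filesystem to match (ext4 shown; use the tool for your filesystem)
      sudo resize2fs /dev/md127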

  • I replaced one failed and one failing 2TB drive with two new 4TB drives, one at a time, and grew the OMV array to no effect. cat /proc/mdstat produces:

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid5 sdb[6] sdc[1] sdd[3] sde[4] sda[5]
            7813537792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]

    After reboot, the array went missing in OMV. I used the OMV 0.4 installation media in recovery mode to try putting everything back t…
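
    The 7813537792 blocks shown above still correspond to roughly the old 4 × 2TB of usable space, so one way to check whether the grow actually took effect, assuming the array is /dev/md127 as in that output:

      # compare "Array Size" against the per-member "Used Dev Size"
      sudo mdadm --detail /dev/md127
      # if the members are 4TB but the array is still the old size, grow it explicitly
      sudo mdadm --grow /dev/md127 --size=max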

  • RAID 5 impossible to recreate

    fotafm - RAID

    Post

    I had my RAID5 array recently disappear after replacing a faulty drive. My issue remains growing the array: I have 18TB now but only a little over 7TB of usable space, i.e., the same size that the 5 original 2TB drives provided (I created a separate thread to get help). Are you certain you used the proper command to reassemble the array? I believe I had to open Linux (i.e., OMV) in recovery mode so the drives were not mounted, or unmount or stop them if they were busy, and then I assembled using the fo…
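
    A minimal sketch of that unmount/stop-then-reassemble sequence from recovery mode, assuming the array is /dev/md127 (device and mount names are placeholders; the snippet above is cut off before the exact command):

      # make sure nothing is using the array, then stop it
      sudo umount /dev/md127
      sudo mdadm --stop /dev/md127
      # let mdadm find the members by their superblocks and reassemble
      sudo mdadm --assemble --scan --verbose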

  • Turned out I had to rebuild the array from a bash command prompt using: sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --verbose --force. After restarting the array to see if all was well, I found the 512 vfat boot partition marked "read only" as it had a bad cluster. Unfortunately it was the first cluster (or sector) of the partition and nothing I could do in Linux recovery mode would repair it. I tried GParted from a separate disk as well. I replaced the failed S…
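
    For the vfat boot partition mentioned above, a sketch of what a filesystem check would look like, assuming the partition is /dev/sdf1 (a hypothetical name; the post notes that in this case nothing was able to repair it):

      # read-only check first, to see what fsck.fat reports
      sudo fsck.fat -n /dev/sdf1
      # attempt automatic repair, ideally only after imaging the partition
      sudo fsck.fat -a /dev/sdf1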

  • I am trying to repair my OMV array, and I assume I have to remove the malfunctioning device from the RAID 5 array in OMV before pulling the malfunctioning 2 TB HDD from the server. The problem is, I can't get OMV to load: it sees a dirty file system and is stuck on a repair before raising the network (i.e., "A start job is running for LSB: Raise network interfaces."). I see in the logs "kicking non-fresh sda from array!" and "raid level 5 active with 4 of 5 devices algorithm 2". I am also seeing references…
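
    A short sketch of removing a failing member before physically pulling the disk, assuming the array is /dev/md127 and the bad drive is the sda named in the log line above (verify both with mdadm --detail first):

      # confirm which member is actually the bad one before touching anything
      sudo mdadm --detail /dev/md127
      # mark it failed and remove it from the array; the disk can then be pulled
      sudo mdadm --manage /dev/md127 --fail /dev/sda --remove /dev/sda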