Apparently there is no superblock on the 5th drive (sde). What does this mean?
Not sure if this would be a good guide, but it's possibly similar to my situation.
Unfortunately, to my knowledge, RAID 5 only allows the loss of one drive; RAID 6 allows the loss of two.
I am aware of this scenario, but considering that one of the two failed drives was completely cloned, I'm thinking there may still be a chance to recover files. The drive that was only 99.99 percent copied may pose issues, however.
Sorry about the multiple posts. I got an error upon posting and thought the thread hadn't been created. Not sure how to remove posts.
Hello, I am new to this forum and I have a problem that I can't seem to find any information on, or at least none that is relevant to my situation.
I set up an HP MicroServer N40L fitted with five 3TB hard drives (one mounted in the optical bay) in RAID 5.
I had this working for years until one day I discovered that the server wouldn't start up. CrystalDiskInfo indicated that two of the five drives were yellow and dying from bad sectors.
I removed the dying drives, did a ddrescue of the data onto new donor drives, and slotted them into the server. Drive 1 had 99.99% of the data copied (some bad sectors were unrecoverable); drive 4 had 100% copied!
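For anyone following along, a cloning pass like the one described above would typically look something like this with GNU ddrescue. The device names here are purely illustrative examples, not the actual layout in this server:

```shell
# Clone a failing drive onto a new donor drive, recording progress in a
# mapfile so an interrupted run can resume where it left off.
# /dev/sdX = failing source drive, /dev/sdY = new donor drive (examples only!)
ddrescue -f -n /dev/sdX /dev/sdY rescue.map   # first pass: copy the easy areas, skip bad spots
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map  # second pass: retry the bad sectors up to 3 times
```

The mapfile is what lets ddrescue report figures like "99.99% copied" — whatever it couldn't read after the retries stays marked as bad in the map.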
Upon booting, I got the "failed code 8" error message.
After crawling through the forum, not knowing what I'm looking for or what I'm looking at (I'm a Linux command line n00b), I saw some people were having luck with the command:
"mdadm --assemble /dev/md127 /dev/sd[abcde] --verbose --force"
...so I gave it a try. Picture attached showing the results.
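As a side note for anyone in a similar spot: before forcing an assemble, it can help to inspect each member's metadata to see which drives still carry a valid md superblock (which is likely why sde is being rejected). A sketch, again with illustrative device names:

```shell
# Print the md superblock of each candidate member drive.
# A wiped or blank member reports: "mdadm: No md superblock detected on /dev/sdX"
for d in /dev/sd[abcde]; do
    mdadm --examine "$d"
done

# Quick comparison of event counts and roles across the members that do
# have a superblock (mismatched event counts indicate stale members):
mdadm --examine /dev/sd[abcde] | grep -E 'Events|State|Device Role'
```

Members whose event counts agree can usually be assembled; a drive with no superblock at all won't be picked up by --assemble and would have to be re-added to the array afterwards.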
As far as I can understand, 4 of the 5 drives are recognized and the filesystem is missing.
OpenMediaVault indicates an ext4 filesystem, but it's 'missing'.
Honestly, I'm at a loss as to what else I need to do at this stage.
Any ideas on how to rebuild this RAID 5 array to accept the two 'replacement' drives?
Thanks,
-Adrian