Doesn't look like it, RAID Management state says clean.
That's a start. Try: mdadm --readwrite /dev/md0
Also, checking under the File Systems tab, the filesystem that was associated with the array just says n/a and missing now.
Is it rebuilding in RAID Management? The array is active (auto-read-only).
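For reference, the auto-read-only state can be spotted in /proc/mdstat. A minimal sketch below uses a sample mdstat line (the device names and layout are assumptions, not captured from this server):

```shell
# Sample /proc/mdstat line for an array stuck in auto-read-only (contents assumed)
mdstat='md0 : active (auto-read-only) raid5 sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1]'

# If this state is present, 'mdadm --readwrite /dev/md0' flips the array
# back to normal read-write mode
if printf '%s\n' "$mdstat" | grep -q 'auto-read-only'; then
    echo "md0 is auto-read-only"
fi
```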
I can see the array now.
OK, but I would run some of the commands we have used to confirm. Is your data visible?
cat /proc/mdstat
cat /etc/mdadm/mdadm.conf
mdadm --detail /dev/md0
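If it's easier, those three checks can be captured into one file to paste back here. A minimal sketch, run as root; the output path /tmp/raid-diag.txt is just my choice:

```shell
# Collect the usual RAID diagnostics into one file (run as root);
# errors are captured too, so missing files still leave a readable report
{
    echo '== /proc/mdstat =='
    cat /proc/mdstat
    echo '== /etc/mdadm/mdadm.conf =='
    cat /etc/mdadm/mdadm.conf
    echo '== mdadm --detail /dev/md0 =='
    mdadm --detail /dev/md0
} > /tmp/raid-diag.txt 2>&1
```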
At this point there is nothing to lose, so answer yes (or y) to continue.
OK, make sure the array is stopped: mdadm --stop /dev/md0
mdadm --create --assume-clean --level=raid5 --raid-devices=6 /dev/md0 /dev/sd[bcdefg]
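One caution on the --create route: with --assume-clean the device list must be in the same slot order as the original array, or the data will be scrambled. Each member's old slot can be read from mdadm --examine before recreating. A sketch parsing a sample "Device Role" line (the line below is assumed output, not captured from a real drive):

```shell
# 'mdadm --examine /dev/sdX' prints a "Device Role" line per member;
# this sample line is an assumption for illustration
examine_line='   Device Role : Active device 3'

# Extract the slot number so the --create device list can be ordered to match
slot=$(printf '%s\n' "$examine_line" | sed -n 's/.*Active device \([0-9][0-9]*\).*/\1/p')
echo "original slot: $slot"
```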
If we can't get it to work I'm alright with that. I've kinda accepted the fact that my data is gone at this point.
There is one option, but it would be sh!t or bust.
At this present moment I don't know; either sdf or sdg, or both, are faulty and will not allow the array to start, and it doesn't like the fact that there is no device in slot 0.
From the above, something is missing on one of those drives, or it's related to the motherboard connections; the mdadm.conf is wrong, but that can be corrected.
Interesting, OK. Assuming that sdf in slot -1 is the culprit, let's try this:
mdadm --stop /dev/md0
mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcdeg]
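Worth noting that /dev/sd[bcdeg] is a shell glob, not mdadm syntax: the shell expands it to whichever of those device nodes exist before mdadm ever runs. A quick illustration with plain strings, so no real devices are assumed or touched:

```shell
# Print what the bracket pattern would expand to (text only; this sketch
# doesn't assume the /dev/sdX nodes actually exist)
for letter in b c d e g; do
    printf '/dev/sd%s ' "$letter"
done
echo
```

If a device node is missing (e.g. a drive dropped off the bus), the glob silently omits it, so it's worth eyeballing the expansion with echo first.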
That sees the six drives, but I'm now guessing that the array is active. Check with: cat /proc/mdstat
Back to where we were before. Without shutting down, can you plug that other drive back in and run blkid?
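In the blkid output, array members are the entries carrying TYPE="linux_raid_member". A small sketch against a sample line (the UUID and device name are assumptions):

```shell
# Sample blkid line (values assumed); real mdadm members carry this TYPE
line='/dev/sdf: UUID="abc123" TYPE="linux_raid_member"'

if printf '%s\n' "$line" | grep -q 'linux_raid_member'; then
    echo "sdf is a RAID member"
fi
```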
That should now stop, as it's showing inactive:
mdadm --stop /dev/md0
mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcdef]
I pulled the SATA cable for sde and turned the server back on. Now the array isn't showing.
OK, run some of the commands:
cat /proc/mdstat
blkid
cat /etc/mdadm/mdadm.conf
sde is showing SMART errors.
As this is V2, if it shows any bad sectors, i.e. in the GUI there's a red dot against that drive, then that drive needs replacing.
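From the CLI, smartmontools gives the same information: smartctl -H /dev/sde for overall health, smartctl -A /dev/sde for attributes like Reallocated_Sector_Ct. A sketch of checking the health verdict, using an assumed sample output line rather than a live drive:

```shell
# Sample 'smartctl -H' result line (assumed output, not from a real drive)
health='SMART overall-health self-assessment test result: FAILED!'

# Act on the verdict; a failing drive should be replaced before any rebuild
case "$health" in
    *PASSED*) echo "drive reports healthy" ;;
    *FAILED*) echo "drive failing: replace it" ;;
esac
```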
So with that information, follow what I'm suggesting and hopefully the array will come back up.
I was able to transfer some files, but they are a fraction of their original size.
That means it's not reading the files on the array. I thought that would be the case, but it was worth a try. So, one option: in your post 28, sde was displayed as a faulty spare, and I think that's the drive that failed the array. Is that drive showing any SMART errors? My suggestion is this:
Locate where sde is in your server so that you can disconnect the sata cable.
Shut the server down and disconnect that drive, either at the SATA port on the board or by removing the SATA cable from the drive.
Restart the server; I'm hoping the array will come back up clean/degraded.
Yes and no. I can see the files but can’t read any of them.
I think the option now is to leave it alone and get your backup restored and ready. Then there are a number of options to get the data off the RAID to your backup drive, that's if the files can be moved. I have not come across your situation before, so it's a proceed-with-caution one.
There is one option you could try: if you have a W10 machine, install WinSCP and log in to the server as root using WinSCP, then see if you can browse the files on the array. If you can, try to move one file to your W10 machine; if that fails, I have another idea.
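If a file does copy across, it's worth confirming it actually matches the source byte for byte rather than trusting the size alone. Sketched here on throwaway temp files, since the real array paths aren't known:

```shell
# Demonstrate a copy-verification check (the /tmp paths are placeholders
# for a source file on the array and its copy on the backup side)
printf 'sample data' > /tmp/original.bin
cp /tmp/original.bin /tmp/copied.bin

# cmp -s is silent and returns 0 only if the files are byte-identical;
# md5sum on both files is an equivalent check
if cmp -s /tmp/original.bin /tmp/copied.bin; then
    echo "copy verified"
else
    echo "copy corrupt or truncated"
fi
```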
So you can access the shares and files?
I take it that although it's active, it's not visible?
If the array doesn't stop, running the second command will fail. Check with: cat /proc/mdstat
See here, that image you tried is not for ARM boards.
So sde is now faulty; back to the drawing board:
mdadm --stop /dev/md0
mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcdfg]