What hardware is this on? How did you create the initial array?
The first one, cat /proc/mdstat, gives the raid reference; whether the raid is active, active (auto-read-only) or inactive; the raid type, i.e. raid1, raid5, raid6 etc.; and the drives active within the raid.
So from your output;
raid reference = md0
state of raid = active
raid type = raid1
drives = /dev/sda
The above told me a drive was missing
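For anyone following along, here is a sketch of what a degraded two-disk mirror looks like in /proc/mdstat. The output below is made up to match this thread's setup; on a live system you would simply run cat /proc/mdstat:

```shell
# Hypothetical /proc/mdstat for a two-disk raid1 with one member missing.
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [raid1]
md0 : active raid1 sda[0]
      976630336 blocks super 1.2 [2/1] [U_]

unused devices: <none>
EOF
# [2/1] means 2 devices expected but only 1 present; in [U_] each position is
# one member slot, U = up, _ = missing, so the second drive has dropped out.
grep -E '\[[0-9]+/[0-9]+\] \[U_*\]' /tmp/mdstat.sample
```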
blkid, from the man pages -> command line utility to locate/print block device attributes.
This is important as it will give information on TYPE, which tells you the file system type.
So from your output;
/dev/sdd was the missing drive from your array
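A sketch of what that blkid output looks like; the devices and UUIDs below are made up, on a real system just run blkid as root. The point is that raid members show TYPE="linux_raid_member", while the assembled md device carries the actual file system type:

```shell
# Hypothetical blkid output (UUIDs are placeholders).
cat <<'EOF' > /tmp/blkid.sample
/dev/sda: UUID="11111111-2222-3333-4444-555555555555" LABEL="openmediavault:0" TYPE="linux_raid_member"
/dev/md0: LABEL="data" UUID="66666666-7777-8888-9999-000000000000" TYPE="ext4"
EOF
# Pull out just the device and its TYPE:
sed -n 's/^\([^:]*\):.*TYPE="\([^"]*\)".*/\1 \2/p' /tmp/blkid.sample
```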
This gives the configuration on the array stored in the mdadm conf file
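For reference, the relevant part of /etc/mdadm/mdadm.conf is the ARRAY line. A hypothetical entry for a two-disk mirror looks like this (the UUID below is made up; mdadm --detail --scan prints the real one for your array):

```
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=openmediavault:0 UUID=11111111:22222222:33333333:44444444
```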
Lists information about the drives
This will confirm the output from mdstat; most of the time I don't use this, as mdadm --detail gives more information.
Rather than using the command line, it might have worked via the GUI by selecting Recover on the menu under Raid Management. This sometimes works, but most of the time it doesn't.
If the output from mdstat had shown the array as inactive, the array would not be listed in blkid, and it would have meant running --assemble to reassemble the array.
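A sketch of that recovery path, using the device names from this thread. The run=echo line makes this a dry run that only prints the commands; set run= (empty) and execute as root to do it for real:

```shell
# Sketch of reassembling an inactive array; device names are examples.
run=echo
$run mdadm --stop /dev/md0
$run mdadm --assemble /dev/md0 /dev/sda /dev/sdd
# If mdadm refuses because the members' event counters differ, adding --force
# can override that, at the risk of losing writes the stale member missed.
```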
As far as backing up, if I were to add another drive, not part of the raid obviously, I should be able to use rsync to back it up, correct?
Yes, it's what I do; I back up all my shares to a single drive in my server.
Is there anything I need to do before or after running that command?
There shouldn't be
Also, what is it in the logs that showed you that?
The output you posted in post 3, showed the state of the raid, the devices in the raid and the information contained in the conf file.
A better way of doing this is run the drives as individuals, one for data and one for backup, so you rsync the data drive to the second drive.
The purpose of raid is about availability, when your raid went into a degraded state you still had access to your data, but if both drives died simultaneously then you have nothing. That's why even with a raid setup a backup procedure is a must.
mdadm --add /dev/md0 /dev/sdd should add the missing drive back
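Spelled out as a sketch, with the device names from this thread; after the --add, cat /proc/mdstat shows the recovery line with the rebuild progress:

```shell
# Hedged sketch: re-add a missing member and watch the rebuild.
# Run as root on the real system.
readd_member() {
    mdadm --add "$1" "$2"   # e.g. readd_member /dev/md0 /dev/sdd
    cat /proc/mdstat        # the recovery line shows rebuild progress
}
```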
So is there no guide anywhere on how I can compile OMV 5 or 6 for the mipsel architecture?
Not on here, and there is nothing in the documentation. I believe most arm based boards are Raspbian, and they are covered by installing Raspbian then running the install script for OMV. Anything other than amd64 usually falls under Armbian.
Unless someone can help via github or the google discussion group.
When should I use linear and when stripe?
Why are you asking the question? You obviously understand there is a difference between the two, and a search would give the necessary information.
So to answer your question -> Never, especially if you don't care about your data.
Raid 0 -> Striped -> One drive fails you lose the lot.
Linear -> Groups drives together and data is allocated sequentially from one drive to the other, data recovery, possibly, but highly unlikely.
If your data is unimportant then either will do!!
I'm not sure why you believe this is spam, and as for 'spruiking', how do you believe Kickstarter programs get going? Considering the number of RPi users on this forum, it personally looks interesting from their perspective.
But why can't I do this over the web GUI from OMV5?
You could; however, I have had occasions where a user's drive does not show when attempting Recover on the menu, and the solution to that is to wipe the drive first. The CLI option will add it with a single command.
If this happens again then I would look to hardware for the cause.
it looks all good.
All that is telling you is there are no bad sectors detected on any of the drives; it doesn't tell you if there are any issues on the drive that was removed.
It should be possible to add the drive back to the array;
mdadm --add /dev/md127 /dev/sdb will add it back; cat /proc/mdstat will then display the rebuild.
Try mdadm --examine /dev/sdb
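What you are looking for in that output is the Events counter and the Array State line; the excerpt below is made up, just to show the idea:

```shell
# Hypothetical 'mdadm --examine /dev/sdb' excerpt. A member whose Events
# count is lower than the other members' has fallen out of sync, and the
# Array State line shows which slots mdadm last saw as active ('A') or
# missing ('.').
cat <<'EOF' > /tmp/examine.sample
/dev/sdb:
        Version : 1.2
     Array UUID : 11111111:22222222:33333333:44444444
          State : clean
         Events : 4321
    Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
EOF
grep -E 'Events|Array State' /tmp/examine.sample
```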
OK, from the output /dev/sdb has been ejected from the array by mdadm; that confirms the image in your first post. The question is why?
1) The drive has physically failed
2) The SATA cable connected to that drive is faulty
3) The port that drive is connected to is faulty
4) Power surge causing that drive to disconnect
Do you run regular SMART tests on your drives, even if it's only a short one?
Is the drive showing a red dot in the SMART settings?
Output of mdadm --detail /dev/sdb
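On the SMART question: a short self-test plus a look at the key attributes is usually enough. A sketch follows; the device name is an example and the attribute lines are made up:

```shell
# On the real system (as root):
#   smartctl -t short /dev/sdb    # kick off a short self-test
#   smartctl -a /dev/sdb          # view attributes and self-test results
# Hypothetical attribute excerpt; non-zero reallocated or pending sectors
# are the usual early signs of a failing drive.
cat <<'EOF' > /tmp/smart.sample
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
EOF
awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $NF}' /tmp/smart.sample
```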
Edit: If the array has connections to shares within a docker container, I would suggest stopping the container/s to reduce 'calls' to the array until it's rebuilt.
Yes I know that the current setup is generally frowned upon
It's never frowned upon; the question is why, when a simpler option is to use one drive for data and the second as a backup.
But before I do that I would like to run a check on my software raid1 mirror
Why? Are you experiencing file system problems? Otherwise this would be unnecessary.
or a snapraid setup
If your data files are 'in use' on a regular basis then snapraid is not an option.
Having established what the hardware is, surely the question would be how OMV was installed.
Is your system completely up to date?
There's no information about the system, but judging from the output of docker this is an RPi. So how are the drives powered? Are both drives connected to the USB3 ports? A simple test would be to shut down and move one of the drives to a USB2 port; does that solve the problem?
So far I created a RAID 5 in the VM and ripped out one of the drives (virtually of course). OMV recognized it and left a degraded RAID, as expected.
That's because it's in a VM; do that on a physical drive and the Raid will return as inactive, as mdadm does not recognise 'hot-swap' drives.