Posts by geaves

    The first one, cat /proc/mdstat, gives the raid reference, whether the raid is active, active (auto-read-only) or inactive, the raid type i.e. raid1, raid5, raid6 etc., and the drives active within the raid.


    So from your output;


    raid reference = md0

    state of raid = active

    raid type = raid1

    drives = /dev/sda


    The above told me a drive was missing
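
    For comparison, a degraded two drive mirror in /proc/mdstat looks something like this (block count made up); the [2/1] [U_] at the end is the giveaway that only one of the two members is present:


    md0 : active raid1 sda[0]
          976630464 blocks super 1.2 [2/1] [U_]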


    blkid from the man pages -> command line utility to locate/print block device attributes

    This is important as it will give information on TYPE, which tells you the file system type
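
    On a healthy mirror both members show up as linux_raid_member and the array itself shows the file system, something like this (device names and UUIDs made up):


    /dev/sda: UUID="xxxx-xxxx" TYPE="linux_raid_member"
    /dev/sdb: UUID="xxxx-xxxx" TYPE="linux_raid_member"
    /dev/md0: LABEL="data" UUID="yyyy-yyyy" TYPE="ext4"


    A drive the system can no longer see won't appear in that list at all.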


    So from your output;


    /dev/sdd was the missing drive from your array


    cat /etc/mdadm/mdadm.conf


    This gives the configuration of the array as stored in the mdadm conf file
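
    The part that matters is the ARRAY line; a typical entry looks something like this (name and UUID made up):


    ARRAY /dev/md0 metadata=1.2 name=openmediavault:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd


    mdadm uses that UUID, not the device names, to decide which drives belong to the array.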


    fdisk


    Lists information about the drives
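
    The usual invocation is simply fdisk -l as root, which prints each drive's size and partition layout, or you can point it at a single drive:


    fdisk -l /dev/sda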


    mdadm --examine


    This will confirm the output from mdstat; most of the time I don't use this, as mdadm --detail will give more information
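
    In case the distinction helps (device names illustrative):


    mdadm --examine /dev/sda -> reads the raid superblock on an individual drive
    mdadm --detail /dev/md0 -> reports on the assembled array as a whole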


    Rather than using the command line, it may have worked using the GUI by selecting Recover on the menu under Raid Management. This sometimes works, but most of the time it doesn't.


    If the output from mdstat had shown the array as inactive, the array would not have been listed in blkid, and that would have meant running --assemble to reassemble the array
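
    For reference, assembling is normally a one liner, either letting mdadm work from the conf file or naming the devices (names as per your output):


    mdadm --assemble --scan
    mdadm --assemble /dev/md0 /dev/sda /dev/sdd


    --force can be added if a plain assemble refuses because the drives' event counts are out of step.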

    Is there anything I need to do before or after running that command

    There shouldn't be

    Also, what is it in the logs that showed you that

    The output you posted in post 3 showed the state of the raid, the devices in the raid and the information contained in the conf file.


    A better way of doing this is to run the drives as individuals, one for data and one for backup, and rsync the data drive to the second drive.
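
    As a rough sketch, assuming the drives are mounted under OMV's usual /srv/dev-disk-by-xxx paths (yours will differ), the backup is a single rsync run that can be scheduled from Scheduled Jobs or cron:


    rsync -av --delete /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-backup/


    --delete keeps the backup an exact mirror, so leave it off if you want deleted files to survive on the second drive.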


    The purpose of raid is availability: when your raid went into a degraded state you still had access to your data, but if both drives had died simultaneously you would have nothing. That's why, even with a raid setup, a backup procedure is a must.

    So is there nowhere a guide how i can compile OMV 5 or 6 for mipsel architecture

    Not on here, and there is nothing in the documentation. I believe most arm based boards run Raspbian, and they are covered by installing Raspbian then running the install script for OMV. Anything else other than amd64 usually falls under Armbian.


    Unless someone can help via GitHub or the Google discussion group.

    It's possible or not? And how to do it?

    Having read through this thread and done some research, you're not going to find your answer in here. I would start here; the compilation of the software required is also specific to the hardware.

    when I should use linear and when stripe

    Why are you asking the question? You obviously understand there is a difference between the two, and a search would give the necessary information.


    So to answer your question -> Never, unless you don't care about your data.


    Raid 0 -> Striped -> One drive fails, you lose the lot.


    Linear -> Groups drives together and data is allocated sequentially from one drive to the next; data recovery is possible, but highly unlikely.


    If your data is unimportant then either will do!!

    But why can't i do this over web-gui from omv5

    You could; however, I have had occasions where a user's drive does not show when attempting Recover on the menu, and the solution to that is to wipe the drive first. The cli option will add it with a single command.
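
    For anyone following along, the single cli command takes the form (substitute the real array and device names):


    mdadm --add /dev/mdX /dev/sdX


    If the GUI route is preferred but the drive doesn't show, wiping its old signatures first (wipefs -a /dev/sdX, or the wipe option under Storage | Disks in the GUI) is what brings it back into the Recover list, provided there is nothing on that drive you need.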


    If this happens again then I would look to hardware for the cause.

    it looks all good.

    All green

    All that is telling you is that there are no bad sectors detected on any of the drives; it doesn't tell you if there are any issues on the drive that was removed.


    It should be possible to add the drive back to the array;


    mdadm --add /dev/md127 /dev/sdb then cat /proc/mdstat will display the rebuild progress
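
    It will look something along these lines while it rebuilds (figures made up):


    md127 : active raid1 sdb[2] sda[0]
          976630464 blocks super 1.2 [2/1] [U_]
          [===>................]  recovery = 18.4% (180000000/976630464) finish=79.5min speed=167000K/sec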

    OK, from the output /dev/sdb has been ejected from the array by mdadm; that confirms the image in your first post. The question is why?


    1) The drive has physically failed

    2) The sata cable connected to that drive is faulty

    3) The port that drive is connected to is faulty

    4) Power surge causing that drive to disconnect


    Do you run regular SMART tests on your drives, even if it's only a short one?

    Is the drive showing a red dot in the smart settings?


    Output of mdadm --examine /dev/sdb


    Edit: If the array has connections to shares within a docker container, I would suggest stopping the container/s to reduce 'calls' to the array until it's rebuilt.

    Yes I know that the current setup is generally frowned upon

    It's never frowned upon; the question is why, when a simpler option is to use one drive for data and the second as a backup.

    But before I do that I would like to run a check on my software raid1 mirror

    Why? Are you experiencing file system problems? If not, this would be unnecessary

    or a snapraid setup

    If your data files are 'in use' on a regular basis then snapraid is not an option.