Mirror RAID clean, degraded

  • Hello,


    I have an issue with my mirror RAID in my OMV install. I lost power about a week ago, and when I logged in to OMV I noticed the RAID said clean/degraded. I checked and both drives are showing "green" under SMART. I am using two HITACHI 0F14683 Ultrastar A7K4000 4TB 7200 RPM 64MB cache SATA 6.0Gb/s 3.5" internal hard drives. I have read some other posts, but they just confused me more. I do have replacement drives on hand if needed. Hopefully not.


    Any help from people more knowledgeable would be great. Not sure how to insert the commands I ran, so they are attached.


    Thanks!



  • Thanks for the reply and the explanation on how to insert the code.

    Code
    root@openmediavault:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sda[0]
    3906887488 blocks super 1.2 [2/1] [U_]
    bitmap: 26/30 pages [104KB], 65536KB chunk
    unused devices: <none>
    Code
    root@openmediavault:~# blkid
    /dev/sdb1: UUID="67f8957f-4256-43ef-a546-498710a572b3" TYPE="ext4" PARTUUID="951c56cc-01"
    /dev/sdb5: UUID="e9fbcafa-57d3-4b1f-84fd-3817730f2d34" TYPE="swap" PARTUUID="951c56cc-05"
    /dev/sda: UUID="1ac92da3-fccb-cadf-9c4a-9dc00083c1ca" UUID_SUB="1ba3b553-71cb-9121-72d4-7f7d9d4095d9" LABEL="openmediavault:0" TYPE="linux_raid_member"
    /dev/md0: LABEL="RAID" UUID="012310dd-0a1d-4e68-bf83-079109e5f9ec" TYPE="ext4"
    /dev/sdd: UUID="1ac92da3-fccb-cadf-9c4a-9dc00083c1ca" UUID_SUB="5c462bbe-2613-4aa3-9102-cb1917d7df2d" LABEL="openmediavault:0" TYPE="linux_raid_member"
    /dev/sdc1: UUID="8A58FCA958FC9563" TYPE="ntfs" PARTUUID="95cc0009-01"
    Code
    root@openmediavault:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=openmediavault:0 UUID=1ac92da3:fccbcadf:9c4a9dc0:0083c1ca
    devices=/dev/sda
  • Thanks geaves!


    Is there anything I need to do before or after running that command?


    Also, what is it in the logs that showed you that? I looked at them, but I am still new to this part of OMV.


    Thanks again for the quick reply.

  • Is there anything I need to do before or after running that command

    There shouldn't be.

    Also, what is it in the logs that showed you that

    The output you posted in post 3 showed the state of the raid, the devices in the raid, and the information contained in the conf file.
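
    For anyone landing here later: the recovery command referred to in this thread (quoted verbatim in a later post) is mdadm --add. A sketch, printed as text rather than executed; the device names are the ones from the outputs above, so confirm yours with blkid first.

```shell
# Recovery sketch: /dev/md0 and /dev/sdd are the names from this thread's
# outputs -- check your own system with blkid before running anything.
fix_cmd="mdadm --add /dev/md0 /dev/sdd"   # re-adds the member, starts a resync
watch_cmd="cat /proc/mdstat"              # repeat until the array shows [UU]
printf '%s\n' "$fix_cmd" "$watch_cmd"
```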


    A better way of doing this is to run the drives as individuals, one for data and one for backup, and rsync the data drive to the second drive.


    The purpose of raid is availability: when your raid went into a degraded state you still had access to your data, but if both drives had died simultaneously you would have nothing. That's why even with a raid setup a backup procedure is a must.
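
    A minimal sketch of that data-to-backup rsync idea. The /srv/dev-disk-by-label-* paths are assumptions based on how OMV usually mounts filesystems; substitute your own mount points.

```shell
# Mirror the data drive onto the backup drive. Paths are assumptions;
# OMV typically mounts drives under /srv/dev-disk-by-label-<LABEL>.
src=/srv/dev-disk-by-label-data/       # trailing slash: copy contents, not the dir
dst=/srv/dev-disk-by-label-backup/
if [ -d "$src" ] && [ -d "$dst" ]; then
    rsync -avn --delete "$src" "$dst"  # -n = dry run; drop it once you are happy
else
    echo "adjust src/dst to your own mount points first"
fi
```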

    Raid is not a backup! Would you go skydiving without a parachute?

  • That command did the job! My raid is back to normal. I looked again at the logs I posted, but I still can't figure out what it is you saw.


    As far as backing up: if I were to add another drive, not part of the raid obviously, I should be able to use rsync to back it up, correct?


    Thank you again for all your help!

  • As far as backing up, if I were to add another drive, not part of the raid obviously, I should be able to use rsync to back it up correct

    Yes, it's what I do; I back up all my shares to a single drive in my server.


  • The first one, cat /proc/mdstat, gives the raid reference; whether the raid is active, active (auto-read-only), or inactive; the raid type, i.e. raid1, raid5, raid6, etc.; and the drives active within the raid.


    So from your output;


    raid reference = md0

    state of raid = active

    raid type = raid1

    drives = /dev/sda


    The above told me a drive was missing.
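
    Those fields can be read straight off the status line posted above:

```shell
# Split the mdstat status line from the earlier post into its fields.
mdstat_line='md0 : active raid1 sda[0]'
set -f                    # disable globbing while we word-split sda[0]
set -- $mdstat_line
set +f
ref=$1; state=$3; level=$4; members=$5
echo "reference=$ref state=$state type=$level drives=$members"
# Only one member (sda) where a two-drive mirror expects two -> degraded.
```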


    blkid from the man pages -> command line utility to locate/print block device attributes

    This is important as it gives information on TYPE, which tells you the file system type.


    So from your output;


    /dev/sdd was the missing drive from your array
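
    Concretely: filter the posted blkid output for TYPE="linux_raid_member" and compare against mdstat, which listed only sda.

```shell
# The blkid lines from the earlier post, trimmed to device and TYPE.
blkid_out='/dev/sda: TYPE="linux_raid_member"
/dev/md0: TYPE="ext4"
/dev/sdd: TYPE="linux_raid_member"'
members=$(printf '%s\n' "$blkid_out" | grep linux_raid_member | cut -d: -f1)
echo "$members"
# sda and sdd both carry raid metadata; mdstat showed only sda,
# so sdd is the member that dropped out of the array.
```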


    cat /etc/mdadm/mdadm.conf


    This gives the configuration of the array stored in the mdadm conf file


    fdisk


    Lists information about the drives


    mdadm --examine


    This will confirm the output from mdstat; most of the time I don't use it, as mdadm --detail gives more information
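
    For example (read-only, and guarded here so it only runs where mdadm and the array actually exist):

```shell
# Inspect the array and one member. Read-only; device names are from this thread.
detail_target=/dev/md0
if command -v mdadm >/dev/null 2>&1 && [ -b "$detail_target" ]; then
    mdadm --detail "$detail_target"   # state, member list, event counts
    mdadm --examine /dev/sda          # per-member superblock view
else
    echo "no mdadm array here; run this on the OMV box"
fi
```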


    Rather than using the command line, it might have worked to use the GUI by selecting Recover on the menu under Raid Management. This sometimes works, but most of the time it doesn't.


    If the output from mdstat had shown the array as inactive, the array would not be listed in blkid, and that would have meant running --assemble to re-assemble the array
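
    A sketch of that inactive case, printed as text rather than executed (not needed in this thread, where the array stayed active/degraded; the member names are the ones from the posts above):

```shell
# Assemble sketch for an inactive array -- printed, not run.
stop_cmd="mdadm --stop /dev/md0"                            # release the array first
assemble_cmd="mdadm --assemble /dev/md0 /dev/sda /dev/sdd"  # then bring it back up
printf '%s\n' "$stop_cmd" "$assemble_cmd"
```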


  • Hi geaves, I have the same issue as tazzz013, but now I cannot recover the RAID using the command "mdadm --add /dev/md0 /dev/sdd" because it is telling me that the device is busy.


    This RAID is giving me so many issues. I would prefer to move to your idea, geaves, of using the two disks as individuals and using rsync to keep backup data on the second drive.

    Can you help me to move from RAID 1 to individual disk without losing any data or OMV configuration?


    Many thanks.


    Regards,

  • Can you help me to move from RAID 1 to individual disk without losing any data or OMV configuration

    I read your previous thread here; your last post would suggest there may be a hardware issue, either the sata cable or the sata port the drive is connected to.


    But to answer your question: under Raid Management select the array, then click Remove on the menu and remove one of the drives from the array; this will leave the array in a clean/degraded state.


    You will then need to wipe the drive you have removed, then create a file system.
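
    The command-line equivalent of those steps, shown as a printed sketch rather than executed (sdd and the "backup" label are assumptions; the GUI route described above is the supported one):

```shell
# Remove a member, wipe it, and give it a fresh filesystem -- printed, not run.
remove_cmd="mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd"
wipe_cmd="wipefs -a /dev/sdd"              # clears the old raid superblock
mkfs_cmd="mkfs.ext4 -L backup /dev/sdd"    # the label 'backup' is an assumption
printf '%s\n' "$remove_cmd" "$wipe_cmd" "$mkfs_cmd"
```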


    Before you do any of that, download and read this; there is a section regarding rsync.


  • Many thanks geaves. How then can I change from a degraded raid to a single disk without losing data or configuration? Now my Nextcloud sync is not working due to the degraded raid.


    Many thanks.


    Regards,

  • How then I can change from degraded raid to single disk without losing data or configuration

    You won't lose data, but you will have to reconfigure; all your shares are on your Raid, which is why I suggested looking at the hardware first.

