Posts by danieliod

    Hello,


    I'm desperate now. I need some serious help trying to recover my RAID. Please guide me through the steps:





    Hi,


    I got an email about a DegradedArray event.
    As you can see, it's expecting sdb as part of the RAID.


    On the other hand, in the System Info it seems that the drive is now sdf.


    Is there an easy way to fix it without formatting sdf and doing a full recovery?


    Thank you


    Code
    This is an automatically generated mail message from mdadm
    running on openmediavault

    A DegradedArray event had been detected on md device /dev/md127.

    Faithfully yours, etc.

    P.S. The /proc/mdstat file currently contains the following:

    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdb[3](F) sdd[2] sdc[1]
          5855716352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
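
    In case it helps, these are the read-only checks I believe would confirm whether sdf is the same physical disk that used to be sdb (just a sketch; the device names are from my setup):

    Code
    cat /proc/mdstat                    # current array state
    sudo mdadm --detail /dev/md127      # which members the array expects
    sudo mdadm --examine /dev/sdf       # shows the array UUID and slot if sdf was a member
    ls -l /dev/disk/by-id/ | grep sdf   # serial number, to match against the old sdb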


    Hi,


    For some reason, during the RAID repair my server shut down.
    When it booted up again, it gave an error regarding the RAID filesystem and asked for manual intervention.


    I executed:

    Code
    fsck.ext4 /dev/md127

    When it finished, it gave a warning that there were still file system issues.
    I ran it again and it gave the same error; I restarted and ran it again, and still the same error.
    Now every time I want to restart the server I need to press Ctrl+D.
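
    In case it matters, I assume the full forced check would look something like this (a sketch; it assumes the filesystem is ext4 and can be unmounted first):

    Code
    sudo umount /dev/md127            # the filesystem must not be mounted during the check
    sudo fsck.ext4 -f -y /dev/md127   # -f forces a full check, -y answers yes to every repair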


    How do I perform a file system check that will solve the issue?
    This is my boot log:


    This is the checkfs log:


    I also get SparesMissing emails:

    Code
    A SparesMissing event had been detected on md device /dev/md127.


    One more thing: how do I recover data from lost+found? I can't even access the folder.
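
    For the lost+found part, I assume the folder is only readable by root, so something like this (the mount point below is a placeholder for my actual path):

    Code
    sudo ls -la /srv/md127-mount/lost+found         # placeholder mount point
    sudo find /srv/md127-mount/lost+found -type f   # recovered files are named by inode number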


    Thank you

    Hi,


    I have followed the steps, and in step 2 I received the outcome below.
    Is it OK? Can I start the RAID recovery now?


    Code
    sudo dd if=/dev/zero of=/dev/sdb bs=512 count=10000
    [sudo] password for media:
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 4.99599 s, 1.0 MB/s
    $ sudo mdadm --zero-superblock /dev/sdb
    mdadm: Unrecognised md component device - /dev/sdb
    $ sudo mdadm --zero-superblock /dev/sdb
    mdadm: Unrecognised md component device - /dev/sdb
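
    If I understand the error correctly, "Unrecognised md component device" just means there is no md superblock left on sdb, which is what the wipe was supposed to achieve. I assume the next step to start the rebuild is something like (a sketch):

    Code
    sudo mdadm --manage /dev/md127 --add /dev/sdb   # re-add the wiped disk as a new member
    cat /proc/mdstat                                # the recovery percentage should show up here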

    Thank you very much for the prompt reply.
    It seems that one of the drives is out of date and was not picked up as part of the RAID.


    How do I check the progress?
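
    From what I can tell, the rebuild progress shows up in /proc/mdstat; I assume something like this is the way to watch it (a sketch):

    Code
    watch -n 5 cat /proc/mdstat      # refreshes the rebuild percentage every 5 seconds
    sudo mdadm --detail /dev/md127   # also prints a "Rebuild Status : NN% complete" line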



    Updated system info:

    Hi,


    I shut down my server and reseated the SATA cables, and then I saw all my drives again.
    I ran both commands from above and received the following:


    Code
    mdadm: stopped /dev/md127
    sudo mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcd]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 2.
    mdadm: no uptodate device for slot 0 of /dev/md127
    mdadm: no uptodate device for slot 1 of /dev/md127
    mdadm: added /dev/sdd to /dev/md127 as 2
    mdadm: /dev/md127 assembled from 1 drive - not enough to start the array.


    I looked at the System Info and saw that I suddenly also have an md126, which is inactive.


    What action should I take in order to preserve my data?
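
    Before anything destructive, I assume the safe read-only step is to stop the stray md126 and compare the event counts on each member, something like (a sketch):

    Code
    sudo mdadm --stop /dev/md126   # releases sdb/sdc so they are no longer "busy"
    sudo mdadm --examine /dev/sd[bcd] | grep -E 'Events|Device Role|State'
    # then retry: sudo mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcd]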


    Thank you

    Hello,


    Today I received a notification from my OMV server that 2 drives failed.
    The RAID status in the WebGUI shows "clean, FAILED" and only drive /dev/sdd is listed.


    This is the info:


    cat /etc/mdadm/mdadm.conf

    Code
    ARRAY /dev/md/NAS metadata=1.2 spares=1 name=openmediavault:NAS UUID=332d8084:c2b3a139:44a4f8e1:6865cc49


    Please let me know what more info I need to provide.
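
    In the meantime, these are the diagnostics I can post; as far as I know they are what's usually requested in these threads:

    Code
    cat /proc/mdstat                 # current array state
    sudo blkid                       # filesystem and partition UUIDs
    sudo fdisk -l                    # disks and partition tables
    sudo mdadm --detail /dev/md127   # full array status and member list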


    Thank you

    Hi,

    I have added the new drive and completed the RAID sync.
    Should I now change spares=1 to spares=0, or remove spares=1 entirely?
    This is my current configuration:

    Code
    # definitions of existing MD arrays
    ARRAY /dev/md/NAS metadata=1.2 spares=1 name=openmediavault:NAS UUID=332d8084:c2b3a139:44a4f8e1:6865cc49
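
    If editing the number by hand is wrong, I guess the safer alternative is to regenerate the line from the running array, something like (a sketch):

    Code
    sudo mdadm --detail --scan   # prints the current ARRAY line, without the stale spares=1
    # replace the ARRAY line in /etc/mdadm/mdadm.conf with that output, then:
    sudo update-initramfs -u     # so the initramfs picks up the new config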

    Thank you for the detailed info; it's very appreciated.
    I have replaced the SATA cable and the drive booted up correctly.
    Then I wiped it via the OMV GUI, restarted, and started the RAID recovery.
    It's running now; let's hope it lasts long enough...
    I will update once it's finished.


    Thank you