RAID 5 Missing - need help for rebuild

  • Hello guys,


    I've spent the last two hours reading and trying to get to grips with my RAID5 problem, but I was afraid of doing more damage than good.
    I'm running a 4x 3 TB RAID5 that has run great for the last few months.
    Two days ago it just disappeared from the Web UI.
    I then checked my connections and started the server back up again.
    Now I can see all my drives again, but the RAID and the filesystem (ext4) are still "Missing".


    I think it's possible that some SATA connections were cut off by weak connectors while the system was running.


    I'm hoping for the best, that I can get my RAID back and that you guys can help me out.


    Here is my log ID: 8PXR9KwV


    Thanks in advance!

  • I've tried it again with the following popup:

  • blkid:

    Code
    /dev/sda: UUID="60064c68-bfb1-2ec1-ddbe-f232d344e5dd" UUID_SUB="e9b6baef-5d97-6040-ed7e-615dc6d35ce3" LABEL="omv:eraid5" TYPE="linux_raid_member"
    /dev/sdc: UUID="60064c68-bfb1-2ec1-ddbe-f232d344e5dd" UUID_SUB="c36c760a-e06f-3b29-f802-623e82ff21c8" LABEL="omv:eraid5" TYPE="linux_raid_member"
    /dev/sdb: UUID="60064c68-bfb1-2ec1-ddbe-f232d344e5dd" UUID_SUB="d0522415-14c4-149a-a2a7-013672fc8f6e" LABEL="omv:eraid5" TYPE="linux_raid_member"
    /dev/sdd: UUID="60064c68-bfb1-2ec1-ddbe-f232d344e5dd" UUID_SUB="9c680d8d-7c2d-f849-95f3-74d14001779f" LABEL="omv:eraid5" TYPE="linux_raid_member"
    /dev/sde1: UUID="afa8270c-6e80-432e-900e-ed5d7b395d6c" TYPE="ext4"
    /dev/sde5: UUID="c49b3b4f-c9d2-4cd0-9cf7-a9abcfbcf6c6" TYPE="swap"



    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>
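
    Since all four disks still show up as linux_raid_member but nothing gets assembled, the md superblocks on the members can be inspected directly (a sketch, with the device letters taken from the blkid output above):

    Code
    mdadm --examine /dev/sd[abcd]      # array UUID, device role and event counter of each member
    mdadm --examine --scan --verbose   # one ARRAY line per array found in the on-disk superblocks
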
    • Official post

    No, I was sleeping :)


    I would try: mdadm --assemble /dev/md127 /dev/sd[abcd] --verbose --force
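
    Spelled out as a block, with a quick check afterwards (drive letters assumed to still be sda-sdd, as in the blkid output):

    Code
    # if mdadm reports the members as "busy", a stale inactive array is still holding them;
    # stop that array first with: mdadm --stop /dev/mdX
    mdadm --assemble /dev/md127 /dev/sd[abcd] --verbose --force
    cat /proc/mdstat    # md127 should now appear and may start resyncing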

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    An mdadm array can become disassembled for lots of reasons. I'm not sure why in this case.


    • Official post

    That is nonsense. I have been using it for years and years with no issues, even with large drives.


    • Official post

    You are missing the point. raid 5 is not a backup, and too many people think it is. It is about keeping your system running (availability) when one drive fails. That gives you time to sync your backup, and a URE (unrecoverable read error) should not affect that situation.


    Yes, I agree that with bigger drives you might hit a URE while rebuilding. But if one drive is starting to fail and all of your drives are the same age, it is probably time to replace them all. You should still have a full backup.


    And I also agree that many people on this forum do not need the availability of raid 5 and really just want pooling. That is why we have aufs, mhddfs, greyhole, snapraid, etc. to give people a lot of the raid 5 features without the "danger" of raid 5.


  • I never said RAID is a backup. But if your RAID fails, you are most likely going to lose data unless a backup was made just before it failed.
    I see no reason to use RAID5 these days. HDDs are cheap and RAID5 is slow.

    • Official post

    I know you didn't say raid was a backup, but I thought I explained why that article is pointless. How often do you have two drives fail at the same time? If only one fails, your array keeps working just fine. If you say two drives fail at the same time quite often, then even a raid 1 mirror or a single drive with a single backup is dangerous.


    Why would you use raid 5?? Pooling with a little bit of protection. My server at home is an 8-drive raid 5 (2 TB drives). It backs up nightly, and it has been working flawlessly for the last four years.


    Slow?? I can write to my array at 320 MB/s. Show me how to beat that when you need 14 TB of storage without spending more money.
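
    For anyone who wants to compare numbers, a rough sequential-write test against the mounted array looks something like this (the mount point /srv/raid5 is only an example):

    Code
    # write 4 GiB and flush to disk before reporting the rate
    dd if=/dev/zero of=/srv/raid5/ddtest bs=1M count=4096 conv=fdatasync
    rm /srv/raid5/ddtest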


  • Don't worry too much about that, ryecoaaron. I have been in IT since 1992, and every day another prophet comes out of the desert flooding the world with his theories. If we all believed and trusted those theories and assumptions, the whole world would have committed suicide right before December 21, 2012.
    Of course there is a risk of losing data stored on a raid when a disk fails, but that is common to all kinds of raid. My mission-critical servers, even the ESXi hosts, are running on raid 10. If two disks of the same stripe fail at the same time, the whole raid will be gone.
    My home system stores its data on a raid-5, and I nonetheless sleep very well, because I always have a backup.

    Homebox: Bitfenix Prodigy Case, ASUS E45M1-I DELUXE ITX, 8GB RAM, 5x 4TB HGST Raid-5 Data, 1x 320GB 2,5" WD Bootdrive via eSATA from the backside
    Companybox 1: Standard Midi-Tower, Intel S3420 MoBo, Xeon 3450 CPU, 16GB RAM, 5x 2TB Seagate Data, 1x 80GB Samsung Bootdrive - testing for iSCSI to ESXi-Hosts
    Companybox 2: 19" Rackservercase 4HE, Intel S975XBX2 MoBo, C2D@2200MHz, 8GB RAM, HP P212 Raidcontroller, 4x 1TB WD Raid-0 Data, 80GB Samsung Bootdrive, Intel 1000Pro DualPort (Bonded in a VLAN) - Temp-NFS-storage for ESXi-Hosts

  • Today I had the same situation again, but with more details:


    The server did some unzipping and moving of files while I was out.
    Then monit sent two emails because of my loadavg level:


    Code
    Date:        Sat, 22 Nov 2014 19:40:37
    Action:      alert
    Host:        omv.local
    Description: loadavg(5min) of 6.3 matches resource limit [loadavg(5min)>4.0]


    and

    Code
    Date:        Sat, 22 Nov 2014 19:40:38
    Action:      alert
    Host:        omv.local
    Description: loadavg(1min) of 12.8 matches resource limit [loadavg(1min)>8.0]


    When I arrived back home, the RAID5 was gone.
    So I followed your instructions like last time:


    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4]
    md126 : inactive sdc[4](S) sdb[3](S) sdd[2](S)
          8790796680 blocks super 1.2
    
    
    md127 : inactive sda[5]
          2930265560 blocks super 1.2
    
    
    unused devices: <none>


    blkid

    Code
    /dev/sda: UUID="60064c68-bfb1-2ec1-ddbe-f232d344e5dd" UUID_SUB="e9b6baef-5d97-6040-ed7e-615dc6d35ce3" LABEL="omv:eraid5" TYPE="linux_raid_member"
    /dev/sdb: UUID="60064c68-bfb1-2ec1-ddbe-f232d344e5dd" UUID_SUB="d0522415-14c4-149a-a2a7-013672fc8f6e" LABEL="omv:eraid5" TYPE="linux_raid_member"
    /dev/sdc: UUID="60064c68-bfb1-2ec1-ddbe-f232d344e5dd" UUID_SUB="c36c760a-e06f-3b29-f802-623e82ff21c8" LABEL="omv:eraid5" TYPE="linux_raid_member"
    /dev/sdd: UUID="60064c68-bfb1-2ec1-ddbe-f232d344e5dd" UUID_SUB="9c680d8d-7c2d-f849-95f3-74d14001779f" LABEL="omv:eraid5" TYPE="linux_raid_member"
    /dev/sde1: UUID="afa8270c-6e80-432e-900e-ed5d7b395d6c" TYPE="ext4"
    /dev/sde5: UUID="c49b3b4f-c9d2-4cd0-9cf7-a9abcfbcf6c6" TYPE="swap"


    Then I tried ryecoaaron's command to reassemble md127, but it didn't work:

    Code
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is busy - skipping
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is busy - skipping


    I was wondering why md126 is listed there.
    Was sda kicked out of md127?
    Or what else is wrong here?
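
    Since the members were apparently tied up by those two inactive arrays, stopping both and then force-assembling all four disks as one array again looked like the way out, roughly like this (device names as in the blkid output above):

    Code
    mdadm --stop /dev/md126
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/sd[abcd] --verbose --force
    cat /proc/mdstat    # should now show md127 with all four members, possibly resyncing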


    So I stopped md126 and md127 and started the reassembly again, with the following output:


    The RAID5 is now back up, but it needs to resynchronize itself after I mounted it in OMV.
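
    The resync progress can be watched from the shell, e.g.:

    Code
    watch -n 60 cat /proc/mdstat    # rebuild percentage and estimated finish time
    mdadm --detail /dev/md127       # array state, sync progress and per-disk status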


    Is there something I did wrong, or is some of my hardware faulty?
    Maybe one of my SATA cables is cheap and flaky?


    Ah, and by the way, I donated and received a neat little thank-you mail from Volker :thumbup:
