DegradedArray Event

  • I've enabled messaging and received this, but I cannot make sense of it. Is this something I need to be concerned about, maybe a disk failing?


  • Thanks for the reply, I've not ignored you - I haven't forwarded port 22 on the router, so I cannot access the server via the command line right now... Need to make a house visit.

  • Looks like the disk has died, so it's being shipped out for warranty replacement. Once I receive the replacement, what steps do I need to complete to add this drive back into the RAID? Thanks

  • Replace the drive, check which letter it gets assigned (sda, sdb, sdc, etc.) and substitute it for sdX in the code below (assuming sda is your system drive!):


    Code
    mdadm --stop /dev/md127                                               # stop the degraded array first
    mdadm --zero-superblock /dev/sdX                                      # clear any leftover RAID metadata from the replacement drive
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force   # reassemble from all member drives, accepting a degraded set
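
    Afterwards, the state of the rebuilt array can be checked with the standard md tools (a quick sketch, device names as elsewhere in this thread):

    Code
    cat /proc/mdstat            # overview: member list and sync/recovery progress
    mdadm --detail /dev/md127   # per-device state, spares, and overall array health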
  • It is; go to the RAID tab, expand the array, and choose the new drive once you've installed it.


    Not 100% sure this removes the failed drive from the array; I'd hope it does, otherwise it's a pretty pointless operation (if it doesn't, see the manual removal sketch below).
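
    If the GUI does not drop the failed member on its own, it can be removed by hand; a minimal sketch, assuming the device names shown later in this thread:

    Code
    mdadm --manage /dev/md127 --fail /dev/sdc     # mark the dying member as failed (if mdadm hasn't already)
    mdadm --manage /dev/md127 --remove /dev/sdc   # then remove it from the array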

  • After you have rebuilt your RAID, please check /etc/mdadm/mdadm.conf to make sure your spare settings are correct (the correct number of spare disks); otherwise you will keep getting messages from the system about missing spares. An example entry follows below.

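    A minimal sketch of what such an entry can look like, built from the array name and UUID shown later in this thread; the spares=1 count is an assumption to verify against your setup:

    Code
    # /etc/mdadm/mdadm.conf
    ARRAY /dev/md127 metadata=1.2 spares=1 name=N36L-OMV:OMVRaid5 UUID=860d598b:36d96569:15b715f8:06ffde88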

  • Within the GUI, I used Recover and selected the new drive, and it started the process. Great, I thought. I'm now receiving the following messages... I've only got access to the server via a mobile handset right now, so no way to access more info... have I done something wrong?


    This is an automatically generated mail message from mdadm
    running on OMV


    A DegradedArray event had been detected on md device /dev/md/OMVRaid5.


    Faithfully yours, etc.


    P.S. The /proc/mdstat file currently contains the following:


    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdd[4](S) sdb[0] sde[3] sdc[1](F)
    8790795264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [U__U]


    unused devices: <none>


    Looking at the RAID detail, it shows:


    Version : 1.2
    Creation Time : Tue May 28 15:18:10 2013
    Raid Level : raid5
    Array Size : 8790795264 (8383.56 GiB 9001.77 GB)
    Used Dev Size : 2930265088 (2794.52 GiB 3000.59 GB)
    Raid Devices : 4
    Total Devices : 4
    Persistence : Superblock is persistent


    Update Time : Mon Aug 18 13:20:37 2014
    State : clean, FAILED
    Active Devices : 2
    Working Devices : 3
    Failed Devices : 1
    Spare Devices : 1


    Layout : left-symmetric
    Chunk Size : 512K


    Name : N36L-OMV:OMVRaid5
    UUID : 860d598b:36d96569:15b715f8:06ffde88
    Events : 417048


    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    1 0 0 1 removed
    2 0 0 2 removed
    3 8 64 3 active sync /dev/sde


    1 8 32 - faulty spare /dev/sdc
    4 8 48 - spare /dev/sdd
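
    For reference, in the mdstat output above, [4/2] [U__U] means the array expects four devices but only two are up; (F) marks /dev/sdc as failed and (S) marks /dev/sdd as a spare that has not been synced in. The kernel log usually records why a recovery stopped (a quick sketch):

    Code
    dmesg | grep -i -E 'md127|raid'   # kernel messages around the aborted recovery
    cat /proc/mdstat                  # current member states and any running rebuild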

  • Here's one of the other messages I received:
    This email was generated by the smartd daemon running on:


    host name: OMV
    DNS domain: WORKGROUP
    NIS domain: (none)


    The following warning/error was logged by the smartd daemon:


    Device: /dev/disk/by-id/wwn-0x50014ee60338b828 [SAT], 5 Offline uncorrectable sectors



    For details see host's SYSLOG (default: /var/log/syslog).


    You can also use the smartctl utility for further investigation.
    No additional email messages about this problem will be sent.
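
    The flagged disk can be inspected directly; the by-id path from the mail works as the device argument (whether it maps to /dev/sdc here is an assumption):

    Code
    smartctl -a /dev/disk/by-id/wwn-0x50014ee60338b828                 # full SMART report for the flagged disk
    smartctl -A /dev/sdc | grep -i -E 'Reallocated|Pending|Uncorrect'  # the attributes that matter on a dying disk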

  • And another, while it was recovering the RAID...
    This is an automatically generated mail message from mdadm
    running on OMV


    A Fail event had been detected on md device /dev/md/OMVRaid5.


    It could be related to component device /dev/sdc.


    Faithfully yours, etc.


    P.S. The /proc/mdstat file currently contains the following:


    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdd[4] sdb[0] sde[3] sdc[1](F)
    8790795264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [U__U]
    [==================>..] recovery = 93.4% (2738177408/2930265088) finish=335.8min speed=9532K/sec


    unused devices: <none>
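
    A rebuild crawling along at ~9.5 MB/s while a member throws errors is worth keeping an eye on; progress can be followed without the GUI, e.g.:

    Code
    watch -n 60 cat /proc/mdstat   # refresh the recovery line once a minute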

  • Thanks... but it looks like it cannot find the command. From what I can see (remotely), the shares are not accessible.


    root@OMV:~# smartctl -a /dev/sdc
    -bash: smartctl: command not found
    root@OMV:~#

  • Weird... what's the output of


    'dpkg -l | grep smartmontools'
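
    If that shows nothing installed, smartctl simply isn't there; on OMV's Debian base it ships in the smartmontools package, so (a sketch, assuming working network access) it can be installed with:

    Code
    apt-get update && apt-get install smartmontools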


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Check the local console output of your NAS. Maybe it's hanging because of a missing drive.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Yeah, unfortunately I'm nowhere near the server right now, so it'll have to wait until next Monday. It's a headless unit, so I cannot even talk anyone through looking at it. Strange, since all I've done is replace the faulty drive and ask it to rebuild the RAID 5, and now this. Surely it would be pretty bad luck to get another faulty drive.
    Thanks for your help

  • So, home now and with physical access to the server; results of the command as follows:

