Broken Raid

  • Hi guys,


    Just want to put all my cards on the table and say that I am definitely a noob on Debian.
    But I've been using OMV for a few years and I must say it's an excellent open-source NAS O/S compared to a lot of the others out there.


    Anyway, one day after rebooting the NAS, the O/S wouldn't start.
    So I took the liberty of re-installing the O/S, with all of the HDDs disconnected while reloading it.
    However, now that the O/S is back, I'm unable to see the RAID at all.


    I had a similar issue last year and was able to bring it all back after a reload, but this time I'm not having such luck.


    Anyway, I've googled around and I'm still stuck getting the RAID back, so now I'm turning to this forum. Any help to get this back is much appreciated!


    I'm not sure if this is a good start, but here's the output of cat /proc/mdstat:
    ===============================================================================
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sda[0](S) sdd[3](S) sdc[4](S) sdb[1](S)
    7814054240 blocks super 1.2


    unused devices: <none>
    ===============================================================================


    Please help. Thanks

  • Mr Blindguy.
    If I were within close proximity I might just give you a big smooch!!!
    Thank you for this, as it worked like a charm! I've been googling nonstop and couldn't believe it could be fixed with such simple commands!!!
    I cannot thank you enough, as you have saved my family photos from being formatted (I was that close to doing it).
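
    For anyone who finds this thread later: the usual way an inactive md array like the one in my first post gets brought back is roughly the following. This is only a sketch (the device names are examples), not necessarily the exact commands that fixed it here.

    Code
    # stop the half-assembled, inactive array
    mdadm --stop /dev/md127
    # let mdadm re-assemble it from the superblocks on the member disks
    mdadm --assemble --scan
    # or name the members explicitly
    mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd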


    Anyhoo, I have another question which hopefully will be the last one.


    I have just realised that 3 out of the 4 disks were made into a RAID 5 and the last one was only a RAID 0.
    Just wondering if there's a simple command to mount this?


    Cheers

  • Glad to help.


    Be sure to copy all data off the RAID0 first.
    I would assume that you could delete that RAID0 filesystem, delete the array, then just add that drive to the RAID5 array and grow the RAID5 array.


    I think all this can be done via the OMV GUI.
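
    If you'd rather do it from the CLI, the steps would look roughly like this. Just a sketch, not something to paste blindly: /dev/mdX stands in for the RAID0 array and /dev/sdX for its disk, both of which are assumptions here.

    Code
    # after all data is copied off: stop the old RAID0 and wipe its superblock
    mdadm --stop /dev/mdX
    mdadm --zero-superblock /dev/sdX
    # add the freed disk to the RAID5 and grow the array to 4 devices
    mdadm /dev/md127 --add /dev/sdX
    mdadm --grow /dev/md127 --raid-devices=4
    # once the reshape finishes, grow the filesystem, e.g. for ext4:
    resize2fs /dev/md127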

  • I think I must not have explained myself properly.
    I guess what I'm trying to do is just mount that HDD as RAID 0, because I had some stuff on it.
    I'm not trying to expand or anything, I'm just trying to mount it as is so I can access the files.
    But even though the GUI recognises the HDD physically, it's unable to mount the volume, if I'm making sense?

    • Official Post

    What is the output of:


    blkid
    cat /proc/mdstat

    omv 7.1.0-2 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.2 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.5 | scripts 7.0.7


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hi ryecoaaron,


    Here 'tis


    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]



    md127 : active raid5 sda[0] sdd[3] sdb[1]
    5860538880 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]


    unused devices: <none>
    root@openmediavault:~#



    I just checked the total capacity in the GUI and it's 5.46T, and I have 4 x 2TB in RAID 5.
    Based on a RAID calculation I did, it looks like all 4 HDDs were set to RAID 5. (Sorry, I'm getting old; my memory told me I had 3 on RAID 5 and 1 on RAID 0, so it looks like I might have changed it to all 4 HDDs on RAID 5.)

    • Official Post

    What about the output from blkid? The fourth drive is not a member of any array according to mdadm. If a filesystem shows up in blkid for the fourth drive, you will be able to mount it. Otherwise, you will need a recovery tool like photorec/testdisk to get files from the drive.
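
    Roughly, that check would look like this on the CLI. A sketch only: the device name (/dev/sdc) and the mount point are assumptions.

    Code
    # does the fourth drive carry a recognisable filesystem?
    blkid /dev/sdc
    # if it shows e.g. TYPE="ext4", try mounting it read-only
    mkdir -p /mnt/olddisk
    mount -o ro /dev/sdc /mnt/olddisk
    # if blkid shows nothing, fall back to testdisk/photorec
    apt-get install testdisk
    testdisk /dev/sdc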


  • According to the GUI the state is degraded.
    Is there any way to fix this, or do I just have to replace sdc?



    Details of the RAID
    ========================================================
    Version : 1.2
    Creation Time : Thu Feb 28 16:45:27 2013
    Raid Level : raid5
    Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
    Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
    Raid Devices : 4
    Total Devices : 3
    Persistence : Superblock is persistent


    Update Time : Tue Mar 11 22:20:33 2014
    State : clean, degraded
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-symmetric
    Chunk Size : 512K


    Name : openmediavault:DATA (local to host openmediavault)
    UUID : 7ce3434f:3fa4cb2a:6ebc2837:96aceefc
    Events : 114004


    Number Major Minor RaidDevice State
    0 8 0 0 active sync /dev/sda
    1 8 16 1 active sync /dev/sdb
    2 0 0 2 removed
    3 8 48 3 active sync /dev/sdd


    ================================================================
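
    For reference, re-adding a missing disk to a degraded md array generally looks like this, assuming the disk itself is healthy. A sketch only, not a prescription; check the disk first.

    Code
    # check the disk's SMART health before trusting it again (smartmontools package)
    smartctl -a /dev/sdc
    # if it looks OK, add it back so the array rebuilds onto it
    mdadm /dev/md127 --add /dev/sdc
    # the rebuild progress then shows up in /proc/mdstat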

  • Hi All,


    I'm a bit further now: I'm able to bring the old RAID array back and I can see it in the GUI.
    However, under File Systems I can see /dev/md127 in the Device column, and the Mounted column says No.
    But if I try to mount it, I get this error:


    Error #3005:
    exception 'OMVException' with message 'The configuration object is not unique: A mount point already exists for the given filesystem' in /var/www/openmediavault/rpc/filesystemmgmt.inc:682
    Stack trace:
    #0 [internal function]: FileSystemMgmtRpc->mount(Array)
    #1 /usr/share/php/openmediavault/rpc.inc(265): call_user_func_array(Array, Array)
    #2 /usr/share/php/openmediavault/rpc.inc(98): OMVRpc::exec('FileSystemMgmt', 'mount', Array)
    #3 /var/www/openmediavault/rpc.php(44): OMVJsonRpcServer->handle()


    Is there any way to force-unmount this and then remount it properly?


    Thanks

  • Try

    Code
    mount -a


    via CLI.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Edit /etc/openmediavault/config.xml manually (make a backup first!). Remove all entries for the particular UUID of your RAID.
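
    Concretely, something like this. Just a sketch; the UUID below is a placeholder for the one belonging to your RAID's filesystem.

    Code
    # keep a backup before touching anything
    cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak
    # find the stale entries that reference the old filesystem UUID
    grep -n 'PUT-YOUR-UUID-HERE' /etc/openmediavault/config.xml
    # delete those entries (on OMV the mount entries are typically <mntent> blocks) with your editor
    nano /etc/openmediavault/config.xml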


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Shouldn't hurt the recovery process if you do it right now.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Mate,


    You are a freakin' genius!!!
    Thank you, you are a star!


    Lastly, I can see that one of my HDDs is in degraded status. Is there a quick way through the CLI to confirm which disk is degraded?

  • Code
    cat /proc/mdstat


    ?
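
    If mdstat alone isn't clear enough, mdadm's detail output names the state of every slot; the missing one shows up as "removed" or "faulty". Sketch, using the array name from earlier in the thread:

    Code
    mdadm --detail /dev/md127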


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Really?
    That command only gives me the current RAID status and doesn't tell me which HDD is degraded.


    md127 : active raid5 sdc[0] sdd[1] sdb[3] sda[4]
    5860538880 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]
    [=>...................] recovery = 5.0% (99339904/1953512960) finish=2999.2min speed=10303K/sec

  • Well, that's because you're already rebuilding; the previously degraded disk is already back in the array...
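
    If you want to keep an eye on it, something like this will do; just a convenience sketch.

    Code
    # refresh the rebuild status every 10 seconds (Ctrl-C to stop)
    watch -n 10 cat /proc/mdstat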


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!
