Hard system disk crash! (really dead)

  • Hello,


    I'm opening this topic because I could not find the information by searching.
    My apologies in advance if the answer is already in another thread.


    Here is my problem:


    My system disk, the one that contains openmediavault, is a Compact Flash card mounted on an IDE adapter. The rest of my drives are a bunch of 7 x 2 TB disks in RAID 5.


    Now my system drive has given up on me. No more boot, no detection via live CD (Clonezilla, DEFT, etc.), and no detection either when plugging the card into a Compact Flash reader. (Yes, I think it's dead...)


    Of course, I have no backup of the system disk ...


    My question is this:


    Is it possible to recover my 7 x 2 TB array with a new installation on a new IDE drive?
    And if a reinstallation does not allow it, is it possible to recover the data some other way? (given that the data are spread across the disks)


    Thank you in advance for your answers.


    Version: 2.2.1 (Stone burner)



    PS: Please excuse my Google translation to English.

  • @Naabster yes, you will not lose the data on your array after a new installation.
    The flashmemory plugin extends the life of the boot medium quite considerably. You have to install AND configure it!


    1. Shut down the OMV machine.
    2. Disconnect the data HDDs.
    3. Power on the machine and install OMV.
    4. Install omv-extras.org.
    5. Install and configure the flashmemory plugin.
    6. Shut down the OMV machine.
    7. Reconnect the data HDDs.
    8. Power on the OMV machine and your data drives will show up again (a quick check is sketched below).
    9. Configure the OMV system.
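
    After step 8, a quick way to confirm from the command line that the data disks and the array are detected again is something like this (a minimal sketch, using tools that are present on a standard OMV/Debian install):

    Code
    cat /proc/mdstat   # md arrays the kernel has found or assembled
    blkid              # filesystem and RAID signatures on each device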

    2 BananaPi, 1 OrangePiPC+, 1 OrangePiPC with OMV 6.0.x


  • I may have said something stupid (I hope so)...


    In fact, all my disks are listed under Physical Disks:


    7 x 2 TB + 2 x 250 GB, as in the picture:


    When I go into RAID Management, I only see the volumes of the Debian installation (2 x 250 GB in RAID 1 --> 2 GB /boot + 45 GB swap + 200 GB for /).



    But when I want to create a new volume, it only sees a single 2 TB disk.



    Because my array actually comprised 5 disks in RAID 5 + 1 spare; the last disk was kept aside just in case... (not in the array)


    I conclude that my array is seen by the system but not by the web interface...


    Any suggestions for trying to assemble and mount it by hand?

    • Official post

    I need to see the output from the commands listed in this post.
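
    The post linked above is not quoted here, but the usual first-round diagnostics for a missing or degraded array look roughly like this (run as root):

    Code
    cat /proc/mdstat        # kernel view of the md arrays
    blkid                   # filesystem / RAID signatures per device
    fdisk -l                # disk and partition layout
    mdadm --detail --scan   # arrays that are currently assembled
    mdadm --examine --scan  # arrays recorded in the on-disk superblocks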

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • @omavoss, @ryecoaaron, a big thank you for following up on my problem!

    • The output of 'cat /proc/mdstat':


    • The output of 'blkid':


    • The output of 'fdisk -l':


    In case it can be useful, here is also the output of:

    • mdadm --detail --scan
    Code
    ARRAY /dev/md/0 metadata=1.2 name=Hightower:0 UUID=4e557c08:58909801:fb622a57:d87dc65d
    ARRAY /dev/md/1 metadata=1.2 name=Hightower:1 UUID=bb326408:2c33de89:c6f033c1:a1010c8f
    ARRAY /dev/md/2 metadata=1.2 name=Hightower:2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724


    • mdadm --examine --scan :
    Code
    ARRAY /dev/md/0 metadata=1.2 UUID=4e557c08:58909801:fb622a57:d87dc65d name=Hightower:0
    ARRAY /dev/md/1 metadata=1.2 UUID=bb326408:2c33de89:c6f033c1:a1010c8f name=Hightower:1
    ARRAY /dev/md/2 metadata=1.2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724 name=Hightower:2
    ARRAY /dev/md/warehouse metadata=1.2 UUID=cdf25234:31f75570:5a623191:15939f7a name=the-vault:warehouse
       spares=1


    My array is there, but I cannot mount it.
    Especially since the device "/dev/md/warehouse" does not exist, I do not understand where the command "mdadm --examine --scan" finds this result...


    Code
    ls -l /dev/md/
    total 0
    lrwxrwxrwx 1 root root 6 mars  16 16:43 0 -> ../md0
    lrwxrwxrwx 1 root root 6 mars  16 16:43 1 -> ../md1
    lrwxrwxrwx 1 root root 6 mars  16 16:43 2 -> ../md2
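
    For what it is worth, mdadm --examine does not look at /dev/md/ at all: it reads the RAID superblock stored on each member disk, which is why it can report the-vault:warehouse even though that array has never been assembled on this fresh installation. A single disk can be checked the same way (sdb here is just one of the members from the outputs above):

    Code
    # Read the on-disk superblock of one member; the "Name" field shows which array it belongs to
    mdadm --examine /dev/sdb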
    • Official post

    Try: mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdefghi] 
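
    For reference, /dev/sd[bdefghi] is just a shell glob that expands to the individual drive devices before mdadm sees it; you can check what it matches on your system with:

    Code
    # The shell expands the bracket pattern to the existing device nodes
    echo /dev/sd[bdefghi]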


  • @ryecoaaron


    The output with more tests:


    • Official post

    Since sde and sdh don't seem to have superblocks and I thought you were using a 7 drive array, try:
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]


  • Code
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdd is busy - skipping
    mdadm: /dev/sdf is busy - skipping
    mdadm: Cannot assemble mbr metadata on /dev/sdg
    mdadm: /dev/sdg has no superblock - assembly aborted


    But it gets better:


    Code
    cat /proc/mdstat 
    Personalities : [raid1] 
    md127 : inactive sdb[1](S) sdi[6](S) sdf[4](S) sdd[3](S)
          7813534048 blocks super 1.2
    
    md2 : active raid1 sde3[0] sdg3[1]
          195181440 blocks super 1.2 [2/2] [UU]
    
    md1 : active (auto-read-only) raid1 sde2[0] sdg2[1]
          46841728 blocks super 1.2 [2/2] [UU]
    • Official post

    I didn't realize it assembled in your previous output. So, you need to stop it before assembling again.


    mdadm --stop /dev/md127
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]


    But it looks like sdg is missing superblock data. I don't know what your original drives were in the raid. So, you can keep trying different combinations.
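
    If it helps narrow things down, each candidate disk can be checked individually; the "Name" line in the output shows which array (if any) the disk belongs to. A minimal sketch, using sdg as the example:

    Code
    # Inspect the on-disk RAID superblock of a single drive
    mdadm --examine /dev/sdg
    # Or list every array found in the superblocks together with its member devices
    mdadm --examine --scan -v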


  • Code
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[bdfgi]
    mdadm: looking for devices for /dev/md127
    mdadm: no RAID superblock on /dev/sdg
    mdadm: /dev/sdg has no superblock - assembly aborted


    With different combinations it is the same for /dev/sde & /dev/sdg:
    - no superblock
    or
    - resource is busy


    My last command is:



    But I'm too afraid to say YES...

    • Official post

    Using --create is very dangerous. I have seen some users have luck with it, but others end up with a nice clean empty array. I will never tell a user to use it; too risky for me. The --assume-clean option helps. The missing drive makes it even more dangerous. I can't recommend anything here.
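
    If anyone does go down that road as a last resort, it is worth at least capturing the current superblock information of every disk first, so the original member order can be reconstructed later. A minimal sketch (the device range and output file are only examples):

    Code
    # Dump the existing superblocks to a file before attempting anything destructive
    for d in /dev/sd[a-i]; do
        mdadm --examine "$d"
    done > /root/mdadm-examine-backup.txt 2>&1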


  • @ryecoaaron


    Indeed, from reading various forums, it seems to be a fifty-fifty chance...
    Although my wife and daughters have already come to terms with losing the data, I will only do that as a last resort.



    I will try to solve the superblock problem on disks /dev/sde and /dev/sdg to put them back in the array...

  • @ryecoaaron, @omavoss


    I'm sorry, I'm a big newbie...


    At no time did I think of simply using the --assemble option!!! Yet the solution was right there!!!

    • I re-ran the --examine option once more, but in verbose mode
    Code
    mdadm --examine --scan -v
    ARRAY /dev/md/0 level=raid1 metadata=1.2 num-devices=2 UUID=4e557c08:58909801:fb622a57:d87dc65d name=Hightower:0
       devices=/dev/sdg1,/dev/sde1
    ARRAY /dev/md/1 level=raid1 metadata=1.2 num-devices=2 UUID=bb326408:2c33de89:c6f033c1:a1010c8f name=Hightower:1
       devices=/dev/sdg2,/dev/sde2
    ARRAY /dev/md/2 level=raid1 metadata=1.2 num-devices=2 UUID=09d4b1a6:51ce7a4a:43736b37:acc34724 name=Hightower:2
       devices=/dev/sdg3,/dev/sde3
    ARRAY /dev/md/warehouse level=raid5 metadata=1.2 num-devices=5 UUID=cdf25234:31f75570:5a623191:15939f7a name=the-vault:warehouse
       spares=1   devices=/dev/sdf,/dev/sdi,/dev/sdc,/dev/sdb,/dev/sdd,/dev/sda


    This also lists the member disks of each array: the data array actually spans /dev/sda, sdb, sdc, sdd, sdf and sdi, not sde/sdg.

    • Re-check


    Code
    cat /proc/mdstat
    Personalities : [raid1]
    md127 : inactive sdb[1](S) sdi[6](S) sdf[4](S) sdd[3](S)
          7813534048 blocks super 1.2
    […]


    • Inactive, so... let's go
    Code
    mdadm -A /dev/md127
    mdadm: /dev/md127 not identified in config file.


    • Indeed, md127 does not exist in mdadm.conf, since --examine talks about ARRAY /dev/md/warehouse
    • Of course!!! mdadm.conf!!!
    Code
    mdadm -A /dev/md/warehouse
    mdadm: /dev/md/warehouse assembled from 2 drives - not enough to start the array.
    • 2 drives ???
    Code
    mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    
    
    cat /proc/mdstat
    Personalities : [raid1]
    md126 : inactive sda[0](S) sdc[5](S)
          3906767024 blocks super 1.2
    [...]


    • md126 ???
    Code
    mdadm --stop /dev/md126
    mdadm: stopped /dev/md126
    • Is it good? Can we go!!?
    Code
    mdadm -A /dev/md/warehouse
    mdadm: /dev/md/warehouse has been started with 5 drives and 1 spare.


    Whhhaaaaaaaaatttttttttt !!! No error messages ???!

    • Let's check
    Code
    cat /proc/mdstat
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sda[0] sdi[6](S) sdf[4] sdd[3] sdc[5] sdb[1]
          7813531648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
    [...]


    Moral of the story: the simplest solution is often the right one!!


    Hope this can help other people in the same situation.
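
    For anyone landing here later, the sequence that worked boils down to this (the device and array names are the ones from this thread; adjust them to your own system):

    Code
    mdadm --examine --scan -v     # find the array name and member disks from the on-disk superblocks
    mdadm --stop /dev/md127       # stop any stale, partially assembled arrays
    mdadm --stop /dev/md126
    mdadm -A /dev/md/warehouse    # assemble the array by the name recorded in the superblocks
    cat /proc/mdstat              # confirm the array is active


    To have the array assemble automatically at boot, the usual Debian approach (not shown in this thread, so treat it as a suggestion) is to add its ARRAY line to /etc/mdadm/mdadm.conf, for example via mdadm --detail --scan, and then run update-initramfs -u.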


    Many thanks again for your help and your time!!



    I'm going to do my backups :)
