Upgraded to Stoneburner, now RAID is missing

  • Hi Guys,


    I probably made a mistake by not unmounting the RAID array before upgrading to Stoneburner. I did a complete reinstall on my SSD system disk, but now OMV doesn't see the RAID array, although it does see the physical disks.
    I'm really in need of help here.


    Code
    root@OMVmaxi:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : inactive sdb[0] sdd[2] sdc[1]
          8790406536 blocks super 1.2


    Code
    root@OMVmaxi:~# blkid
    /dev/sdc: UUID="27f66ba7-0bf4-a9fd-48f8-2763f7ad8793" UUID_SUB="f7ee4689-fc08-baf4-2cd0-05c11114f104" LABEL="omv2:omv2" TYPE="linux_raid_member"
    /dev/sdf: UUID="27f66ba7-0bf4-a9fd-48f8-2763f7ad8793" UUID_SUB="4ce9e842-b7e4-783f-5dfa-ece257681110" LABEL="omv2:omv2" TYPE="linux_raid_member"
    /dev/sdb: UUID="27f66ba7-0bf4-a9fd-48f8-2763f7ad8793" UUID_SUB="0679e4eb-b5fb-832d-382f-d597465f55fc" LABEL="omv2:omv2" TYPE="linux_raid_member"
    /dev/sde: UUID="27f66ba7-0bf4-a9fd-48f8-2763f7ad8793" UUID_SUB="1256d9c1-a8fa-fdf5-335c-bde3b63400a8" LABEL="omv2:omv2" TYPE="linux_raid_member"
    /dev/sdd: UUID="27f66ba7-0bf4-a9fd-48f8-2763f7ad8793" UUID_SUB="05919054-c5eb-d548-8002-a1b658d1c523" LABEL="omv2:omv2" TYPE="linux_raid_member"
    /dev/sda1: UUID="f63e4bf3-5f8e-42e8-8884-a4fcec06a9aa" TYPE="ext4"
    /dev/sda5: UUID="b7fdbf58-7fcf-4bbd-b9bf-1f3f8569b204" TYPE="swap"




  • @votdev
    Thanks for your fast reply, but I'm sorry to say I'm really inexperienced with Linux. I just took the previous commands from the forum on how to check the RAID status. I don't want to lose the data on the 5 disks. If someone can guide me, I will be very glad.

    • Official Post

    Code
    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcdef]
    mdadm --examine --scan >> /etc/mdadm.conf
    update-initramfs -u
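
    For reference, here is a quick way to verify the result once those commands have run (not part of the original instructions; the md127 name and the /dev/sd[b-f] device names are assumed from the outputs posted above):

    Code
    # Confirm whether the array assembled and is active/resyncing
    cat /proc/mdstat
    # Show detailed state, member devices and the array UUID
    mdadm --detail /dev/md127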

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!


  • @ryecoaaron


    Thanks, but here it is




    • Official Post

    What is the output of: cat /proc/mdstat


  • Code
    root@OMVmaxi:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdb[0] sdf[4](F) sdd[2] sdc[1]
          11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/3] [UUU__]
    
    
    unused devices: <none>
    • Official Post

    You have one failed drive and another drive is missing. Check your connectors. You can't start a RAID 5 array with two missing disks.
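
    One way to narrow this down is to examine each member disk individually; a drive that has lost its array metadata will typically report no md superblock. A sketch, with the device names taken from the blkid output earlier in the thread:

    Code
    # Print the md superblock of each supposed member; a drive without
    # array info reports "No md superblock detected"
    for d in /dev/sd[bcdef]; do echo "== $d =="; mdadm --examine "$d"; done
    # Quick SMART health summary for one drive (repeat per drive)
    smartctl -H /dev/sdb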


    • Official Post

    Just because SMART says a drive is OK does not mean it is OK. I agree that it probably does mean the cable is fine, though. All I can say is that I help with so many RAID issues that I get them mixed up. I am really tired of all these arrays not starting.


    What I do know is that mdadm is saying one drive failed and another drive doesn't have array info. With RAID 5, that is very bad. If you zero one drive to re-add it to the array, you might lose all the data. You need to be able to start the array with four drives, and have it working, before you zero one drive. Because my other command didn't work, there aren't many options left.


    I would run long SMART tests on each drive.
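
    For what it's worth, starting the long self-tests and reading the results back later looks roughly like this with smartmontools (run once per drive; the test runs in the background and can take several hours):

    Code
    # Kick off an extended (long) self-test on one drive
    smartctl -t long /dev/sdb
    # Later: check the self-test log and the overall health verdict
    smartctl -l selftest /dev/sdb
    smartctl -H /dev/sdb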


  • Sorry for the late reply.
    I was out of the country for 14 days and didn't have time to deal with the server.
    I ran the long self-tests on all five disks and all of them completed without error. I have a dumb question, although I'm nearly sure I haven't done it: could this be caused by swapped SATA cables?
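
    (A general note, not something from this thread: if you want to double-check which physical disk sits behind each device name, the drive serial numbers are independent of which SATA port a drive is plugged into.)

    Code
    # Map kernel device names to drive model/serial via persistent udev links
    ls -l /dev/disk/by-id/ | grep -v part
    # Or read the serial number directly from a single drive
    smartctl -i /dev/sdb | grep -i serial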
