RAID6 not showing up in the GUI anymore, rebuild tutorials fail

  • Posting the required outputs first:


    Code
    root@helios64:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : inactive sdb[5] sdd[3]
          21486144856 blocks super 1.2
           
    unused devices: <none>


    Code
    root@helios64:~# blkid
    /dev/mmcblk0p1: UUID="a7a41236-bd7e-4b26-a31d-e30f47633de7" TYPE="ext4" PARTUUID="436f9809-01"
    /dev/sdb: UUID="67baeec5-36b4-e5e1-d749-5890cc380e14" UUID_SUB="1248bea6-bcef-ee81-d3b8-515c78ddd198" LABEL="helios64:almazen" TYPE="linux_raid_member"
    /dev/sdd: UUID="67baeec5-36b4-e5e1-d749-5890cc380e14" UUID_SUB="81e6bcbf-ae4d-3161-9017-77a947d48ea4" LABEL="helios64:almazen" TYPE="linux_raid_member"
    /dev/mmcblk0: PTUUID="436f9809" PTTYPE="dos"
    /dev/sda: PTUUID="76dfa8c5-4b8e-4e76-a105-0be6129a4bfe" PTTYPE="gpt"
    /dev/sdc1: PARTUUID="aa9c8eeb-1e28-4a5f-9045-91ee1ea7ef43"
    /dev/sde: PTUUID="88a18f82-e542-4c6d-a9dc-96a265a2563f" PTTYPE="gpt"


    Code
    root@helios64:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 level=raid6 num-devices=5 metadata=1.2 name=helios64:almazen UUID=67baeec5:36b4e5e1:d7495890:cc380e14
       devices=/dev/sdb,/dev/sdd


    I had 3x8TB + 2x14TB; my goal is to upgrade all of them to 14TB and then expand the filesystem size (rough sketch at the end of this post). Before that, I wanted to switch 2 of the 8TB drives for 14TB drives and just rebuild the existing array.


    Not gonna lie, I messed up the rebuild. I turned off the machine, switched two of the drives, and only then realized I should have removed the two disks from the RAID array before physically pulling them. I swapped them back, did the removal, switched the drives again and rebooted. The RAID array no longer showed up.


    The two disks I swapped are /dev/sda and /dev/sde.


    My impression from the output is that /dev/sdc isn't recognized as a RAID member any more even though I did not swap it or do anything with it.


    Is the array salvageable? The data is backed up, so I can deal with losing it. Also, I would probably switch to a different kind of RAID so I could increase its size now, and not only once I've swapped the remaining 8TB for a 14TB.
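
    For the expansion mentioned above, once every member has been replaced with a 14TB drive, the usual path looks roughly like the following. This is a minimal sketch assuming the array is /dev/md127 and the filesystem on it is ext4 (the filesystem type isn't visible in the outputs above while the array is inactive):

    Code
    # let md use the full size of the (now larger) members
    mdadm --grow /dev/md127 --size=max
    # then grow the filesystem on top of the array (ext4 assumed)
    resize2fs /dev/md127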

    • Official post

    Is the array salvageable?

    I don't see how; mdadm is only seeing two of the potential 5 drives, and /dev/sde is being shown as /dev/sdc1, which suggests a partition on that drive. You could try mdadm --examine /dev/sde and see what that throws back.

    But in all honesty, I think you've lost it.

  • Code
    root@helios64:~# mdadm --examine /dev/sde
    /dev/sde:
       MBR Magic : aa55
    Partition[0] :   4294967295 sectors at            1 (type ee)
    root@helios64:~# mdadm --examine /dev/sdc
    /dev/sdc:
       MBR Magic : aa55
    Partition[0] :   4294967295 sectors at            1 (type ee)


    Yeah, I think what I need to do is go back. I still have the two drives that I took out, and they were OK. I think I need to put one of them back and rebuild the RAID (rough sketch below).


    It's strange, however, that only 2 disks show up; I really don't know what I did to cause that.
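
    A rough sketch of what that attempt could look like, assuming the reinserted drive shows up as /dev/sdX (placeholder) and the array is still /dev/md127:

    Code
    # check that the old drive still carries the array metadata
    mdadm --examine /dev/sdX
    # stop the partially assembled array, then try to assemble it again from the known members
    mdadm --stop /dev/md127
    mdadm --assemble --force /dev/md127 /dev/sdb /dev/sdd /dev/sdX
    # or, if the array assembles but stays inactive, start it degraded
    mdadm --run /dev/md127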

    • Official post

    It's strange, however, that only 2 disks show up; I really don't know what I did to cause that

    The only thing I can think of is that you physically removed the wrong drives, but then again, if that were the case, one would expect to see three of the original drives.

    I think I need to put one of them back and rebuild the RAID.

    That would be a start. If the array does rebuild with 3 out of the 5 drives, then add the new drives one at a time; obviously, wipe them first before adding them.
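
    A minimal sketch of that per-drive step, assuming each new drive shows up as /dev/sdX (placeholder) once inserted:

    Code
    # remove any old partition table / filesystem signatures from the new drive (destructive!)
    wipefs -a /dev/sdX
    # add it to the array; mdadm starts rebuilding onto it automatically
    mdadm --manage /dev/md127 --add /dev/sdX
    # wait for the resync to finish before adding the next drive
    cat /proc/mdstat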

  • Inserted one of the old ones back in and was able to mdadm --run /dev/md127. It then showed up in the GUI as degraded and is now rebuilding.


    Quote

    then add the new drives one at a time


    Curious and eager to learn: Why one at a time? Lower risk of things going south like they did now for me? Won't the stress on the other drives be a lot if they have to go through 2 rebuilds instead of 1?


    Aside/off-topic: I do feel mighty stupid for how I executed this switch. It wasn't even the first disk switch I've ever done and yet I totally messed it up. At least, the data was already backed up and I didn't panic. I'm not a total noob, I'm more of a seasoned noob.

    • Official post

    1) Why one at a time?

    2) Lower risk of things going south like they did now for me?

    3) Won't the stress on the other drives be a lot if they have to go through 2 rebuilds instead of 1?

    1) This is the approach I would take, which leads to point 2.

    2) IMHO yes, but that also leads to point 3.

    3) Any amount of rebuild will lead to stress within the array.


    But I will say that using RAID6 with more than 4 drives is the right way to go; not sure if I would do it with 14TB drives though, due to rebuild time :)


    But 2 :) I would have approached this differently; leaving in 1 x 8TB reduces the array's potential size, if I have done this correctly ->


    1x8TB + 4x14TB in a RAID6 will give you a 24TB capacity array

    4x14TB in a RAID6 will give you a 28TB capacity array, with the potential to add further 14TB drives at a later date. However (there's always a however or a but :) ), this would mean a complete rebuild, including moving to OMV6 -> this would be my choice, provided I had a backup.
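
    The arithmetic behind those numbers, for anyone following along: RAID6 usable capacity is (number of members - 2) x the size of the smallest member. A quick shell check:

    Code
    # 5 members, smallest 8TB:  (5 - 2) * 8  = 24 TB usable
    echo $(( (5 - 2) * 8 ))
    # 4 members, all 14TB:      (4 - 2) * 14 = 28 TB usable
    echo $(( (4 - 2) * 14 ))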

    Aside/off-topic: I do feel mighty stupid for how I executed this switch. It wasn't even the first disk switch I've ever done and yet I totally messed it up. At least, the data was already backed up and I didn't panic. I'm not a total noob, I'm more of a seasoned noob

    What I do is put white sharpie numbers on all my drives. I then have an Excel sheet with each drive number, the slot it occupies, the current OMV drive reference, serial no. and model no. Should a drive need to be replaced, I can locate it and then update the sheet.
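
    A quick way to collect most of that information for the sheet (standard util-linux / smartmontools tools, nothing OMV-specific assumed):

    Code
    # list each whole disk with model, serial number and size
    lsblk -d -o NAME,MODEL,SERIAL,SIZE
    # or, per drive, the full identity details
    smartctl -i /dev/sda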
