Posts by Smogshaik

    I inserted one of the old drives back in and was able to run mdadm --run /dev/md127. It then showed up in the GUI as degraded and is now rebuilding.
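
    For anyone who lands here later: after the --run, I'm just keeping an eye on the rebuild roughly like this (the device name is from my setup):

    Code
    # show array state and rebuild status
    mdadm --detail /dev/md127
    # live view of the resync/recovery progress
    watch cat /proc/mdstat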


    Quote

    then add the new drives one at a time


    Curious and eager to learn: why one at a time? Is it to lower the risk of things going south like they just did for me? Won't it put a lot of stress on the other drives if they have to go through two rebuilds instead of one?
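
    Just to check that I understand the suggestion: I assume "one at a time" means roughly the following, waiting for each rebuild to finish before touching the next drive (sdX and sdY are placeholders, not my actual devices):

    Code
    # add the first new drive and wait for the rebuild to complete
    mdadm /dev/md127 --add /dev/sdX
    cat /proc/mdstat        # repeat until the recovery reaches 100%
    # only then add the second new drive and let it rebuild again
    mdadm /dev/md127 --add /dev/sdY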


    Aside/off-topic: I do feel mighty stupid for how I executed this switch. It wasn't even the first disk switch I've ever done, and yet I totally messed it up. At least the data was already backed up and I didn't panic. I'm not a total noob; I'm more of a seasoned noob.

    Code
    root@helios64:~# mdadm --examine /dev/sde
    /dev/sde:
       MBR Magic : aa55
    Partition[0] :   4294967295 sectors at            1 (type ee)
    root@helios64:~# mdadm --examine /dev/sdc
    /dev/sdc:
       MBR Magic : aa55
    Partition[0] :   4294967295 sectors at            1 (type ee)


    Yeah, I think what I need to do is go back. I still have the two drives I took out, and they were fine. I think I need to put one of them back in and rebuild the RAID.
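
    If I go that route, my rough plan would be the following (untested, so corrections welcome):

    Code
    # stop the inactive array, physically put the old drive back, then try to assemble again
    mdadm --stop /dev/md127
    mdadm --assemble --scan
    cat /proc/mdstat        # hoping it comes back up, even if degraded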


    It's strange, however, that only two disks show up; I really don't know what I did to cause that.

    Posting the required outputs first:


    Code
    root@helios64:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : inactive sdb[5] sdd[3]
          21486144856 blocks super 1.2
           
    unused devices: <none>


    Code
    root@helios64:~# blkid
    /dev/mmcblk0p1: UUID="a7a41236-bd7e-4b26-a31d-e30f47633de7" TYPE="ext4" PARTUUID="436f9809-01"
    /dev/sdb: UUID="67baeec5-36b4-e5e1-d749-5890cc380e14" UUID_SUB="1248bea6-bcef-ee81-d3b8-515c78ddd198" LABEL="helios64:almazen" TYPE="linux_raid_member"
    /dev/sdd: UUID="67baeec5-36b4-e5e1-d749-5890cc380e14" UUID_SUB="81e6bcbf-ae4d-3161-9017-77a947d48ea4" LABEL="helios64:almazen" TYPE="linux_raid_member"
    /dev/mmcblk0: PTUUID="436f9809" PTTYPE="dos"
    /dev/sda: PTUUID="76dfa8c5-4b8e-4e76-a105-0be6129a4bfe" PTTYPE="gpt"
    /dev/sdc1: PARTUUID="aa9c8eeb-1e28-4a5f-9045-91ee1ea7ef43"
    /dev/sde: PTUUID="88a18f82-e542-4c6d-a9dc-96a265a2563f" PTTYPE="gpt"


    Code
    root@helios64:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 level=raid6 num-devices=5 metadata=1.2 name=helios64:almazen UUID=67baeec5:36b4e5e1:d7495890:cc380e14
       devices=/dev/sdb,/dev/sdd


    I had 3x 8TB + 2x 14TB; my goal is to upgrade all of them to 14TB and then expand the file system. Before that, I wanted to swap two of the 8TB drives for 14TB ones and simply rebuild the existing array.
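
    For the final expansion step, my understanding is that once every member is a 14TB drive it would be something like this; I'm assuming ext4 sits directly on the md device, which I'd verify against my actual layout first:

    Code
    # let the array grow into the full size of the new members
    mdadm --grow /dev/md127 --size=max
    # then grow the filesystem on top (assuming ext4 directly on md127)
    resize2fs /dev/md127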


    Not gonna lie, I messed up the rebuild. I turned off the machine, switched two of the drives, and only then realized I should have removed the two disks from the RAID array before physically pulling them. So I swapped them back in, did the removal, switched the drives again and rebooted. The RAID array no longer showed up.
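
    For the record, the removal I should have done before pulling the disks (and what I believe I ran on the second attempt) is along these lines; the device names here are just examples:

    Code
    # mark each member as failed and remove it from the array before pulling it physically
    mdadm /dev/md127 --fail /dev/sdX --remove /dev/sdX
    mdadm /dev/md127 --fail /dev/sdY --remove /dev/sdY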


    The two disks I swapped are /dev/sda and /dev/sde.


    My impression from the output is that /dev/sdc isn't recognized as a RAID member any more even though I did not swap it or do anything with it.


    Is the array salvageable? The data is backed up, so I can deal with losing it. Also, I would probably switch to a different kind of RAID so that I could increase its size right away rather than only once I've swapped the remaining 8TB drives for 14TB ones.
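
    Unless someone advises otherwise, the first thing I plan to check is which disks still carry a RAID superblock, roughly like this:

    Code
    # check every disk for an md superblock before deciding what to rebuild or force
    for d in /dev/sd[abcde]; do mdadm --examine "$d"; done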