RAID Array Disappeared After Reboot [Solved]

  • Hi All,


    So I was having some issues updating Plex, so I decided to reboot my system, but after the reboot my RAID array disappeared.


    System details are:

    OMV Version: 4.3.35-1 (Arrakis)

    Kernel Version: Linux 4.19.0-0.bpo.8-amd64


    Also my disk layout is as follows:

    sda: Storage HDD 1

    sdb: Storage HDD 2

    sdc: Storage HDD 3

    sdd: Storage HDD 4

    sde: Boot SSD


    I've also worked through a few other threads and tried a few commands; hopefully the output below is of some help.


    cat /etc/mdadm/mdadm.conf




    fdisk -l | grep "Disk " | grep sd | sort

    Code
    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sde: 111.8 GiB, 120040980480 bytes, 234455040 sectors



    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]

    Code
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sda is busy - skipping
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is busy - skipping



    cat /proc/mdstat

    Code
    Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sda[0] sdd[3] sdc[2] sdb[1]
          10743789056 blocks super 1.2
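
    For anyone hitting the same symptom: "inactive" here means the kernel claimed the member disks at boot but never managed to start the array, which is also why the assemble above reported every disk as busy. A minimal sketch of releasing the members and retrying (run as root; array and device names taken from this thread):

    ```shell
    # Release the half-assembled, inactive array so its member disks are
    # no longer "busy", then retry the assemble with the same members.
    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]
    ```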


    blkid

    Code
    /dev/sda: UUID="a1c01a92-146a-9649-e182-8fba2a7e3bd0" UUID_SUB="c5ee2f55-4f93-8b6b-e330-f54589e9a3d8" LABEL="cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdb: UUID="a1c01a92-146a-9649-e182-8fba2a7e3bd0" UUID_SUB="c7de846a-b549-e1dd-bb97-14d6bf392102" LABEL="cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdc: UUID="a1c01a92-146a-9649-e182-8fba2a7e3bd0" UUID_SUB="c87cd183-dabe-92d4-6db3-d65d117fe444" LABEL="cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdd: UUID="a1c01a92-146a-9649-e182-8fba2a7e3bd0" UUID_SUB="5186cf9d-59ff-ec57-254b-f085e33e529d" LABEL="cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sde1: UUID="5AB9-B88F" TYPE="vfat" PARTUUID="f6477c78-7c5e-4f1a-9775-55554ec754fa"
    /dev/sde2: UUID="b0affabe-1dae-479d-bd18-10500434d0ad" TYPE="ext4" PARTUUID="9300084e-f3ae-4fee-b931-2cfd1fb5b6dd"
    /dev/sde3: UUID="a95d884f-b667-4f4e-b003-8a461b653c82" TYPE="swap" PARTUUID="1a637345-b2e8-491f-b181-35633d447317"




    fdisk -l | grep "Disk "

    Code
    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sde: 111.8 GiB, 120040980480 bytes, 234455040 sectors
    Disk identifier: B1773E09-BD8B-471C-A22D-AA686CA7A7A7



    cat /etc/mdadm/mdadm.conf



    mdadm --detail --scan --verbose

    Code
    ARRAY /dev/md0 level=raid0 num-devices=4 metadata=1.2 name=cb-server:CorePool UUID=a1c01a92:146a9649:e1828fba:2a7e3bd0
       devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd
  • I managed to stop it OK, but the second command gave an error 524 :(


    Code
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
    mdadm: /dev/sdb is identified as a member of /dev/md0, slot 1.
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 2.
    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 3.
    mdadm: added /dev/sdb to /dev/md0 as 1
    mdadm: added /dev/sdc to /dev/md0 as 2
    mdadm: added /dev/sdd to /dev/md0 as 3
    mdadm: added /dev/sda to /dev/md0 as 0
    mdadm: failed to RUN_ARRAY /dev/md0: Unknown error 524
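    A note on that error for future readers: 524 is ENOTSUPP. Kernels with the multi-zone RAID0 layout fix backported (which includes this 4.19 backport kernel) refuse to start a RAID0 built from different-sized disks, as here (4 TB + 3 TB + 2 x 2 TB), until a layout is chosen explicitly. The kernel log usually states the reason; a sketch of checking it (the exact message wording may differ between kernel versions):

    ```shell
    # Check the kernel log for the raid0 layout complaint behind error 524.
    dmesg | grep -i raid0
    # Typically prints something like:
    #   md/raid0:md0: cannot assemble multi-zone RAID0 with default_layout setting
    #   md/raid0: please set raid0.default_layout to 1 or 2
    ```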
  • OK, so I was reading through another forum and found someone else who had tried this command, and it seems to have worked and fixed my problem :D


    echo 2 > /sys/module/raid0/parameters/default_layout


    Then I had to stop the array again because the disks were busy,


    so I ran:


    mdadm --stop /dev/md0


    and finally the RAID command again:


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]


    and it's back!



    Thanks so much, geaves, for the stop command; looks like that's what fixed it!
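
    One caveat worth adding for anyone following along: the `echo 2 > /sys/module/raid0/parameters/default_layout` write is not persistent, so the array would fail to start again on the next reboot. A sketch of making the setting permanent on a Debian-based system like OMV (assuming raid0 is loaded as a module; the file name is my choice):

    ```shell
    # Persist the RAID0 layout choice across reboots (module case):
    echo "options raid0 default_layout=2" > /etc/modprobe.d/raid0.conf
    update-initramfs -u

    # If raid0 is built into the kernel instead, add raid0.default_layout=2
    # to GRUB_CMDLINE_LINUX in /etc/default/grub, then run:
    # update-grub
    ```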

  • SeaBee

    Added the label Solved.
  • SeaBee

    Changed the thread title from "RAID Array Disappeared After Reboot" to "RAID Array Disappeared After Reboot [Solved]".
    • Official post

    Thanks so much geaves for the stop command, looks like that's what fixed it!

    Well, that's good; I did find that doing a search. But you do realise there is one caveat to a RAID 0: if one drive fails, you lose the lot. All the data is toast with no way to recover it; there are much better options.

  • Yeah, I know RAID 0 is horrible if one disk goes down, but I need it set up that way due to the large files I'm dealing with. Plus I have an external backup, so it's not a big fuss if I lose it all, just a pain to restore.
