Linear RAID array is no longer working

  • Hello,


    I had a linear array with two drives in it... I'm not exactly sure what I did to break it, but it no longer mounts at all. I'm looking for a way to either repair it or recover the data. Here's what it looks like now:



    Disk 1:


    root@openmediavault:~# mdadm --examine /dev/vda
    /dev/vda:
    Magic : a92b4efc
    Version : 1.2
    Feature Map : 0x0
    Array UUID : 89b4bdb7:47e966c7:bdf65f7c:0daab043
    Name : openmediavault:JBOD (local to host openmediavault)
    Creation Time : Sun Feb 16 12:30:43 2020
    Raid Level : linear
    Raid Devices : 2


    Avail Dev Size : 15627788976 (7451.91 GiB 8001.43 GB)
    Used Dev Size : 0
    Data Offset : 264192 sectors
    Super Offset : 8 sectors
    Unused Space : before=264112 sectors, after=0 sectors
    State : clean
    Device UUID : c3018e8b:0475c41a:b3c05b7a:7fff9ebd


    Update Time : Sun Feb 16 12:30:43 2020
    Bad Block Log : 512 entries available at offset 8 sectors
    Checksum : be74f84d - correct
    Events : 0


    Rounding : 0K


    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)



    Disk 2:


    root@openmediavault:~# mdadm --examine /dev/vdb
    mdadm: No md superblock detected on /dev/vdb.




    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive vda[0](S)
    7813894488 blocks super 1.2


    unused devices: <none>



    root@openmediavault:~# blkid
    /dev/sr0: UUID="2019-08-16-07-08-11-00" LABEL="openmediavault 20190816-09:05" TYPE="iso9660" PTUUID="55882e75" PTTYPE="dos"
    /dev/vda: UUID="89b4bdb7-47e9-66c7-bdf6-5f7c0daab043" UUID_SUB="c3018e8b-0475-c41a-b3c0-5b7a7fff9ebd" LABEL="openmediavault:JBOD" TYPE="linux_raid_member"
    /dev/sda1: UUID="3f8fadd9-9fae-424e-97bd-c0fbe0eca50a" TYPE="ext4" PARTUUID="cc273775-01"
    /dev/sda5: UUID="27c2afa3-023e-4adc-823f-9f6fda5b16f3" TYPE="swap" PARTUUID="cc273775-05"



    root@openmediavault:~# fdisk -l | grep "Disk "
    Disk /dev/vda: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
    Disk /dev/vdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
    Disk /dev/sda: 16 GiB, 17179869184 bytes, 33554432 sectors
    Disk model: QEMU HARDDISK
    Disk identifier: 0xcc273775



    root@openmediavault:~# cat /etc/mdadm/mdadm.conf
    cat: /etc/mdadm/mdadm.conf: No such file or directory



    root@openmediavault:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md0 num-devices=1 metadata=1.2 name=openmediavault:JBOD UUID=89b4bdb7:47e966c7:bdf65f7c:0daab043
    devices=/dev/vda



    Anything I can do to fix this?


    Thanks.

  • I was shuffling hard drives and had just reformatted my backup when this happened. Currently deep scanning it to see what I can recover, but obviously the file structure is gone on that one.

    • Official Post

    TBH I've never used this option, and it's something I would steer clear of without a backup.


    A linear RAID is a grouping of drives that creates one single large virtual drive. Linear RAID provides no redundancy and actually decreases reliability: if one member drive fails, the whole array cannot be used.


    If this were a mirror, then even with that output from disk 2 it would be recoverable, because the data is duplicated on each member; in a linear array the data is laid out sequentially across the drives.


    You could try this, as the RAID /dev/md0 is showing as inactive:


    mdadm --stop /dev/md0


    mdadm --assemble --force --verbose /dev/md0 /dev/vda


    That might bring md0 back as clean/degraded, and it might then be possible to mount it, but I've never dealt with this situation before, just normal RAID errors.
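
    If the array does come back, a rough next step (assuming an ext4 filesystem sits directly on /dev/md0, and /srv/recovery is just an empty directory you create as a mount point) would be something like:

    cat /proc/mdstat                    # confirm md0 is active again
    mdadm --detail /dev/md0             # check the array state and member list
    mkdir -p /srv/recovery
    mount -o ro /dev/md0 /srv/recovery  # read-only, so nothing gets written to the array

    Mounting read-only keeps the filesystem untouched while you copy the data off.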


    As for the second drive, there's a missing superblock. Is that drive recoverable? I don't know. In a RAID 1 or RAID 5/6 you could zero the superblock and then add the drive back, but in this scenario I don't think that's possible.
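
    For comparison only, in a redundant level like RAID 1 that recovery would look roughly like this (/dev/sdX is a placeholder for the failed member; do not run this against a linear array):

    mdadm --zero-superblock /dev/sdX        # wipe the stale metadata on the old member
    mdadm --manage /dev/md0 --add /dev/sdX  # re-add it and let the mirror resync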


    That's why I asked: do you have a backup?

  • Yeah, my thought was to use the SnapRAID plugin to create a parity disk for the disks in the linear array, but I realized later that I'd need to use mergerfs for that anyway.
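
    If I end up going that route after recovering the data, my rough plan is a snapraid.conf along these lines (paths are just placeholders for wherever the individual disks get mounted), with mergerfs pooling the data disks into a single mount on top:

    parity /srv/parity/snapraid.parity
    content /var/snapraid/snapraid.content
    content /srv/data1/snapraid.content
    data d1 /srv/data1/
    data d2 /srv/data2/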


    For what it's worth, there was only ever maybe 4 TB of data in the array to begin with, and since a linear array maps the first drive's space before the second's, I assume the second drive never actually had anything on it.



    root@openmediavault:~# mdadm --stop /dev/md0
    mdadm: stopped /dev/md0


    root@openmediavault:~# mdadm --assemble --force --verbose /dev/md0 /dev/vda
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/vda is identified as a member of /dev/md0, slot 0.
    mdadm: no uptodate device for slot 1 of /dev/md0
    mdadm: added /dev/vda to /dev/md0 as 0
    mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.


    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive vda[0](S)
    7813894488 blocks super 1.2


    unused devices: <none>

    • Official Post

    mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.

    OK, the array will not start, and if you can't start it then there is nothing more to do.


    md127 : inactive vda[0](S)

    The RAID device name can change, usually after a reboot or a change of hardware. You could try the same again, replacing md0 with md127, but I think you are on a losing wicket.
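
    In concrete terms that would just be the same two commands against the new name, something like:

    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md127 /dev/vda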

  • Yeah, that's what I was afraid of. I'll leave it alone for a couple of days in case anyone has any ideas, but I'm going to pull what I can off a deep scan for now.


    Thanks for looking.

  • Well, I decided to just recreate the array and hope for the best.


    mdadm --create --assume-clean --level=linear --raid-devices=2 /dev/md0 /dev/vda /dev/vdb


    I put this in and rebooted the VM, and amazingly, my array was restored. All data appears to be intact. Hopefully it lasts long enough for me to copy it to another disk. :)


    To anyone from the future, I would only recommend this as a last resort... YMMV.
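
    It presumably only worked because the drives were listed in their original order (vda was device 0) and mdadm's defaults reproduced the same data offset. If you do try it, it's worth verifying the result read-only before writing anything; in my case the filesystem sits directly on /dev/md0, and /srv/jbod below is just a placeholder mount point (use the fsck that matches your filesystem):

    mdadm --detail /dev/md0           # confirm both members are present and the level is linear
    fsck.ext4 -n /dev/md0             # read-only check, makes no changes
    mount -o ro /dev/md0 /srv/jbod    # mount read-only while copying the data elsewhere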

    • Official Post

    Well, I decided to just recreate the array and hope for the best.

    Well done. I know that can be done, but it usually creates more issues than it resolves. If you've only got around 4 TB of data, though, perhaps a different setup would be worth considering.
