Lost all drives

  • Disaster! I have a Helios-4 that's been running OpenMediaVault just fine for months, with four 8 TB drives in the raid. I've had some (house) power problems and noticed that two of the four drives were missing, not listed at all. I thought perhaps the connectors had gotten jostled, so I re-seated everything. MUCH to my surprise, when I powered back up all the physical drives were there, but the raid was gone. Do I have any hope of getting it back without losing all the files on it?


    Here's the info:


    root@helios4:~# cat /proc/mdstat
    Personalities : [raid10]
    unused devices: <none>
    root@helios4:~# blkid
    /dev/mmcblk0p1: UUID="1f489a8c-b3a3-4218-b92b-9f1999841c52" TYPE="ext4" PARTUUID="7fb57f23-01"
    /dev/sda: UUID="d1e18bf2-0b0e-760b-84be-c773f4dbf945" UUID_SUB="9495186e-6df6-a7b1-c67b-4fd4ca1d6468" LABEL="helios4:Store" TYPE="linux_raid_member"
    /dev/sdb: UUID="d1e18bf2-0b0e-760b-84be-c773f4dbf945" UUID_SUB="253f9091-6914-fe71-ab40-68961aa3dbb6" LABEL="helios4:Store" TYPE="linux_raid_member"
    /dev/sdc: UUID="d1e18bf2-0b0e-760b-84be-c773f4dbf945" UUID_SUB="3186ee11-0837-b283-c653-37e39d1923d8" LABEL="helios4:Store" TYPE="linux_raid_member"
    /dev/sdd: UUID="d1e18bf2-0b0e-760b-84be-c773f4dbf945" UUID_SUB="0da721df-e67c-8141-cc93-afe7e2e66f7a" LABEL="helios4:Store" TYPE="linux_raid_member"
    /dev/zram0: UUID="93800f56-0eed-43cd-8c66-7159b0badb38" TYPE="swap"
    /dev/zram1: UUID="9fe054d5-829c-4c59-be30-1950f8e3738d" TYPE="swap"
    /dev/mmcblk0: PTUUID="7fb57f23" PTTYPE="dos"
    /dev/mmcblk0p2: PARTUUID="7fb57f23-02"
    root@helios4:~# fdisk -l | grep "Disk "
    Disk /dev/mmcblk0: 29.8 GiB, 32010928128 bytes, 62521344 sectors
    Disk identifier: 0x7fb57f23
    Disk /dev/sda: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
    Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
    Disk /dev/sdc: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
    Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
    Disk /dev/zram0: 504.6 MiB, 529104896 bytes, 129176 sectors
    Disk /dev/zram1: 504.6 MiB, 529104896 bytes, 129176 sectors
    root@helios4:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 spares=1 name=helios4:Store UUID=d1e18bf2:0b0e760b:84bec773:f4dbf945
    root@helios4:~# mdadm --detail --scan --verbose
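

    Side note: mdadm --detail --scan only reports arrays that are currently running, which is why it printed nothing here. A sketch of a check that reads the on-disk member superblocks instead, assuming those survived the power problems:


    # --detail --scan lists *running* arrays; --examine --scan reads the
    # superblocks on the member disks, so it should still print the ARRAY
    # definition even though nothing is assembled.
    mdadm --examine --scan --verbose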

  • Calling @geaves !

    You rang M'lud


    Do I have any hope of getting it back without losing all the files on it?

    Depends on which two drives 'disappeared'. Raid 10 is two mirrored pairs striped together: you can lose a drive from each mirror and it's recoverable, but lose two drives from one of the mirrors and the whole lot is toast. (A quick way to check which set each drive sits in is sketched at the end of this post.)


    What's the output of mdadm --detail /dev/md0 and cat /etc/fstab?


    EDIT: The fact that cat /proc/mdstat shows no output would suggest that one of the mirrors has failed. Since re-seating the connections and switching the unit back on, the physical drives are displayed; have you rebooted since?
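

    As mentioned above, a quick sketch for checking which set each drive sits in and whether the superblocks still agree, assuming they are readable at all:


    # Print each member's slot (Device Role), event count, and the array
    # state as that drive last recorded it; mismatched event counts show
    # which drives dropped out first.
    for d in /dev/sd[abcd]; do
        echo "== $d =="
        mdadm --examine "$d" | grep -E 'Device Role|Events|Array State'
    done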

  • When the system still had two drives I was lucky and they were a pair, one from each mirror: operational, but degraded.


    mdadm --detail /dev/md0
    mdadm: cannot open /dev/md0: No such file or directory



    root@helios4:~# cat /etc/fstab
    UUID=1f489a8c-b3a3-4218-b92b-9f1999841c52 / ext4 defaults,noatime,nodiratime,commit=600,errors=remount-ro 0 1
    # >>> [openmediavault]
    /dev/disk/by-label/Store /srv/dev-disk-by-label-Store ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    /dev/disk/by-label/MTVO /srv/dev-disk-by-label-MTVO ntfs defaults,nofail 0 2
    /dev/disk/by-label/Seagate8 /srv/dev-disk-by-label-Seagate8 ntfs defaults,nofail 0 2
    # <<< [openmediavault]
    tmpfs /tmp tmpfs defaults 0 0



    Note - I haven't "monkeyed" with anything since the reboot after which the raid disappeared.


    To others that replied here - I do realize raid is not a backup, and *most* of the files were backed up on a completely separate drive (MTVO). However, restoring is slow, and incomplete. "Disaster" may have been an overstatement, showing my panic... but I would be very happy if I can get the raid back, since the drives and the data on them should be okay.
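

    One thing worth noting from the fstab above: the Store filesystem is mounted by label, so the /dev/disk/by-label/Store symlink only exists while the array is assembled and its filesystem is visible. A quick check (just a sketch):


    # The symlink reappears once the md array is assembled and the ext4
    # label "Store" becomes visible again.
    ls -l /dev/disk/by-label/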

  • You could try restarting the system and see if it comes back up.


    The cat /proc/mdstat output shows the kernel's md raid state; yours is empty.


    mdadm --detail /dev/md0 returns an error


    fstab shows no entry for the raid device itself (the Store filesystem is mounted by label)


    blkid shows the drives as raid members, but no assembled array


    The above would suggest the system has become corrupt, or at least parts of it.


    If the raid does not reappear after a reboot, you could try mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]. This may or may not work, but if it does the array should come back.
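

    If the forced assembly does run, it is worth confirming the array state before mounting anything; a minimal check:


    # Confirm the array came up with all four members before touching
    # the filesystem.
    cat /proc/mdstat            # expect: md0 ... [4/4] [UUUU]
    mdadm --detail /dev/md0     # per-device state, set-A/set-B, event counts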

  • Great! Thank you!


    root@helios4:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]
    mdadm: looking for devices for /dev/md0
    mdadm: /dev/sda is identified as a member of /dev/md0, slot 2.
    mdadm: /dev/sdb is identified as a member of /dev/md0, slot 1.
    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 0.
    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 3.
    mdadm: forcing event count in /dev/sda(2) from 116615 upto 116649
    mdadm: forcing event count in /dev/sdd(3) from 116615 upto 116649
    mdadm: clearing FAULTY flag for device 3 in /dev/md0 for /dev/sdd
    mdadm: Marking array /dev/md0 as 'clean'
    mdadm: added /dev/sdb to /dev/md0 as 1
    mdadm: added /dev/sda to /dev/md0 as 2
    mdadm: added /dev/sdd to /dev/md0 as 3
    mdadm: added /dev/sdc to /dev/md0 as 0
    mdadm: /dev/md0 has been started with 4 drives.


    So the raid itself is back, but it seems to have lost all the files, well, the whole file system. Is there any chance of getting the data back now? (A read-only way to check is sketched at the end of this post.)


    (Sorry for the duplicates; every time I hit "Submit" the system gave me an error and told me to try again later, but the posts went through anyway.)
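

    A cautious, read-only way to check whether the data is actually gone, without writing anything to the array (a sketch; the mountpoint is the one from the fstab earlier in the thread):


    # Non-destructive filesystem check: -n opens the filesystem read-only
    # and answers 'no' to every repair prompt.
    fsck.ext4 -n /dev/md0

    # Try a read-only mount before anything that writes.
    mount -o ro /dev/md0 /srv/dev-disk-by-label-Store
    ls /srv/dev-disk-by-label-Store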

  • root@helios4:/mnt# mdadm --detail /dev/md0
    /dev/md0:
    Version : 1.2
    Creation Time : Sun Feb 18 14:53:39 2018
    Raid Level : raid10
    Array Size : 15627790336 (14903.82 GiB 16002.86 GB)
    Used Dev Size : 7813895168 (7451.91 GiB 8001.43 GB)
    Raid Devices : 4
    Total Devices : 4
    Persistence : Superblock is persistent


    Intent Bitmap : Internal


    Update Time : Fri Jan 24 19:10:23 2020
    State : clean
    Active Devices : 4
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 0


    Layout : near=2
    Chunk Size : 512K


    Name : helios4:Store (local to host helios4)
    UUID : d1e18bf2:0b0e760b:84bec773:f4dbf945
    Events : 116649


    Number   Major   Minor   RaidDevice   State
       6       8      32         0        active sync set-A   /dev/sdc
       4       8      16         1        active sync set-B   /dev/sdb
       7       8       0         2        active sync set-A   /dev/sda
       5       8      48         3        active sync set-B   /dev/sdd

  • LMAO.. Uh you pasted a magnet link in there to a torrent.. that um.. has a pretty interesting title. Never heard of Rebecca Volpetti.. pretty attractive but I'm really not much into porn.

  • Larry, you keep posting.. I'm assuming you're trying to edit out your post.


    I was just busting your balls a bit, we're all adults here. I'll edit the post, don't worry about it.


    Good luck getting your RAID fixed.

  • Is there some trick going on? Every time I hit "Submit" I get an error message telling me to try again later, but it seems that the messages are getting posted anyway.


    As far as why I was in /mnt... I had to be somewhere.

  • root@helios4:~# cat /proc/mdstat
    Personalities : [raid10]
    md0 : active (auto-read-only) raid10 sdc[6] sdd[5] sda[7] sdb[4]
    15627790336 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
    bitmap: 0/117 pages [0KB], 65536KB chunk


    unused devices: <none>
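

    A note on the (auto-read-only) flag above: the kernel starts a freshly assembled array read-only until the first write, and the filesystem will not have been mounted automatically, which may be why it looked like the files were gone. Assuming a read-only check of the filesystem comes back clean, a sketch of the remaining steps:


    # Clear the auto-read-only state (the first write would also do this).
    mdadm --readwrite /dev/md0

    # Mount via the existing fstab entry for the Store label.
    mount /srv/dev-disk-by-label-Store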
