Disk dropped out of array

  • Over the last few days I noticed that when I'd copy a file to OMV, one of the drive lights would flicker for a second, then all four would flicker. Last night I saw the opposite - three would flicker, one never would. I looked on the web interface and the RAID reports "clean but degraded"; sda, sdb, and sdc were in the RAID, but sdd was missing. I came here and saw a post that requested this info:
    root@helios4:/home/larry# cat /proc/mdstat
    Personalities : [raid10]
    md0 : active raid10 sda[0] sdc[3] sdb[2]
    15627790336 blocks super 1.2 512K chunks 2 near-copies [4/3] [U_UU]
    bitmap: 26/117 pages [104KB], 65536KB chunk


    unused devices: <none>



    root@helios4:/home/larry# blkid
    /dev/mmcblk0p1: UUID="1f489a8c-b3a3-4218-b92b-9f1999841c52" TYPE="ext4" PARTUUID="7fb57f23-01"
    /dev/sda: UUID="d1e18bf2-0b0e-760b-84be-c773f4dbf945" UUID_SUB="09d3b6c9-f312-e5b8-14c4-dc128ed0abde" LABEL="helios4:Store" TYPE="linux_raid_member"
    /dev/sdb: UUID="d1e18bf2-0b0e-760b-84be-c773f4dbf945" UUID_SUB="b2ad94c4-58fa-554e-0508-fb8cbf6f6eec" LABEL="helios4:Store" TYPE="linux_raid_member"
    /dev/md0: UUID="GmgEll-khiX-a7DB-5HNZ-KGRm-5vGq-1vPV4w" TYPE="LVM2_member"
    /dev/sdc: UUID="d1e18bf2-0b0e-760b-84be-c773f4dbf945" UUID_SUB="4235d123-bcec-3f18-5ec2-6e530400c8b4" LABEL="helios4:Store" TYPE="linux_raid_member"
    /dev/mapper/Store-Store: LABEL="Store" UUID="6c7b4b44-4cae-4169-95fe-d9a14d04e814" TYPE="ext4"
    /dev/zram0: UUID="e94d3e0b-c8fb-4b8c-b780-035797842a7d" TYPE="swap"
    /dev/zram1: UUID="4b9d8a94-1260-49a9-b23f-57fb627229d6" TYPE="swap"
    /dev/sdd: UUID="d1e18bf2-0b0e-760b-84be-c773f4dbf945" UUID_SUB="f69820ac-517a-b430-0a2b-ae6c52d1922f" LABEL="helios4:Store" TYPE="linux_raid_member"
    /dev/mmcblk0: PTUUID="7fb57f23" PTTYPE="dos"
    /dev/mmcblk0p2: PARTUUID="7fb57f23-02"



    root@helios4:/home/larry# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 name=helios4:Store UUID=d1e18bf2:0b0e760b:84bec773:f4dbf945



    root@helios4:/home/larry# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=helios4:Store UUID=d1e18bf2:0b0e760b:84bec773:f4dbf945
    devices=/dev/sda,/dev/sdb,/dev/sdc


    Is there a way to tell what happened to the fourth drive and get it back in?
    thanks, Larry
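In case it helps anyone landing here with the same symptoms, a hedged sketch of the checks for a dropped member. Device names /dev/sdd and /dev/md0 and the [U_UU] mask come from the output above; whether --re-add succeeds depends on the state of the member's superblock.

```shell
# Commands I'd try on the box itself (as root) -- shown as comments since
# they need the real hardware:
#
#   dmesg | grep -iE 'sdd|ata4'                 # kernel log: link resets, I/O errors?
#   mdadm --examine /dev/sdd                    # is the md superblock still intact?
#   mdadm --manage /dev/md0 --re-add /dev/sdd   # with the bitmap, resync is short
#   mdadm --manage /dev/md0 --add /dev/sdd      # fallback: triggers a full rebuild
#
# The '_' in mdstat's status mask marks the empty slot; locating it:
status='[U_UU]'
slot=$(echo "$status" | grep -o '[U_]' | grep -n '_' | cut -d: -f1)
echo "missing slot (1-based): $slot"
```

With super 1.2 metadata and a write-intent bitmap (both visible in the mdstat above), a successful --re-add usually only resyncs the blocks that changed while the drive was out.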


    HELP!! A second disk has dropped out of the array. Now sdb and sdd are missing. The Physical Drives tab still shows all four. How do I figure out why the system isn't using them, and how do I convince it to put them back in? I have a spare drive that I can sub in if that will help.


    Can someone at least help me ensure I am interpreting the lsscsi output correctly? Given this:
    lsscsi --verbose
    [0:0:0:0]    disk    ATA      ST8000DM004-2CX1 0001   /dev/sda
      dir: /sys/bus/scsi/devices/0:0:0:0  [/sys/devices/platform/soc/soc:internal-regs/f10a8000.sata/ata1/host0/target0:0:0/0:0:0:0]
    [1:0:0:0]    disk    ATA      ST8000DM004-2CX1 0001   /dev/sdb
      dir: /sys/bus/scsi/devices/1:0:0:0  [/sys/devices/platform/soc/soc:internal-regs/f10a8000.sata/ata2/host1/target1:0:0/1:0:0:0]
    [2:0:0:0]    disk    ATA      ST8000DM004-2CX1 0001   /dev/sdc
      dir: /sys/bus/scsi/devices/2:0:0:0  [/sys/devices/platform/soc/soc:internal-regs/f10e0000.sata/ata3/host2/target2:0:0/2:0:0:0]
    [3:0:0:0]    disk    ATA      ST8000DM004-2CX1 0001   /dev/sdd
      dir: /sys/bus/scsi/devices/3:0:0:0  [/sys/devices/platform/soc/soc:internal-regs/f10e0000.sata/ata4/host3/target3:0:0/3:0:0:0]

    Do I take it that sda is on sata1, sdb on sata2, sdc on sata3, and sdd on sata4? I have a spare drive that I can plug in, but I'd kinda like to make sure I put it on sdb or sdd. Thanks.
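The sysfs paths in that lsscsi output do encode the port, so the reading of sda on ata1 through sdd on ata4 can be checked directly. A hedged sketch: the readlink loop in the comments is what I'd run on the Helios4 itself; the executable part just extracts the port from the path lsscsi printed for sdd.

```shell
# On the box, the block-device -> ATA-port mapping can be read from sysfs:
#
#   for d in /sys/block/sd?; do
#       echo "$(basename "$d") -> $(readlink -f "$d" | grep -oE 'ata[0-9]+')"
#   done
#
# The same port name is embedded in the path lsscsi already printed:
path='/sys/devices/platform/soc/soc:internal-regs/f10e0000.sata/ata4/host3/target3:0:0/3:0:0:0'
port=$(echo "$path" | grep -oE 'ata[0-9]+')
echo "sdd sits on $port"
```

One caveat worth hedging: ataN numbers and /dev/sdX letters are both assigned at probe time, so the mapping should be re-checked after any reboot or drive swap rather than assumed stable.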


  • Thank you very much for the reply. The hardware is a Helios system and I've also been working with the Kobol people trying to figure this out. Long story a little shorter, after a few hardware tests what I ended up doing was using the "Wipe" on one of the physical drives not in the raid, then going to the raid tab and recovering it back in. These are 8TB drives so in 700+ minutes I should be able to repeat for the other one and then be back to full strength. :) Not sure why neither drive showed there until I wiped (not even a brand new drive), but okay, on my way now.


    While I was initially convinced it was a hardware problem, the fact that the wipe (which I assume means "re-format") now allows the drive to be recovered makes me think there was some sort of data problem on the drive which the wipe cleared. Right?
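My understanding of what probably happened, hedged since I can't see OMV's code from here: the "Wipe" most likely cleared on-disk signatures rather than doing a full re-format, which would explain why the drive only became a recovery candidate afterwards.

```shell
# Roughly what a signature wipe amounts to (commands shown for reference
# only -- they destroy metadata on the named disk):
#
#   wipefs --all /dev/sdd              # remove filesystem/RAID signatures
#   mdadm --zero-superblock /dev/sdd   # or clear just the md superblock
#
# A disk still carrying a stale md superblock "belongs" to an array, so a
# manager may not offer it as a fresh recovery device until the signature
# is gone.
```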


    One more question for any RAID expert: I had these four drives set up in RAID 10, meaning striped and mirrored. Does it pair two physical drives and stripe data between them, pair the other two the same way, and then mirror the two pairs? Or does it make two mirrored pairs and then stripe across the pairs? The difference is subtle; I'm just curious.
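Not claiming expert status, but the mdstat line above ("2 near-copies") pins this down: md's raid10 in the default near layout puts the two copies of each chunk on adjacent devices, which with four disks behaves like mirrored pairs (sda+sdb, sdc+sdd) striped across. A small sketch of the chunk-to-device mapping:

```shell
# 'near' layout, 4 devices, 2 copies: chunk k lands on devices
# (2k mod 4) and (2k+1 mod 4), i.e. adjacent mirror pairs.
for chunk in 0 1 2 3; do
    d1=$(( (chunk * 2) % 4 ))
    d2=$(( (chunk * 2 + 1) % 4 ))
    echo "chunk $chunk -> devices $d1 and $d2"
done
```

So it's effectively mirror-then-stripe: losing both disks of one pair kills the array, while losing one disk from each pair (as nearly happened above with sdb and sdd) leaves it degraded but readable.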


    thanks, Larry

  • Yes, they were brand new drives.


    As to why - the hardware guys blame the current draw of the drives. These are 4 Seagate Barracudas that I ripped out of USB drives (I got all four 8TB drives for just under $125 each). I thought 7200 RPM drives would be a good thing. I looked it up and the current draw is 2A, versus 1.8A on a NAS drive. I dunno, 200 milliamps doesn't sound like a big difference to me, but that's their thought.
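For scale, the steady-state numbers quoted work out as below; the bigger factor (and presumably the hardware guys' real worry) is spin-up current, which can be well above the running figure and hits all four drives at once at boot.

```shell
# Difference in quoted steady-state draw across all four drives, in mA
# (2.0 A per desktop drive vs 1.8 A per NAS drive, figures from the post):
extra_ma=$(( (2000 - 1800) * 4 ))
echo "extra draw with four drives: ${extra_ma} mA"
```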

  • Oh, the Helios box is a really nice little system. Up to four 3.5" disks, two USB 3.0 ports, and gigabit Ethernet; it runs Debian Linux and OMV, in a case that's not much bigger than the 4 drives, at a very reasonable price. Other than a slightly beefier power supply, the only change I'd like to see is an HDMI port so I could run conky on it.


    I use mine to hold tons of movies and TV shows for my Kodi devices - it even runs MySQL so they all stay in sync.


    Very nice little box.

  • Update: After using "wipe" on the excluded disks, I was able to get them back into the array and all is fine now.


    Not sure what caused them to burp out of the array in the first place, but there haven't been any problems since.
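One follow-up worth considering so the next dropout doesn't go unnoticed for days. Hedged: mdadm's monitor mode and smartmontools are standard Debian tools, but the exact mail setup depends on the box.

```shell
# Commands shown for reference only (they need the real hardware):
#
#   mdadm --monitor --scan --daemonise --mail=root     # mail on Fail/DegradedArray events
#   smartctl -A /dev/sda | grep -iE 'realloc|pending'  # early-warning sector counters
#   dmesg | grep -iE 'ata[0-9]+.*(reset|error)'        # link resets preceding a dropout
```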
