raid array unavailable

  • I am new to OMV and RAID, so I apologize in advance. I have not updated the system, yet for some reason the RAID array has stopped working. It no longer appears in the web GUI, and I am unable to access my files by any means. I have tested the drives and determined that none of the four in this RAID5 setup have died. I run OMV on a Raspberry Pi with the Quad SATA Kit. I am not sure where to go from here to repair the array; if there is any additional information or command output I can provide, let me know. Hoping someone can help me with this issue.

  • cat /proc/mdstat

    Code
    Personalities :
    unused devices: <none>

    blkid

    Code
    /dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="592B-C92C" TYPE="vfat" PARTUUID="39bcf0e4-01"
    /dev/mmcblk0p2: LABEL="rootfs" UUID="706944a6-7d0f-4a45-9f8c-7fb07375e9f7" TYPE="ext4" PARTUUID="39bcf0e4-02"
    /dev/sda1: UUID="9e5083d2-fa18-e001-43bf-1c25d1c1bf8c" UUID_SUB="6b756cf5-a31a-93e0-2e01-c609ac27ae56" LABEL="PiNAS:vol1" TYPE="linux_raid_member" PARTUUID="dd0495b9-e94a-7c43-9fb1-63380a450899"
    /dev/sdb1: UUID="9e5083d2-fa18-e001-43bf-1c25d1c1bf8c" UUID_SUB="41717c85-3617-0255-77a6-86556384fc1e" LABEL="PiNAS:vol1" TYPE="linux_raid_member" PARTUUID="12baed32-f89d-ca4b-b42b-81517a6c9059"
    /dev/sdc1: UUID="9e5083d2-fa18-e001-43bf-1c25d1c1bf8c" UUID_SUB="b2f78f74-776c-6b30-9ffb-2a06c72c4e34" LABEL="PiNAS:vol1" TYPE="linux_raid_member" PARTUUID="98256492-5627-4641-a620-70e1e2d69f83"
    /dev/sdd1: UUID="9e5083d2-fa18-e001-43bf-1c25d1c1bf8c" UUID_SUB="ef3a0011-2f2e-7088-a4a7-932d24f797cc" LABEL="PiNAS:vol1" TYPE="linux_raid_member" PARTUUID="31c36c97-cc56-d741-b8b9-65558edbbda0"
    /dev/mmcblk0: PTUUID="39bcf0e4" PTTYPE="dos"

    fdisk -l | grep "Disk "

    cat /etc/mdadm/mdadm.conf

    mdadm --detail --scan --verbose

    Code
    no output
  • How was the array created? The output from blkid suggests the drives were partitioned and/or had a filesystem on them before the array was created.


    As OMV uses the complete drive, the output from blkid should show /dev/sda, not /dev/sda1. Also, this from mdadm.conf

    Code
    # definitions of existing MD arrays
    ARRAY /dev/md/vol1 metadata=1.2 name=PiNAS:vol1 UUID=9e5083d2:fa18e001:43bf1c25:d1c1bf8c

    has me puzzled, as usually the array is created with a device reference, e.g. md0, md127, etc.
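
    For comparison, an array created on whole disks would show blkid entries without a partition number, something like the hypothetical line below (the UUIDs are placeholders, not your values):

    Code
    /dev/sda: UUID="..." UUID_SUB="..." LABEL="PiNAS:vol1" TYPE="linux_raid_member"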


    You could try the command below; I have no idea if that will work.

    Code
    mdadm -A -R --force /dev/md/vol1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
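
    Before forcing anything, it may also be worth reading the RAID superblocks to see how far out of sync the members are. A diagnostic sketch; mdadm --examine only reads, it changes nothing:

    Code
    # Print each member's superblock and pull out the lines that matter:
    # the per-device "Events" counter and the "Array State" flags.
    mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 | grep -E "^/dev|Events|Array State"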

    Raid is not a backup! Would you go skydiving without a parachute?


  • The Quad SATA Kit connects the drives to the Pi through its USB ports, and OMV would not allow me to create the RAID array in the web GUI. I made the array from the command line over SSH. I followed this guide: https://www.ricmedia.com/build…erry-pi3-raid-nas-server/

    It used the following commands:

    Code
    # Create a RAID5 array from three partitions, with the fourth as a hot spare
    mdadm --create --verbose /dev/md/vol1 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1
    # Record the array in mdadm.conf so it is assembled at boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # Format with ext4; stride/stripe-width are tuned to the RAID chunk geometry
    mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md/vol1
    # Mount the array under /mnt
    sudo mount /dev/md/vol1 /mnt

    The guide notes that

    if your volume name doesn’t show, it’ll be called “md127” or similar, this is a bug in mdadm, but continue the guide using the name you gave your array
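
    (For reference: the md127 name usually appears when the running array cannot be matched against /etc/mdadm/mdadm.conf at assembly time. On a Debian-based OMV install the chosen name can normally be pinned with the two commands below; this is a sketch, and assumes the ARRAY line from mdadm --detail --scan is already in mdadm.conf.)

    Code
    # Check that the array definition was actually recorded
    grep ARRAY /etc/mdadm/mdadm.conf
    # Rebuild the initramfs so the name is known at early boot
    update-initramfs -u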

  • The Quad SATA Kit connects the drives to the Pi through its USB ports, and OMV would not allow me to create the RAID array in the web GUI

    I know that; the ability to create an array using USB-attached devices was removed.

    I followed this guide, which used the following commands

    That guide is specific to creating an array on Raspberry Pi OS Lite. OMV does not mount its drives under /mnt, and the same applies when creating an array.

    I also believe that if the filesystem is created on the CLI, OMV is not aware of it, so if you followed that guide exactly I fail to understand how your system worked.


    1) Deploy OMV as per this guide

    2) Connect hard drives, these will be displayed in Storage -> Disks

    3) Wipe the drives

    4) Create the array from the cli using the whole block device /dev/sdX, where X is the drive reference (see the sketch after this list)

    5) Create the filesystem and mount the array in OMV's GUI
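
    A minimal sketch of steps 3 and 4 from the cli, assuming the four drives are sda through sdd and that md0 is the name you want. Double-check device names with lsblk first, as both commands destroy data; OMV's wipe under Storage -> Disks does the same job as wipefs:

    Code
    # Step 3: remove old partition, filesystem and raid signatures (DESTRUCTIVE)
    wipefs --all /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # Step 4: create a RAID5 array directly on the whole block devices
    mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd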

    Raid is not a backup! Would you go skydiving without a parachute?

  • Well, once I created the array on the CLI, OMV did indeed recognize it in the web GUI, and I was able to proceed from there. It was working for almost a year. I used the command you gave me above

    Code
    mdadm -A -R --force /dev/md/vol1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    with the following output

    Code
    mdadm: forcing event count in /dev/sdc1(2) from 16093 upto 16100
    mdadm: forcing event count in /dev/sdd1(3) from 16093 upto 16100
    mdadm: clearing FAULTY flag for device 2 in /dev/md/vol1 for /dev/sdc1
    mdadm: clearing FAULTY flag for device 3 in /dev/md/vol1 for /dev/sdd1
    mdadm: Marking array /dev/md/vol1 as 'clean'
    mdadm: /dev/md/vol1 has been started with 4 drives.

    and mounted it with the command

    Code
    sudo mount /dev/md/vol1 /mnt

    The web GUI once again recognized the RAID array, and everything is working again. Thank you for your help; when I get the chance I will set the system up again as you described.
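
    For anyone who lands here later, these read-only checks are a reasonable way to confirm the array is healthy again before trusting it with data (a sketch; neither command modifies anything):

    Code
    # Overall kernel view: personalities, member devices, sync status
    cat /proc/mdstat
    # Detailed view: state, event count, and per-device roles
    mdadm --detail /dev/md/vol1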

  • geaves: Added the label "resolved".
