Posts by Ian-Polito

    Well, once I created the array on the CLI, OMV did indeed recognize it in the web GUI and I was able to proceed from there. It had been working for almost a year. I used the command you gave me above

    mdadm -A -R --force /dev/md/vol1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    with the following output

    mdadm: forcing event count in /dev/sdc1(2) from 16093 upto 16100
    mdadm: forcing event count in /dev/sdd1(3) from 16093 upto 16100
    mdadm: clearing FAULTY flag for device 2 in /dev/md/vol1 for /dev/sdc1
    mdadm: clearing FAULTY flag for device 3 in /dev/md/vol1 for /dev/sdd1
    mdadm: Marking array /dev/md/vol1 as 'clean'
    mdadm: /dev/md/vol1 has been started with 4 drives.

    and mounted it with the command

    sudo mount /dev/md/vol1 /mnt

    The web GUI once again recognized the RAID array and everything is working again. Thank you for your help; when I get the chance I will set the system up again as you described.
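    For anyone finding this thread later: after a force-assembly like the one above, it's worth making sure the array and mount survive a reboot. A minimal sketch, assuming the array name /dev/md/vol1 from above and a Debian-based OMV install (these commands touch real devices, so adapt before running):

    ```shell
    # Append the current array definition to mdadm.conf
    # (remove any stale ARRAY lines for the same UUID first)
    mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

    # Rebuild the initramfs so the updated config is used at boot
    sudo update-initramfs -u

    # Find the filesystem UUID; mounting by UUID in /etc/fstab is safer
    # than /dev/md/vol1, since md device names can change between boots
    sudo blkid /dev/md/vol1
    ```

    Mounting by UUID avoids the "md127" renaming issue mentioned further down in the thread.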

    The Quad SATA Kit connects the drives to the Pi via its USB ports, and OMV would not allow me to create the RAID array in the web GUI, so I made the array from the command line over SSH. I followed this guide…erry-pi3-raid-nas-server/

    which used the following commands

    mdadm --create --verbose /dev/md/vol1 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    mkfs.ext4 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md/vol1
    sudo mount /dev/md/vol1 /mnt

    The guide notes that

    if your volume name doesn’t show, it’ll be called “md127” or similar, this is a bug in mdadm, but continue the guide using the name you gave your array
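    To see which name the kernel actually assigned (the "md127" case the guide warns about), these standard checks work regardless of the name. A sketch, assuming the vol1 array from the commands above:

    ```shell
    # Kernel's view of all assembled arrays and their md names
    cat /proc/mdstat

    # List the md device nodes; /dev/md/vol1 is usually a symlink
    # to whatever numeric name (md0, md127, ...) was assigned
    ls -l /dev/md*

    # --detail accepts either name and shows state, level, and members
    sudo mdadm --detail /dev/md/vol1
    ```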

    cat /proc/mdstat

    Personalities :
    unused devices: <none>


    /dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="592B-C92C" TYPE="vfat" PARTUUID="39bcf0e4-01"
    /dev/mmcblk0p2: LABEL="rootfs" UUID="706944a6-7d0f-4a45-9f8c-7fb07375e9f7" TYPE="ext4" PARTUUID="39bcf0e4-02"
    /dev/sda1: UUID="9e5083d2-fa18-e001-43bf-1c25d1c1bf8c" UUID_SUB="6b756cf5-a31a-93e0-2e01-c609ac27ae56" LABEL="PiNAS:vol1" TYPE="linux_raid_member" PARTUUID="dd0495b9-e94a-7c43-9fb1-63380a450899"
    /dev/sdb1: UUID="9e5083d2-fa18-e001-43bf-1c25d1c1bf8c" UUID_SUB="41717c85-3617-0255-77a6-86556384fc1e" LABEL="PiNAS:vol1" TYPE="linux_raid_member" PARTUUID="12baed32-f89d-ca4b-b42b-81517a6c9059"
    /dev/sdc1: UUID="9e5083d2-fa18-e001-43bf-1c25d1c1bf8c" UUID_SUB="b2f78f74-776c-6b30-9ffb-2a06c72c4e34" LABEL="PiNAS:vol1" TYPE="linux_raid_member" PARTUUID="98256492-5627-4641-a620-70e1e2d69f83"
    /dev/sdd1: UUID="9e5083d2-fa18-e001-43bf-1c25d1c1bf8c" UUID_SUB="ef3a0011-2f2e-7088-a4a7-932d24f797cc" LABEL="PiNAS:vol1" TYPE="linux_raid_member" PARTUUID="31c36c97-cc56-d741-b8b9-65558edbbda0"
    /dev/mmcblk0: PTUUID="39bcf0e4" PTTYPE="dos"

    fdisk -l | grep "Disk "

    cat /etc/mdadm/mdadm.conf

    mdadm --detail --scan --verbose

    no output

    I am a noob at using OMV and RAID, so I apologize in advance. I have not updated the system, yet for some reason the RAID array has stopped working: it no longer appears in the web GUI, and I am unable to access the files by any means. I have run tests on the drives and determined that none of the four in this RAID5 setup have died. I use OMV on a Raspberry Pi with the QUAD SATA Kit. I am not sure where to go from here to repair the array; if there's any additional information or command output I can provide, let me know. Hoping someone can help me with this issue.
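    The command outputs pasted earlier in the thread came from the usual mdadm diagnostics. For reference, a sketch of the information-gathering commands for a case like this (read-only, but they assume the four members are /dev/sda1 through /dev/sdd1 as in this setup):

    ```shell
    # Kernel's view of assembled arrays (empty here: "unused devices: <none>")
    cat /proc/mdstat

    # Do the members still identify as linux_raid_member?
    sudo blkid

    # Per-member superblock state and event counts; mismatched event
    # counts are what --force reconciles during assembly
    sudo mdadm --examine /dev/sd[a-d]1

    # What mdadm can currently detect on the system
    sudo mdadm --detail --scan --verbose

    # Look for USB disconnects or md errors, relevant on a USB-attached kit
    dmesg | grep -i -E 'md:|usb'
    ```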