OMV6 RAID 5 with Raspberry Pi4

  • Hi

    I have a little setup with my Raspberry Pi, running Raspbian Lite 64-bit. The Pi doesn't have 4 USB 3 ports, so I decided to attach a quad SATA HAT: https://wiki.radxa.com/Dual_Quad_SATA_HAT


    For drives, I used 4x 4 TB WD Red Plus.


    OMV can't create a RAID array from drives connected over USB, so I set up my RAID manually with this:

    Code
    sudo mdadm --create --verbose /dev/md/raid5 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    After a few seconds the RAID appears in the web UI, and I can watch its status with:


    Code
    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdd[4] sdc[2] sdb[1] sda[0]
          11720658432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
          [>....................]  recovery =  0.6% (23970784/3906886144) finish=627.4min speed=103140K/sec
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    unused devices: <none>

    But after 5-10 minutes I got:

    Code
    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdd[4](S) sdc[2] sdb[1] sda[0]
          11720658432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    unused devices: <none>

    If I'm not totally mistaken, "[UUU_]" means that the 4th drive is down.


    Here is the relevant line from the output of "mdadm --detail /dev/md127", which I think is the problem:

    Code
    -       0        0        3      removed



    Can someone please help me? I've been searching all over the internet, but I can't find any solution to this problem (or I'm searching wrong).


    Thanks in advance for any help.

  • KM0201

    • Official Post

    Can someone please help me?

    This is why we tell people over and over not to use RAID on an RPi (or over USB).

    omv 8.0.10-1 synchrony | 6.17 proxmox kernel

    plugins :: omvextrasorg 8.0.2 | kvm 8.0.4 | compose 8.1.2 | cterm 8.0 | borgbackup 8.1 | cputemp 8.0 | mergerfs 8.0 | scripts 8.0.1 | writecache 8.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • This is why we tell people over and over not to use RAID on an RPi (or over USB).

    I can see why, but I already have all this hardware and can't buy new stuff; I've already spent enough money.


    Would it help if I used Debian 11 or even Ubuntu on my RPi instead of Raspbian Lite?


    Otherwise, thanks for the reminder, but this isn't helping me.

    • Official Post

    can't buy some new stuff.

    Why do you need new stuff? Just format the drives individually and pool them with mergerfs.
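    For reference, a minimal sketch of that approach from the command line (in OMV you would normally do this through the web UI and the mergerfs plugin; the device names, labels, and mount points below are illustrative only, and mkfs destroys whatever is on the disk):

```shell
# Give each disk its own plain filesystem (WIPES all data on the disk)
sudo mkfs.ext4 -L disk1 /dev/sda
sudo mkfs.ext4 -L disk2 /dev/sdb
sudo mkfs.ext4 -L disk3 /dev/sdc
sudo mkfs.ext4 -L disk4 /dev/sdd

# Mount them side by side
sudo mkdir -p /srv/disk1 /srv/disk2 /srv/disk3 /srv/disk4 /srv/pool
sudo mount /dev/sda /srv/disk1
sudo mount /dev/sdb /srv/disk2
sudo mount /dev/sdc /srv/disk3
sudo mount /dev/sdd /srv/disk4

# Pool the four mounts into one tree; "mfs" puts new files
# on whichever branch has the most free space
sudo mergerfs -o defaults,allow_other,category.create=mfs \
    /srv/disk1:/srv/disk2:/srv/disk3:/srv/disk4 /srv/pool
```

    Unlike RAID, each disk keeps an ordinary filesystem, so a failed drive only takes its own files with it.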


    this isn't helping me

    I am trying to help you avoid this problem by telling you not to use RAID. Why do you need RAID?


    Would it help if I used Debian 11 or even Ubuntu on my RPi instead of Raspbian Lite?

    No. The problem is not the distro; it's that USB is not good for mdadm. You can't guarantee when all of the drives will be ready, and mdadm will already have tried to assemble the array, leaving it in the bad state you are seeing.
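    To see this in action, you can check which member disks the kernel and mdadm can actually see at a given moment (device names assumed from the mdstat output above):

```shell
# Which disks are attached right now, and over which transport (usb/sata)?
lsblk -o NAME,SIZE,TRAN,MODEL

# Which md member superblocks does mdadm find on them?
sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd
```

    If one of the USB disks enumerates late, it is simply missing from this list at assembly time, and the array comes up degraded without it.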


  • Why do you need new stuff? Just format the drives individually and pool them with mergerfs.

    OK, I've never used mergerfs, but I will look into it, thanks.


    I am trying to help you avoid this problem by telling you not to use RAID. Why do you need RAID?

    Is RAID really that bad?

    In case one of the drives fails.



    No. The problem is not the distro; it's that USB is not good for mdadm. You can't guarantee when all of the drives will be ready, and mdadm will already have tried to assemble the array, leaving it in the bad state you are seeing.

    I see.


    Thanks.

    • Official Post

    Is RAID really that bad?

    There is nothing wrong with RAID at all when it is used on proper hardware. An RPi is not proper hardware, because there is nothing redundant about it. And yes, RAID can keep the system running if a drive fails, but it is not backup. If you are using mergerfs and a drive fails, you only lose the content on that disk: replace the drive, restore that drive's files from backup, and you are good to go again. When you replace a drive in a RAID 5 array, the array stays up, but while it is rebuilding it will be very slow, especially if you are trying to use it at the same time.


  • So if I want RAID with my RPi, it would be a pain in the ass at this moment, right?


    Actually, I could use the QNAP boards (I modified a QNAP case for my RPi, and I have everything I need to turn it back into its old state).

    • Official Post

    would be a pain in the ass at this moment, right?

    It won't be reliable, and reliability is the whole reason for using RAID. Every time you start the RPi, you might have to stop the array and reassemble it. Reassembling will kick off a rebuild, which makes the array slow and is hard on the disks.
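    The stop-and-reassemble cycle being described would look roughly like this (a sketch only, with device names taken from the mdstat output earlier in the thread):

```shell
# Stop the degraded, partially assembled array
sudo mdadm --stop /dev/md127

# Once all four USB disks have settled, reassemble from all members
sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# If a member was kicked out entirely, adding it back triggers a full rebuild
sudo mdadm /dev/md127 --add /dev/sdd

# Watch the (slow) recovery
cat /proc/mdstat
```

    Doing this on every boot is exactly the kind of wear and unreliability being warned about here.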


    I don't use RAID at home anymore; I pool disks with mergerfs myself.


  • It won't be reliable, and reliability is the whole reason for using RAID. Every time you start the RPi, you might have to stop the array and reassemble it. Reassembling will kick off a rebuild, which makes the array slow and is hard on the disks.

    I see the problem.


    Thanks for the help, ryecoaaron!

  • schmaex

    Added the Label resolved
