OMV6 RPI CM4 IO ZFS with 6x HDD 4TB. Error 500/404.

  • I'm new here, so please be understanding. I'm running the latest available version of OMV6 on an RPI CM4 8GB. I used 6x identical 4TB HDDs as disks. I wanted to create RAID5.

    It takes a long time and every few seconds I get an error:


    Where is the problem?

  • RPI CM4 8GB. I used 6x identical 4TB HDDs as disks. I wanted to create RAID5.

    Which carrier board?

    How are the drives connected?

  • Which carrier board?

    How are the drives connected?

    Via the CM4 IO Board and a PCIe controller card with 10 SATA ports based on an ASMedia 1061 chip.

    All 6 disks are recognized in OMV and have been wiped.


    I interrupted the operation, which lasted over half an hour. Now I cannot format individual drives because it says:

    Code
    Warning: Partition table header claims that the size of partition table
    entries is 0 bytes, but this program supports only 128-byte entries.
    Adjusting accordingly, but partition table may be garbage.
    Warning: Partition table header claims that the size of partition table
    entries is 0 bytes, but this program supports only 128-byte entries.
    Adjusting accordingly, but partition table may be garbage.
    
    ** CONNECTION LOST **
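    Those warnings mean the GPT on the disk was left half-written when the RAID creation was aborted. In that state, the usual fix is to zap the partition table before formatting again. A minimal sketch, assuming the affected disk is /dev/sda (an assumption, check `lsblk` first; this is destructive):

    ```shell
    # Assumption: the affected disk is /dev/sda -- verify with lsblk before running!
    DISK=/dev/sda
    # Destroy the (corrupted) GPT and MBR data structures on the disk
    sudo sgdisk --zap-all "$DISK" || echo "sgdisk failed (check device/permissions)"
    # Remove any leftover filesystem or RAID signatures as well
    sudo wipefs --all "$DISK" || echo "wipefs failed (check device/permissions)"
    ```

    After this, the disk should show up as blank in OMV and can be wiped/formatted normally from the web UI.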
    



    Additionally, a software RAID has been created, but I can't do anything with it, not even delete it.
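    To get rid of a leftover mdadm array from the CLI, the array has to be stopped and the member superblocks zeroed. A sketch, assuming the array is /dev/md0 and the members are /dev/sd[a-f] (both are assumptions; `cat /proc/mdstat` shows the real names; zeroing superblocks destroys the array):

    ```shell
    # Assumption: the leftover array is /dev/md0 (check cat /proc/mdstat)
    MD=/dev/md0
    # Stop the array so the kernel releases the member disks
    sudo mdadm --stop "$MD" || echo "could not stop $MD"
    # Erase the RAID superblock on each member so the array is no longer detected
    # (assumption: the members are /dev/sda .. /dev/sdf)
    for d in /dev/sd[a-f]; do
      sudo mdadm --zero-superblock "$d" || echo "could not zero $d"
    done
    ```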


    OK. After a restart I was able to format the HDDs and destroy the software RAID, but I can't create a new ZFS RAIDZ-1 pool. I get an error:


    Code
    modprobe: FATAL: Module zfs not found in directory /lib/modules/6.1.61-v8+
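    That error means no ZFS kernel module was built for the kernel that is actually running. A quick way to see what is (or is not) in place, assuming zfs-dkms packaging is in use:

    ```shell
    # Which kernel is actually running?
    uname -r
    # Is a ZFS module available for it? (modinfo fails if none is installed)
    modinfo zfs 2>/dev/null || echo "no zfs module for kernel $(uname -r)"
    # If zfs-dkms is installed, check whether DKMS has built zfs for this kernel
    dkms status 2>/dev/null || echo "dkms not installed"
    ```

    If `dkms status` lists zfs but not for the running kernel version, the module was built against a different kernel than the one booted.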



    Reinstalling the ZFS handler reports errors:


    I had to upgrade the kernel version because the standard version did not properly support fan control on the CM4 IO board.


    Do I need to upgrade to the Proxmox kernel using the plugin system?



    I found a solution that I used.


    Code
    sudo apt purge --auto-remove zfsutils-linux
    sudo apt install zfs-dkms
    sudo apt install zfsutils-linux
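    After the DKMS build finishes, it is worth verifying that the module now matches the running kernel before going back to the web UI. A sketch:

    ```shell
    # Load the freshly built module (fails if the DKMS build did not succeed)
    sudo modprobe zfs || echo "modprobe zfs failed"
    # Confirm the userland tools and the kernel module agree on the version
    zfs version 2>/dev/null || echo "zfs userland tools not found"
    # Once loaded, the module version is also visible in sysfs
    cat /sys/module/zfs/version 2>/dev/null || echo "zfs module not loaded"
    ```

    Note that zfs-dkms rebuilds the module on every kernel upgrade, so the matching kernel headers package must stay installed.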



    Now I have visible BOOT resources, but the ZFS tab has disappeared from the menu again.




    Next:

    When I install openmediavault-zfs 6.0.14 from the plugin system, the ZFS option appears in the menu, but all visible file systems disappear, including the BOOT device.

    I don't know what's going on anymore and I'm asking for help.

  • Jedrek

    Changed the title of the thread from “OMV6 Software Raid (Raid5) with 6x HDD 4TB. Creating or nothing ?” to “OMV6 Software Raid (Raid5) or ZFS with 6x HDD 4TB. Creating for nothing ?”.
    • Official Post

    Regardless of the problems that ZFS may generate on a Pi, a possible problem that you are going to have is this: that PCIe SATA board is not a good choice, and even less so for creating a RAID. As you can see on the ASMedia website, that chip only works with one lane and multiplies the SATA ports by serializing commands. https://www.asmedia.com.tw/pro…X3HyepH7/58dYQ8bxZ4UR9wG5

    That means that a command is not sent to the next disk until the previous one has finished executing its command. This behavior is bad in itself, but in a RAID it is unacceptable.


    If you still have time I would try to return that card. Here is a list of chips that may be recommended. RE: Unexpected disk errors seen with PCIE card when adding more than 2 disks

  • Thank you for the information, but for now it seems to me that the problem is partly on the software side. If you could review the logs I posted and the descriptions in the posts below, I would be grateful.



    Why is it that when I uninstall the ZFS plugin (openmediavault-zfs 6.0.14), I can see the other resources, e.g. BOOT, and can then create a software RAID5 that works without any problems? As soon as I install the ZFS plugin, everything disappears and I can't see any resources except the disks. Then I always get error 500.

    I do not understand this.

    This image shows the state when I want to add a ZFS resource: I get a 404 error, and when I add the ZFS application from plugins, I get a 500 error in the ZFS tab.
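    A 500 from the OMV web UI is only the surface; the underlying error usually shows up in the backend logs. One hedged way to look, assuming a systemd journal is available and the backend daemon is omv-engined (the standard OMV6 setup):

    ```shell
    # The web UI's error 500 comes from the backend daemon (omv-engined),
    # which logs to syslog -- look there for the underlying RPC/PHP error
    sudo journalctl --since "10 minutes ago" 2>/dev/null | grep -i "omv-engined" | tail -n 20
    ```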

    • Official Post

    Regarding ZFS, it seems like a problem with the compilation of the ZFS module in the kernel. I don't really know how to solve this on a PI or if it can be solved, it depends on the kernel you are using and if that kernel supports it or not, you will have to wait for someone else's help for that.

    I should also tell you that you cannot use the openmediavault-kernel plugin on a Pi; it does not work.

    In any case, even if you manage to solve it, you will still have the problem with that card: no type of RAID will work properly with it, or if it does, it will be far from optimal.

  • I understand you perfectly. For now I want to deal with the software problem; I will deal with the hardware later. Right now I don't even know exactly where the bigger problem is, but you have to start somewhere and eliminate the individual problems systematically, one by one.

    I still need help, probably with the kernel.


    I read that you need to install the Proxmox kernel but I have problems. I was inspired by this topic: Kernel Proxmox installation

    I get an error when I install the kernel (see screenshot).

  • Jedrek

    Changed the title of the thread from “OMV6 Software Raid (Raid5) or ZFS with 6x HDD 4TB. Creating for nothing ?” to “OMV6 RPI CM4 IO ZFS with 6x HDD 4TB. Error 500/404.”.
  • Hi Jedrek,

    That will not work, because omv-installproxmox will try to download the amd64 version of the Proxmox kernel, which indeed will not work with the arm64 architecture of the Raspberry Pi 4.

    I am now trying out OMV on a RPI4 before I set it up as a NAS


    ⚠️ NO GUARANTEE OF ANYTHING WORKING AND I TAKE ZERO/NO RESPONSIBILITY IF THINGS BREAK


    This is my edited omv-installproxmox, which gets the kernel from the Pimox 7 repo available here

    First get the GPG key:

    Code
    curl https://raw.githubusercontent.com/pimox/pimox7/master/KEY.gpg | apt-key add -


    Then overwrite omv-installproxmox with this version and install the kernel.

    Once you have rebooted, uname -a should show the new kernel.

    Now install the ZFS plugin, which will build the modules for the new kernel (give it some time).


    This was the result

    Now I have to get some disks connected to the RPI4 and verify if ZFS is actually working
    If anyone is also experimenting with this, please share feedback or test results.

    Thanks!

    QNAP SS-439-Pro EOL - OMV6 soon™

    1 Kingston A400 120GB SSD 2.5" - OMV /

    4 WD Red 1 TB NAS SSD 2.5" - ZFS striped-mirror

    Evaluating™

    2x OMV6
    Raspberry Pi 4
    Geekworm NASPi Gemini 2.5 V2.0 Dual 2.5'' SATA HDD/SSD NAS Case
    2 WD Red 1TB NAS HDD 2.5" 5400 RPM - ZFS stripe

  • I added a single USB stick ZFS pool (just to see if it works), and it does.



    However, the ZFS plugin does not show anything in the web interface, or worse, the gateway times out.
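    For reference, a single-device test pool like that can also be created and checked from the CLI, independently of the plugin. A sketch, assuming the stick shows up as /dev/sdg (an assumption; check `lsblk`, and note this destroys the stick's contents):

    ```shell
    # Assumption: the USB stick is /dev/sdg -- check lsblk; contents are destroyed
    STICK=/dev/sdg
    # Create a throwaway single-device pool on the stick
    sudo zpool create -f testpool "$STICK" || echo "pool creation failed"
    # Verify the pool is online
    zpool status testpool 2>/dev/null || echo "pool not available"
    ```

    If `zpool status` shows the pool as ONLINE while the web UI still times out, the problem is in the plugin/UI layer rather than in ZFS itself.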


  • Please do not play with ZFS on a Raspi; it is a waste of time.

    I recently began exploring ZFS and experimenting with it on a Raspberry Pi 4 in my homelab to understand how it works.

    I've learned that while ZFS can technically operate on a Raspberry Pi 4, the Pi's limited capabilities coupled with ZFS's extensive resource requirements make it a less practical option. However, it serves as an excellent learning tool for experimenting with non-essential devices and identifying potential issues in a test environment.


    Could you provide any recommendations for sustainable Raspberry Pi 4 OMV NAS filesystems that ensure data integrity and reliability? Thank you.


  • I assembled the hardware and software myself and I'm impressed. Everyone writes that ZFS is not suitable for the RPi, but I tried it myself and my experience is different.

    From the beginning.

    I have a Raspberry CM4 with 8GB RAM and 32GB eMMC. Since OMV cannot be officially installed on Debian 12, I used the step-by-step commands, installed 6 disks of 4TB each in RAIDZ1, and then ran Samba to check what data transfers I could achieve. What's more, I used a PCIe controller card with 10 SATA3 ports, which I bought on AliExpress. The card is a port multiplier design based on an ASM1062 chip. People on this forum advised me not to use this card, but it works beautifully.

    I tested my solution for a week, sending lots of files back and forth. The speed is much faster than software RAID5: while in RAID5 I had speeds of 30MB/s, with ZFS I get up to the link speed, i.e. approx. 113MB/s. Data transfer ranges from 80 to 113MB/s.

    I am satisfied with the assembled server. It works quickly and reliably, despite comments here on the forum that the controller is bad and that ZFS on an RPi makes no sense. After assembling and starting the server, it turned out that it does make sense.


    The only problem, which I forgot to mention, is that you cannot put the drives to sleep, because then the controller has a problem identifying them; but as long as they run non-stop, everything is fine.
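    If the controller loses track of drives when they spin down, disabling the standby timer on each disk is a common workaround. A sketch with hdparm, assuming the six members are /dev/sda through /dev/sdf (an assumption; adjust to your devices):

    ```shell
    # Assumption: the six array members are /dev/sda .. /dev/sdf
    # -S 0 disables the standby (spindown) timer so the drives never sleep
    for d in /dev/sd[a-f]; do
      sudo hdparm -S 0 "$d" 2>/dev/null || echo "could not set $d"
    done
    ```

    Note that hdparm settings are not persistent across reboots by default; they can be made permanent via /etc/hdparm.conf or in OMV's disk settings.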


    I am at the stage of putting the entire structure, along with the 6 3.5-inch HDDs, into a Dell 380 computer case.

    • Official Post

    that ZFS on an RPi makes no sense.

    It still makes no sense. RAID on an RPi makes no sense. Nothing about an RPi is highly available/redundant.


    But I would be curious to see if you have the same problem I did in my RPi/ZFS test: it would segfault after 6-8 TB when trying to transfer about 14-15 TB of data over the network (NFS and rsync).

    The speed is much faster than software RAID5: while in RAID5 I had speeds of 30MB/s

    Your testing was flawed then. mdadm RAID5 requires less CPU than ZFS, simply because ZFS is CoW. When I tested a five-drive mdadm/ext4 array on a CM4, it had no problem saturating the network adapter.

    omv 7.4.7-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.14 | compose 7.2.3 | k8s 7.2.0-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.8


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I wrote my observations after a week of testing, in response to the information on this forum.

    I confirm that in my case RAID5 had lower performance than ZFS, and by a lot.

    • Official Post

    I wrote my observations after a week of testing, in response to the information on this forum.

    Most people don't write 6-8 TB in a week, which, as I said, is what caused my issue.


    I confirm that in my case RAID5 had lower performance than ZFS, and by a lot.

    And I said it doesn't make sense, since ZFS is more CPU-intensive. I have spent years working on RPis and OMV. I have set up RAID arrays on the RPi4, the RPi5 (no SATA hat yet), the CM4 with the RPi carrier board, the CM4 with the Axzez board, the CM4 with the DeskPi 6c board, and the CM4 with the Turing Pi 2 board, and have not seen the bad performance you found.


  • I'm calmly waiting for the release of OMV7 for the RPi with the latest OS release. We'll see if I can recreate my configuration on that system.
