As per: https://www.cnx-software.com/2…80-pcie-gen-3-x1-sockets/
Only posting about it here because FriendlyElec also recommends OMV and has a prebuilt OMV6 image.
FriendlyElec is going to send me a dev sample, so I will be able to test this board and give more info about how it works.
So, I got the CM3588.
Overall, with a fast CPU, 2.5 GbE networking, up to 16 GB of RAM, and four NVMe slots, it might be one of the best SBC boards I have used. I will post more about it as I do more testing.
Sounds about right for PCIe 3.0 x1 (per slot). It also makes sense that the JMB585 works here, as tested in my https://github.com/HeyMeco/Roc…ards_m2/JMicron_JMB585.md notes when trying it out on current NanoPi/Rockchip boards.
I'm interested in how it would perform with RAID 0 for fun, RAID 1 and 5 for practical usage, and maybe even RAID 10, since it's all done in software.
I'm interested in how it would perform with RAID 0 for fun, RAID 1 and 5 for practical usage, and maybe even RAID 10, since it's all done in software.
I am going to try to get five drives set up today so I can test those configs.
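For reference, a minimal mdadm sketch of the kind of arrays being tested here (one array at a time; /dev/sd[b-e] are placeholder device names, adjust for your drives):
# RAID 10 across four drives, then format ext4
$ sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
$ sudo mkfs.ext4 /dev/md0
# watch the initial sync/rebuild progress
$ cat /proc/mdstat
RAID 0 and RAID 5 are the same command with --level=0 or --level=5.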
Some results so far while the RAID 10 array is rebuilding...
Files from a single XFS hard drive to ext4 NVMe:
$ sudo dd if=/dev/zero of=/dev/sda bs=1M count=20000 conv=fdatasync status=progress && sync
20824719360 bytes (21 GB, 19 GiB) copied, 99 s, 210 MB/s
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB, 20 GiB) copied, 116.723 s, 180 MB/s
Files from NVMe to a RAID 0 array of four WD Red Pro hard drives formatted ext4:
$ sudo dd if=/dev/zero of=test.dd bs=1M count=20000 conv=fdatasync status=progress && sync
20797456384 bytes (21 GB, 19 GiB) copied, 37 s, 562 MB/s
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB, 20 GiB) copied, 42.6164 s, 492 MB/s
Will post more when RAID 10 is done syncing. So far this board is rock solid. I haven't seen a CPU temp above 39 C.
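If anyone wants to check temps on their own board, the SoC temperature can be read from sysfs (the zone numbering may differ by kernel/device tree); the value is in millidegrees C:
# e.g. 39000 means 39 °C
$ cat /sys/class/thermal/thermal_zone0/temp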
Files from NVMe to a RAID 10 array of four WD Red Pro hard drives formatted ext4:
$ sudo dd if=/dev/zero of=test.dd bs=1M count=20000 conv=fdatasync status=progress && sync
20700987392 bytes (21 GB, 19 GiB) copied, 73 s, 284 MB/s
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB, 20 GiB) copied, 85.141 s, 246 MB/s
Files from NVMe to a RAID 5 array of four WD Red Pro hard drives formatted ext4:
$ sudo dd if=/dev/zero of=test.dd bs=1M count=20000 conv=fdatasync status=progress && sync
20734541824 bytes (21 GB, 19 GiB) copied, 74 s, 280 MB/s
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB, 20 GiB) copied, 86.5658 s, 242 MB/s
Rsync of files in a mergerfs pool from my primary server (10 GbE network connection) to an rsync module on the four-disk RAID 5 array on the CM3588, connected via 2.5 GbE:
sent 9,459,374,653,812 bytes received 324,477 bytes 134,828,637.70 bytes/sec
It might be faster, but one of my drives has bad sectors. It might also be faster with no mergerfs involved. I am going to try NVMe on the primary server to a RAID 0 pool of three drives on the CM3588 using the rsync module again.
Rsync of files on NVMe from my primary server (10 GbE network connection) to an rsync module on the three-disk RAID 0 array on the CM3588, connected via 2.5 GbE:
sent 2,772,165,189,743 bytes received 1,500 bytes 259,165,632.80 bytes/sec
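For anyone curious, the transfers above go to an rsync daemon module on the CM3588. A minimal sketch of that kind of invocation, where the module name "nas", the hostname, and the source path are placeholders for whatever is defined in rsyncd.conf on the NAS:
$ rsync -av --progress /mnt/nvme/data/ rsync://cm3588.local/nas/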
Hi there, long-time lurker, first-time poster.
I'm very interested in this board, as the solutions for NVMe NAS setups are horrendously expensive. I'm specifically targeting my use case, of course, to serve my UHD rips with redundancy on NVMe storage.
How simple would it be for me to fork the standard OMV to this board to avoid using the FriendlyELEC image?
It is no reflection on FriendlyELEC, of course, but I prefer such things to be properly open source.
---
EDIT - OK, so if I'm reading this FriendlyELEC thread correctly, performance with 4 NVMe drives is not much better than equivalent SATA. NVMe tends to be cheaper in my locale, too.
This raises two additional complications, then:
- Whether there are SATA SSDs with as good reliability as their NVME cousins
- Whether the SATA format has the longevity that said drives will still be viable when ones bought now fail
Because if so, I can just go x86 and hang the expense out on the SATA drives.
How simple would it be for me to fork the standard OMV to this board to avoid using the FriendlyELEC image?
What? Why would you have to fork anything? You don't have to use the FriendlyELEC image. Install Armbian and run the install script - https://wiki.omv-extras.org/do…:armbian_bookworm_install
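For reference, the wiki page boils down to flashing Armbian Bookworm and then running the omv-extras install script; check the wiki for the current script URL, but the usual one-liner looks like this:
$ wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash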
but I prefer such things to be properly open source.
What isn't open source on their image?
Whether there are SATA SSDs with as good reliability as their NVME cousins
Both use wear leveling. I doubt you will see much of a difference in lifespan.
Whether the SATA format has the longevity that said drives will still be viable when ones bought now fail
SATA has been around a long time and isn't going anywhere any time soon. You will be replacing the board before SATA is gone.
Did anyone test this with 4 NVMe drives?
The spec sheet says that this can only draw up to 15 W of power, but some NVMe drives can consume as much as 11 W+.
Turns out the 15W limit is for the RK3588 board only.
The NAS kit board has no limit.
Files from NVMe to a RAID 5 array of four WD Red Pro hard drives formatted ext4:
- rsync all files from NVMe: sent 556,048,060,225 bytes received 580,859 bytes 121,900,392.65 bytes/sec
- dd a file
Was this done off a single PCI-E to SATA adapter? Would this board actually be able to support 4 adapters and like 20 drives? Or, more practically, about 16 drives to not saturate the PCIe 3.0 x1 too much?
Also, where were you getting SATA power from?
Was this done off a single PCI-E to SATA adapter?
Yep.
Would this board actually be able to support 4 adapters and like 20 drives? Or, more practically, about 16 drives to not saturate the PCIe 3.0 x1 too much?
I don't see why not. I only have one adapter to try, but I have NVMe drives in the other slots.
Also, where were you getting SATA power from?
An ATX power supply with pins 16 and 17 (PS_ON and ground) shorted together so it powers on without a motherboard.
Have you tested the different PCIe configs for utilizing the NVMe slots? And what was your experience with ZFS? I read in a Reddit post (OMV7 CM3588 ZFS setup) that one needs to do some workaround to get ZFS running. I don't know if I recall it right, but it must have something to do with the Proxmox kernel not being available for ARM CPUs.
Thanks for your feedback.
Have you tested the different PCIe configs for utilizing the NVMe slots? And what was your experience with ZFS? I read in a Reddit post (OMV7 CM3588 ZFS setup) that one needs to do some workaround to get ZFS running. I don't know if I recall it right, but it must have something to do with the Proxmox kernel not being available for ARM CPUs.
I tested with two NVMe sticks and one PCIe-to-SATA adapter (5 ports). I haven't tried ZFS on the system. Everything I have tried is in this thread.
The Proxmox kernel is not needed. It is just helpful on amd64 systems to avoid compiling the ZFS module. On ARM boards, you just have to install the kernel headers and the zfs-dkms package before installing the plugin. I would also enable backports in omv-extras to get the newer version of ZFS.
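A rough sketch of what that looks like on an Armbian/Debian Bookworm board; the headers package name differs between Armbian kernel families, so check apt for the one matching your running kernel:
# headers matching the running kernel (exact package name varies on Armbian builds)
$ sudo apt install linux-headers-$(uname -r)
# zfs-dkms and userland tools, pulling the newer version from backports
$ sudo apt install -t bookworm-backports zfs-dkms zfsutils-linux
After that, the openmediavault-zfs plugin can be installed from the plugins list.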
Hi, is there any guide to install OMV 7 on the CM3588? The official FriendlyElec images are OMV 6 only...
is there any guide to install OMV 7 on the CM3588?
Install Armbian (Debian 12 Bookworm) and run the install script. https://wiki.omv-extras.org/do…:armbian_bookworm_install