Best filesystem for MergerFS/SnapRAID

  • I have 5x 8TB Red Drives that I want to use for mostly media storage and I intend to use MergerFS/SnapRAID.


    As the drives are empty, what would be the best filesystem for me to format them to for the long run?


    Ext4/BTRFS/XFS etc.


    Thanks for any guidance.

    OMV 4.1.4 Arrakis | 34TB SnapRAID+MergerFS
    Supermicro X10SLM+-F| Xeon E3-1285L | 16gb ECC Ram | LSI SAS9220-8i
    5 x 8TB WD Red | 2x 3TB WD Red | 128gb Samsung 830 EVO

    • Official post

    Keep it simple and use ext4.
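
    If you prefer the shell to the OMV web UI, a minimal sketch of what that looks like (assuming the empty disk shows up as /dev/sdX; double-check with lsblk before wiping anything):

    sudo parted /dev/sdX --script mklabel gpt mkpart primary ext4 0% 100%
    sudo mkfs.ext4 -L data1 /dev/sdX1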

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    I've just tried to format a disk to ext4 with OMV. It seems to go fine, but produces this error at the end:


    Failed to get the 'ext4' filesystem implementation or '/dev/sda1' does not exist.


    I am unable to mount this drive.


    Upon reboot, OMV does not launch and gives the following errors (see screenshot):



  • I am unable to mount this drive.

    For the reasons clearly written in your screenshot, as well as what the OS is recommending to you: your filesystem is corrupted, you need to run fsck manually, and you need to use systemctl to get more information and to proceed. Though I'm a bit surprised that filesystem corruption on a data drive is able to interrupt the boot process.
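
    For reference, a minimal sketch of those two steps (assuming the data filesystem is /dev/sdX1; fsck should only be run on an unmounted filesystem):

    sudo umount /dev/sdX1
    sudo fsck.ext4 -f /dev/sdX1
    systemctl --failed            # list units that failed during boot
    journalctl -b -p err          # error-level messages from the current boot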

  • Hi,


    I've run fsck without any options on the drive, and it completes with the following:



    Since you asked yesterday which filesystem to choose and are reporting a corrupted ext4 filesystem today, I would assume we're talking about

    • a new NAS build
    • a freshly created filesystem that already shows unrecoverable errors

    Filesystem corruption usually happens for a reason, and if both assumptions above are right then the first thing I would check is cable/connector problems. SATA sends data over the wire together with a short CRC checksum, so if data corruption 'on the wire' happens, the receiver can take notice and ask the sender for a retransmit.


    The SMART standard has an attribute defined to notify users of this problem: SMART attribute 199. Unfortunately some disks do not record these errors even if they happen, and some disk families record them only too late (one of the reasons I would never ever buy any WD SATA drive again).


    TL;DR: Most probably the data gets corrupted at the hardware layer. If your drives support it, checking SMART attribute 199 is a great way to find out.
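
    For example, attribute 199 is usually reported by smartctl as UDMA_CRC_Error_Count (replace /dev/sdX with the real device):

    sudo smartctl -A /dev/sdX | grep -i -E '199|CRC'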

    Okay, well, I've managed to get a disk formatted correctly using the onboard SATA controller passed through to OMV from Proxmox.


    I was able to set up a share and transfer data to it from my desktop no problem.


    I then connected the single drive back to the HBA (also passed through to OMV) and, whilst it shows up in OMV, I am unable to access the share.


    This leads me to believe that the issue is either with my HBA or the breakout cables.


    I've ordered some new cables and will try those (I have tried other ends of the breakout cable with similar results already).


    Failing that, I would conclude that it's either the HBA itself or the manner in which it is passed through, so I will try some alternative BIOS versions/firmwares.


    I've already been having trouble consistently making sure SMART is passed through, so perhaps I will skip running OMV in a hypervisor and consider my options.


    I checked the SMART entry 199 as you suggested and the value for my drive was 0.


    Thanks for your help. If you have any suggestions based on what I've said here they'd be appreciated, and I will report back as my testing continues!


  • I checked the SMART entry 199 as you suggested and the value for my drive was 0.

    Did you read what I wrote above? 'Unfortunately some disks do not record these errors even if they happen, and some disk families record them only too late' (this being one of my reasons to avoid WD)?


    If you have WD disks you also need to check 'power on hours' and the start/stop count. This behaviour of not properly dealing with this SMART attribute is really annoying, since cabling/contact problems usually occur with new disks.
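
    Those two values can be pulled the same way (again, /dev/sdX is a placeholder):

    sudo smartctl -A /dev/sdX | grep -i -E 'Power_On_Hours|Start_Stop_Count'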

  • Did you read what I wrote above? 'Unfortunately some disks do not record these errors even if they happen, and some disk families record them only too late' (this being one of my reasons to avoid WD)?
    If you have WD disks you also need to check 'power on hours' and the start/stop count. This behaviour of not properly dealing with this SMART attribute is really annoying, since cabling/contact problems usually occur with new disks.

    I used the link you provided, and my drive has over 8 hours of power-on time (the drives have been connected for a while whilst I fiddled around with Proxmox), so the value is given as zero rather than 253.


    Edit: my drives are WD so the above information was important to check.


    Well, the new breakout cables arrived and the drives still weren't playing nicely, so I reinstalled OMV as the main boot OS and everything now seems to be working fine. I can only assume that in this case the issue was between Proxmox, the HBA and the PCI passthrough settings.


    All my drives are now happily formatted to ext4 and I've combined them using MergerFS.


    My next question: I want to run a torrent client/NZB downloader etc. in VMs/containers through a VPN. Will this be possible with OMV, perhaps with Docker?



    Also, I have OMV installed on a 250GB SSD, and I have a 60GB SSD from my previous NAS. Would it be best to use the 60GB SSD for boot, as OMV doesn't take up much space and neither would any additional VMs, I think?


    In that case:


    1) What would be the best way to copy my OMV installation and settings to the 60GB SSD and use it for boot?


    2) Could the 250GB SSD be of any use in this build as a cache drive or as fast storage to download to before moving files to the combined storage?


    Thanks again for your help with my previous issues. I am happy with the solution, even if it isn't what I originally intended; there were probably always going to be ongoing issues with it, in particular the SMART passthrough working intermittently at best.


  • I think most people will tell you it's much easier just to manually screenshot or take some notes about your OMV settings, install OMV 4 fresh to the other SSD, and set it up from your notes.


    Personally, I installed a spare SSD I had lying around to use as Plex's transcode drive and the "incoming" file directory for my usenet downloader NZBGet. My torrent downloads go directly to my pool storage though.


    Also, Docker is a good solution for your requirements. I may be biased though, since I spent a lot of time learning it and wrote this guide for commonly used media server applications.
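
    As a rough sketch of the usual Docker pattern (the image names and paths below are only illustrative, not specific recommendations): one container holds the VPN tunnel, and the download client is attached to its network namespace so all of its traffic goes through the VPN:

    docker run -d --name vpn --cap-add=NET_ADMIN --device /dev/net/tun some/vpn-client-image
    docker run -d --name nzbget --network=container:vpn -v /srv/downloads:/downloads some/nzbget-image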

    • Official post

    Do you know of any documentation on the supported file systems for MergerFS?

    Nope. The most important info is here, but I don't see anything about required filesystem features. While I'm guessing HFS+ would work, @trapexit would know more.


    I use drives in Apple's HFS+

    Why? The idea of a NAS is to permanently attach drives (especially drives in a pool), because you should never have to connect them to the client. Therefore, you should use a native filesystem of the NAS.


    This is supported by SnapRaid but I am not sure if my problems with MergerFS are related to HFS+.

    mergerfs needs a few more filesystem features than snapraid needs.


  • I don't list explicit filesystems but the docs do say "works with heterogeneous filesystem types".


    So long as the underlying filesystem resembles a POSIX filesystem in function (more or less) it should work fine.


    As for using HFS+ on a Linux system in a NAS setup: I wouldn't recommend that (same with using NTFS). They simply aren't as well supported, especially for writes. If something gets wonky, you're far more likely to lose data. I'd suggest ext4, XFS, or btrfs.
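
    For context, a mergerfs pool over ext4 branches is typically a single /etc/fstab line along these lines (the labels, mountpoint and options are illustrative; OMV's unionfilesystems plugin writes an equivalent entry for you):

    /srv/dev-disk-by-label-disk1:/srv/dev-disk-by-label-disk2 /srv/pool fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs,minfreespace=20G 0 0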


    What "problems" are you having with mergerfs?

  • I've tried it with two EXT4 drives now.


    1) Add 2 EXT4 drives to Union Filesystems -> UFs
    2) Create shared Folder on UFs and set Privileges for my user
    3) Add shared Folder to SMB/CIFS


    4) On my Mac I open the shared Folder and try to copy a file. -> Error: The operation can’t be completed because you don't have permission to access some of the items.
    5) Use Resetperms Plugin on shared Folder
    6) Try to open shared Folder again -> Error: The operation can’t be completed because the original item for “hd320” can’t be found.
    7) OMV > File Systems
    Loading... -> Error: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; df -PT '/srv/06b298b4-8e7c-4ac6-9637-454d43c33506' 2>&1' with exit code '1': df: /srv/06b298b4-8e7c-4ac6-9637-454d43c33506: Transport endpoint is not connected

  • In OMV > Storage > File Systems both EXT4 drives and the UFs are mounted.
    Reset Permissions on the UFs throws an error -> cannot read directory '/srv/06b298b4-8e7c-4ac6-9637-454d43c33506/hd320'


    In the shell lsblk -f shows that both EXT4 drives are mounted.
    Unmounting and mounting them does not help:
    sudo umount /dev/sdd1
    sudo umount /dev/sde1
    sudo mount /dev/sdd1 /srv/dev-disk-by-label-d320a
    sudo mount /dev/sde1 /srv/dev-disk-by-label-d320b
    sudo mount -o remount,rw /dev/sdd1 /srv/dev-disk-by-label-d320a
    sudo mount -o remount,rw /dev/sde1 /srv/dev-disk-by-label-d320b

    When I go to OMV > Storage > File Systems again, there is this error -> -PT '/srv/06b298b4-8e7c-4ac6-9637-454d43c33506' 2>&1' with exit code '1': df: /srv/06b298b4-8e7c-4ac6-9637-454d43c33506: Transport endpoint is not connected
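
    'Transport endpoint is not connected' usually means the mergerfs (FUSE) mount itself has died, so remounting only the member drives won't help. A minimal sketch of recovering the pool mount (assuming it is defined in /etc/fstab, as the unionfilesystems plugin does):

    sudo umount -l /srv/06b298b4-8e7c-4ac6-9637-454d43c33506
    sudo mount -a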

    • Official post

    XFS is only recommended for the parity drive, because of the 16TB file-size limit on ext4, and of course only if your pool is bigger than that.

    ext4 isn't limited to 16tb as long as the e2fsprogs are new enough. This shouldn't be a problem on a supported OMV install. And until 100TB drives come out, ext4 is fine.
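
    (For what it's worth, an ext4 filesystem larger than 16TB just needs the 64bit feature, which recent e2fsprogs enable by default; a quick sketch, with /dev/sdX1 as a placeholder:)

    dpkg -l e2fsprogs                       # check the installed version
    sudo mkfs.ext4 -O 64bit -L bigdata /dev/sdX1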


  • ext4 isn't limited to 16tb as long as the e2fsprogs are new enough. This shouldn't be a problem on a supported OMV install. And until 100TB drives come out, ext4 is fine.

    But it is recommended on the SnapRAID page, because the parity is one huge file and ext4 has a 16TB file-size limit (by default; I don't know how to increase it, it is just a fact). For example, I have 18TB + 18TB data drives which can stay on ext4, but my 18TB parity drive is XFS because of this limitation. My parity file could easily be 17TB.

    ext4 - Wikipedia

    • Official post

    But it is recommended on the SnapRAID page, because the parity is one huge file and ext4 has a 16TB file-size limit (by default; I don't know how to increase it, it is just a fact). For example, I have 18TB + 18TB data drives which can stay on ext4, but my 18TB parity drive is XFS because of this limitation. My parity file could easily be 17TB.

    I was just pointing out that ext4 is not limited to 16TB, because people will read that and think it is true. It was not directly related to snapraid. And snapraid (and now the plugin) supports split parity files on one drive, removing any ext4 limitation. The FAQ you are reading on the snapraid page referencing ext4 limitations is very old, since it doesn't mention split parity at all.
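
    For reference, split parity in snapraid.conf is just a comma-separated list of files on the parity line, so no single file has to exceed 16TB (paths here are illustrative):

    parity /srv/dev-disk-by-label-parity1/snapraid.1.parity,/srv/dev-disk-by-label-parity1/snapraid.2.parity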

