Why you should not use non-Linux filesystems, such as NTFS, exFAT, FAT32, HFS+, APFS, etc.

  • I have become tired of constantly writing this information in forum replies, so I am hoping this can serve as an easily findable rationale in the guides section of the forum. Perhaps the moderators can even pin it to the top of the list so it does not get lost as people submit how-to guides.


    Filesystems that are not Linux-native are problematic, and always will be, if you use them as permanent data drives. In practice this means that any filesystem you cannot create in the OMV interface should not be used as a permanent data drive. They can, however, be used as a way to transfer files between your computers and OMV.


    The first and simplest issue is that non-native filesystems lack POSIX permissions. Linux as the OS, and most if not all Linux software, expects the filesystem to store Linux ownership and permissions for files and directories. Filesystems that do not store these permissions can only expose every file as owned by a single user fixed at mount time (root by default), and if the software trying to access the files is not running as that user, it cannot access them. This means that if you run a Docker container as a non-root user, via the Docker user setting or the PUID/PGID environment variables common in a lot of containers, you will have problems because that container is not running as root, and Docker itself will stop functioning correctly the moment you direct it to store its configuration files on those filesystems.
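    To see the difference, here is a minimal sketch (paths are illustrative) of what "storing POSIX permissions" means: on a native filesystem every file carries its own mode and owner, which is exactly what exFAT/FAT32 cannot do — there, the apparent owner and mode of every file are synthesized from mount options, so chmod/chown cannot stick.

```shell
# Minimal sketch: per-file POSIX metadata on a native filesystem (using /tmp here).
f=$(mktemp)           # create a file on a POSIX-capable filesystem
chmod 640 "$f"        # set a per-file mode...
stat -c '%a %U' "$f"  # ...and read it back, e.g. "640 youruser"
rm -f "$f"

# On an exFAT/FAT32 mount the same chmod has no lasting effect: every file
# reports the single uid/gid/umask chosen when the drive was mounted, e.g.
#   mount -o uid=1000,gid=100,umask=022 /dev/sdX1 /mnt/usb   # device name is an assumption
```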


    The second issue, also of fairly minor concern, is performance: non-native filesystems rely on a filesystem driver to translate between the on-disk structure and a form Linux can work with, which normally results in slower performance.


    The third, and arguably the biggest, of these reasons is the lack of support for the filesystem's native journaling where it exists (NTFS, HFS+, APFS), or the complete absence of journaling where it does not (exFAT, FAT32). Journaling is a massive benefit in protecting against data loss from power outages, misbehaving software, etc.: pending filesystem changes are written to a log before they are committed, so after a crash the filesystem can be replayed or rolled back to a consistent state. Non-native journaled filesystems on Linux as data drives are really no safer than an old FAT32 or exFAT non-journaled drive because of the lack of full journaling support (see below for a horror story example of what no journaling can do).
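    If you want to verify that a filesystem actually carries a journal, ext4 exposes it as a feature flag. A small sketch, assuming e2fsprogs is installed (no root needed, since mke2fs can format an image in a plain file):

```shell
# Create a small ext4 image in a regular file and confirm the journal exists.
img=$(mktemp)
truncate -s 128M "$img"                    # sparse file, uses almost no real disk
mkfs.ext4 -q -F "$img"                     # -F: allow formatting a regular file
tune2fs -l "$img" | grep -o has_journal    # prints "has_journal" if journaled
rm -f "$img"
```

    There is no equivalent flag to check on an exFAT or FAT32 volume, because those formats simply have no journal.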


    The general rule of thumb for any operating system is to use its best native filesystem for data: NTFS on Windows, HFS+ or APFS on macOS, and a fully supported Linux filesystem (ext4, XFS, Btrfs, etc.) on Linux.
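    As a quick sanity check, you can ask the kernel what filesystem type a given mount point sits on. A sketch (the classification lists are assumptions — extend them for your own setup; note that GNU stat reports ext4 as "ext2/ext3" because ext2/3/4 share a magic number, and ntfs-3g mounts show up as "fuseblk"):

```shell
#!/bin/sh
# Report whether a mount point is on a Linux-native filesystem.
MNT="${1:-/}"
FSTYPE=$(stat -f -c %T "$MNT")   # e.g. ext2/ext3, xfs, btrfs, msdos, fuseblk
case "$FSTYPE" in
  ext2/ext3|xfs|btrfs|zfs)        echo "$MNT: $FSTYPE (native, fine for data)" ;;
  msdos|vfat|exfat|ntfs|fuseblk)  echo "$MNT: $FSTYPE (non-native, avoid for data)" ;;
  *)                              echo "$MNT: $FSTYPE (check manually)" ;;
esac
```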


    Now for the horror story about the lack of journaling. It is not Linux/OMV specific, but it illustrates the danger of missing journaling support, a danger that does apply to Linux in general. Remember, this could happen to you.


    I work in film and video post-production as an in-house technician keeping the facility running. I have had several people come to me after going out and buying an external drive to use for editing a movie or TV series with their Mac (macOS does journal on its native HFS+ and APFS filesystems, because those filesystems support it). The drive packaging says it is compatible with Windows and Mac, so they just plug it in and go, not realizing that Windows/Mac compatibility means an exFAT drive with no journaling.


    After months, or sometimes years, of work they have a power outage or the drive gets unplugged without a proper unmount, and the next time they try to work there is no data on the drive: the file allocation table has been erased or damaged, but not corrupted to the point of being unreadable, so the system never reverts to the backup copy of the file allocation table. Even if it does revert to the backup copy and they can continue working, the primary copy is not necessarily rebuilt automatically, so the next power or unmount problem damages the backup copy too and all the data vanishes.


    I have done data recovery on those drives, and there is roughly a 50/50 chance of recovering most of the data intact, depending on whether the backup copy of the file allocation table is intact. If it is not, the data can often still be recovered, but since the file names are stored in the file allocation table, which is now empty or unreadable, all the names are gone, replaced with the hexadecimal block address of each file, leaving the person to look at every file and try to rename it to what it should be. Journaling would help protect against this: the file allocation table could not be erased without reason, and the changes would be tracked in the journal.

    Asrock B450M, AMD 5600G, 64GB RAM, 6 x 4TB RAID 5 array, 2 x 10TB RAID 1 array, 100GB SSD for OS, 1TB SSD for docker and VMs, 1TB external SSD for fsarchiver OS and docker data daily backups


