Posts by nightrider

    I did a new fresh install once again and this time it worked out ok.


    This time I only did the OMV update under the Update Management tab and rebooted, nothing else.
    Then I installed OMV Extras with the script.


    Very weird error indeed. After all, OMV 5 is still in beta status?

    Hi,


    On a fresh install I also get the exact same problem during installation of OMV Extras->



    And when trying to activate it in GUI I get this error->

    If you remove the diskstats plugin, do you have this issue? I don't on my test system.

    I have not installed the diskstats plugin, so it is not enabled? I also tried disabling the "Monitoring" tab, but that did not do anything.


    I went back to OMV5 for some testing. I think there is something going on with the UnionFS plugin; I will try to explain.


    First, when I had the unionfs plugin installed, I always got the error message I showed above, even for a normal single-drive NFS share setup with no other unionfs share set up on the system.


    Then I uninstalled the unionfs plugin, rebooted, and set up a single-drive NFS share, and this time I did not get any error message. But I still could not browse or create an NFS video source in KODI. Then I installed the unionfs plugin again and created a share, and now the error message is back whenever I enter the "Shared folders" tab after creating or deleting an NFS share.


    I hope this helps, but I cannot help you any more for the coming month; I will unfortunately be away for work for a while.

    I can't tell where the problem is here. Are you trying to use the unionfilesystem plugin to pool a remotemount nfs share?

    First: The library scanning problem may have been my fault. I made a mistake in the docker-compose file in how I set the environment variables for MariaDB. So maybe it would have worked if I had tried to scan while I could browse the pre-set NFS share from the sources.xml file in KODI. I did not test that; I had already gone back to OMV4.
    Second: The error with browsing the NFS share is still there. I also tried with only 1 drive, but same problem. The exact same setup works in OMV4.


    Yes, I do use the unionfilesystem plugin, as you can see in the error message above, but that error shows up even if I do not use the plugin; that was only an example. I create a unionfilesystem pool, then create a shared folder, and then create an NFS share.

    I cannot create and browse the NFS share from OMV5 in the KODI app on the Nvidia Shield, even though I can mount and browse the same NFS share on another computer.
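
    For reference, this is roughly how I verify the export manually from another Linux machine, where it works fine (the server IP and the export path below are just placeholders; as far as I know OMV exports shares under /export/<name>):

    Code
    # sketch: list the exports the server offers, then mount one of them for a quick check
    showmount -e 192.168.1.10
    mount -t nfs 192.168.1.10:/export/Series /mnt/test
    ls /mnt/test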


    I can, however, browse the share if I create a sources.xml file with the path pre-set, but I cannot scan the library.


    With the exact same setup in OMV4 there is no problem browsing the NFS share in KODI.
    Has anyone else had the same experience on OMV5? To me this seems like a bug of some sort.


    There is also another bug that always comes up as soon as I change some settings in OMV5. One example: if I delete one NFS share and then go to the "Shared Folders" tab, I always get the error warning below, and the shared folder and all the other folders have disappeared. The remaining shared folders only show up again after a reboot. This was just one example.

    There are the mergerfs tools which offer a tool to balance drives. You'd install the drive, run the balance tool, then use as normal. Or you use the rand policy.

    I had missed that there is a tools package for mergerfs. That is a very useful feature, so there is room for improvement in the UnionFS plugin in OMV to include it in the future. That would be really nice, so you would not need to go to the command line to run the tasks. So to run a balance you just type # mergerfs.balance /media and it will balance out everything on the pool that is under the relative path "/media"? And I see the rsync package needs to be installed.
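
    If I understand the tool right, running it would look something like the sketch below (the pool path is only a placeholder for wherever the plugin mounts my pool):

    Code
    # sketch only: rsync is needed because the balance script moves files with rsync
    apt-get install rsync
    # run the balance tool against the pool's mount point
    # (/srv/mergerfs-pool is just a placeholder path)
    mergerfs.balance /srv/mergerfs-pool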


    OK, you can spread out the newly written data by changing to the "random" policy instead. That will help a little, of course, but it will not move any of the old data already on the drives.
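
    If I read the mergerfs docs right, switching the policy just comes down to changing the category.create option in the pool's mount options, so the fstab entry the plugin writes would end up looking something like this shortened example (the branch and pool paths are placeholders):

    Code
    # sketch: same style of entry as the plugin writes for my pool, but with the rand create policy
    /srv/disk1:/srv/disk2 /srv/pool fuse.mergerfs defaults,allow_other,use_ino,category.create=rand,minfreespace=40G 0 0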


    What data do you propose to cache on this SSD?

    I am no expert on filesystems, but a wild brainstorming idea is to have mergerfs read and store all the metadata whenever you write or change any data on the pool. The SSD would hold a cache of everything on the pool, and when Plex needs to scan, the mergerfs cache would give the answer instead of having Plex scan the whole pool (if there is no new data on the pool, that is). I do not know; this would be an awesome feature, and the Plex scan would be super fast when there is no new data on the pool, but maybe it is not technically possible. :-) ... With this feature you obviously could not work with the drives directly, only via the pool, so that mergerfs can cache all the metadata that is written.


    "MergerFS timeout", how often will mergerfs scan the pool?

    Did you create the *full* relative paths on both drives and try creating something *in* that directory?

    Yes, the same folder name on both drives. This has worked on my old OMV3 in the past. I am in the process of moving over to a new server build running OMV in VMs on Proxmox with HBA passthrough. OMV is only a NAS for me; docker apps I run on other VMs instead.



    What does "relatively in order" mean? Order of when you created them?

    Yes, exactly, I like to have the option of hot swap, but not exactly for the reason you described, though. I understand your point about the increased risk of data loss, but this is why we have Snapraid: to reduce that risk.


    By using "mfs" or "lus": let's say, for example, you have 3 drives in a pool, these 3 drives are filled to 70% altogether, and you add 1 more drive. This will make all new data get written to drive number 4 only, until it also reaches 70%? Then you have the same problem with the most recent data being written to 1 drive, am I right? Or does MergerFS have an option for balancing out the data, i.e. moving data over to the new drive so you end up with the same even (lower) percentage of written data on all 4 drives? If this is possible, that would be a very cool and powerful feature.


    You are right, though, that if several users access the pool, then I really see the benefit of balancing out all the data for increased speed. Maybe I will use this "mfs" option in the future, especially if there were a feature for balancing out the data, as described above, when adding new drives to the pool.



    You're mistaken thinking drives won't spin up or that the endurance will be the best. Drives will spin up if data from them is necessary.

    I do understand all of that. For sure, accessing data and spinning up drives several times a day will only harm the HDDs more than keeping them spinning. In my use case it can go for very long periods (weeks) before I need to access a file on that pool. This is why OMV with MergerFS and Snapraid is the perfect solution for me compared to using FreeNAS with ZFS: with MergerFS you can always add more drives to a pool, which you cannot do with ZFS.


    Another thought I have: would it not be possible in the future to add an SSD cache to MergerFS that keeps all the metadata and so on, so that not all the drives need to spin up when an app like Plex or Kodi needs to scan the drives for new files? That would be an even more powerful feature.


    I have to thank you for your thorough answers here, I appreciate it. Please continue to improve MergerFS; I very much appreciate your work on this app. (If there is anything more that can be improved, that is) .... :-)

    As the docs mention it will only choose from branches where the relative base path of the thing being worked on exists. If you only have 1 drive with that 1 directory then it will only ever consider that drive. If it runs out of space you should rightly get out of space errors.

    I did set up the pool with 2 drives and it did not work for me with "Existing path, least free space"; I could not continue writing files to that pool. I did try with the same relative path on both drives; the only thing was that one of the drives was full and had already reached the minimum free space. I do not know, maybe it was only a temporary bug.
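
    What I checked on my side was roughly this (a sketch; the disk paths are from my setup and "Series" is just an example folder name):

    Code
    # check how much free space each branch really has
    # (with minfreespace=40G a branch below 40G free is skipped for new files)
    df -h /srv/dev-disk-by-label-13-Series1-HDD /srv/dev-disk-by-label-14-Series2-HDD
    # check that the same relative path actually exists on both branches
    ls -d /srv/dev-disk-by-label-1*/Series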


    If you don't care what drive your data is on why would you reduce your speed and reliability by putting everything on one drive while the others sit around unused?

    I like the idea of having all the files relatively in order on the drives; I just add a new drive when the pool starts to fill up. For my use case, OMV in conjunction with MergerFS and Snapraid is the perfect solution. I store my files for long-term use, meaning I write them once and then leave them there, and I do not have to spin up all the drives unless I need to access a specific file. Power saving and HDD endurance at its best.


    How much performance do I really lose by using it like I do? I mean, I write the data once and then leave it there.

    I found someone who has the same problem: michaelxander.com/diy-nas/. They have the following explanation:

    Thank you very much, that solved my problem. It does not work with the "Create policy" -> "Existing path, least free space" setting, but it does work with the "Least free space" option together with having the same relative path created on all disks.
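
    For anyone else finding this, what works for me is the "Least free space" create policy (lfs in mergerfs terms, if I understand it right) together with creating the same relative folder on every disk first, roughly like this ("Series" is just an example folder name from my setup):

    Code
    # create the same relative folder on every branch disk before writing through the pool
    mkdir -p /srv/dev-disk-by-label-13-Series1-HDD/Series
    mkdir -p /srv/dev-disk-by-label-14-Series2-HDD/Series
    mkdir -p /srv/dev-disk-by-label-15-Series3-HDD/Series
    # with "Least free space", new files go to the branch with the least free space that is still above minfreespace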



    I can't replicate these problems.

    The error "Couldn't extract an UUID from the provided path", which showed up for me after creating the NFS share when I clicked back to the "Shared folders" tab, disappeared after a reboot.


    Everything seems to work for now, except what I mentioned about "Existing path", which does not seem to work in the UnionFS plugin. The same thing is mentioned in the article dropje linked to above.


    Now I am curious why that is. If it is the case, as he mentions in the article, why is it an option in the plugin?

    Hi,


    I was just searching for this same kind of error with the UnionFS plugin. I can add HDDs and create a pool, but after I create a shared folder I start to get this error. Every time I click on the "Shared folders" tab, this error shows up.

    I cannot see the created shared folder as long as the unionfs pool is there. When I remove the pool, the shared folder shows up again.


    My fstab->

    Output of the commands, if it helps->
    grep mergerfs /etc/fstab

    Code
    root@OMV-2:~# grep mergerfs /etc/fstab
    /srv/dev-disk-by-label-15-Series3-HDD:/srv/dev-disk-by-label-14-Series2-HDD:/srv/dev-disk-by-label-13-Series1-HDD /srv/9368d400-3871-428a-b909-6cc9f251b578 fuse.mergerfs defaults,allow_other,direct_io,use_ino,noforget,category.create=eplfs,minfreespace=40G,x-systemd.requires=/srv/dev-disk-by-label-15-Series3-HDD,x-systemd.requires=/srv/dev-disk-by-label-14-Series2-HDD,x-systemd.requires=/srv/dev-disk-by-label-13-Series1-HDD 0 0

    dpkg -l | grep -e openm - merg

    Code
    root@OMV-2:~# dpkg -l | grep -e openm - merg
    (standard input):ii openmediavault 5.1.1-1 all openmediavault - The open network attached storage solution
    (standard input):ii openmediavault-clamav 5.0.1-1 all OpenMediaVault ClamAV plugin
    (standard input):ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
    (standard input):ii openmediavault-omvextrasorg 5.1.6 all OMV-Extras.org Package Repositories for OpenMediaVault
    (standard input):ii openmediavault-snapraid 5.0.1 all snapraid plugin for OpenMediaVault.
    (standard input):ii openmediavault-unionfilesystems 5.0.2 all Union filesystems plugin for OpenMediaVault.
    grep: merg: No such file or directory


    I have to say that at first I did not see any fault. The fault started to come up after I had filled one HDD: suddenly I could not move/write any more files once that drive got down to the 40G minimum free space I had set. I have the "Existing path, least free space" setting, but it did not continue on a new drive? This was working perfectly on OMV4 but not here in OMV5.