Install problem – no space left because disks are not pooled properly?

  • Hi,


    On my first OMV run I tried an install with SnapRAID and ran into storage problems: the first data drive (of three) was full, the parity drive was full, and I could not write anything more to the OMV server.


    I could not fix it, so I reduced the complexity and removed SnapRAID to make troubleshooting easier. I would like to try SnapRAID again later, but for the time being I would be happy if I could simply use the three data disks fully in JBOD mode, as one pooled drive.


    Using Union Filesystems, I unpooled the drives, pooled them again, and created the share again.


    I have three disks pooled with Union Filesystems, or rather, I want them to be pooled, but it is not working.

    The three disks were set up with Union Filesystems using the default settings: they are standard 8 TB HDDs (sdb1, sdc1 and sdd1), formatted as ext4. The create policy was not changed; it is "Existing path, most free space" with a minimum free space of 4G. The options are the defaults:

    defaults,allow_other,cache.files=off,use_ino


    The pooled drive is shown as T20_NAS in the Union Filesystems overview.
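
    For reference, I believe the plugin turns these settings into a mergerfs line in /etc/fstab roughly like the one below. This is only a sketch; the branch paths and the pool mount point are my assumptions rather than copied from the system:

    # Sketch of the generated mergerfs fstab entry (paths assumed).
    # category.create=epmfs is the "existing path, most free space" policy.
    /srv/dev-disk-by-label-d1:/srv/dev-disk-by-label-d2:/srv/dev-disk-by-label-d3 /srv/mergerfs/T20_NAS fuse.mergerfs defaults,allow_other,cache.files=off,use_ino,category.create=epmfs,minfreespace=4G 0 0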


    Here is a screenshot of my file system overview:

    [screenshot: file system overview]

    I am using OMV version 5.6.26-1.


    When I created the share under Services → SMB/CIFS → Shares, I linked it to the pooled drive, which was shown with the correct capacity (almost 15 TB, as one of the three data drives is full).


    I would expect to be able to simply put more files onto the pooled drive via the share. As sdd1 is full and sdb1 and sdc1 are empty, I would expect OMV to automatically keep writing to the empty drives.


    Instead, I get an error message in Windows: on the share (\\T20) there is not enough space left.

    At the moment I can't even create a folder; when I tried writing recently, I got an error message too. Apparently there is not a single byte left. It is definitely not an access problem but a space problem, or rather: although I used the pooled drive when creating the share, the pooling does not work properly.


    I am accessing the OMV server via Windows right now, but I also have a Linux Mint partition on the computer and tried accessing it that way. It does not make a difference.



    Any ideas?

  • macom

    Approved the thread.
    • Official post

    Check the mergerfs policies here. https://github.com/trapexit/mergerfs#policy-descriptions

    If you use the existing path policy, mergerfs will continue writing to the same drive until it is full. Nothing will be written to the rest of the drives unless you create the path on them manually. You can set other policies.
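
    If you want to check or change the create policy without rebuilding the pool, mergerfs also exposes it at runtime through extended attributes on its control file. A minimal sketch, assuming the pool is mounted at /srv/mergerfs/T20_NAS (adjust to your mount point):

    # Read the current create policy (pool mount point is an assumption)
    getfattr -n user.mergerfs.category.create /srv/mergerfs/T20_NAS/.mergerfs
    # Switch it to plain "most free space"
    setfattr -n user.mergerfs.category.create -v mfs /srv/mergerfs/T20_NAS/.mergerfs

    In OMV you should still change the policy in the plugin as well, so the setting survives a remount.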

  • Thanks for the response.


    My settings in the Union Filesystems plugin are these:

    [screenshot: Union Filesystems pool settings]

    As you can see, it is already set to "existing path, most free space". That is exactly the thing. That is the default setting and should be the right one, correct? Or would "Most free space" (without "existing path") make any difference...?


    When I tried SnapRAID, it was set to a different policy (I don't remember which one) because I followed a guide, but then I ran into my problems and recreated the pooled drive. It is almost as if the new policy did not stick...


    At the moment, I have run out of ideas short of wiping the disks and reinstalling everything, maybe upgrading to OMV 6.

    • Official post

    SnapRAID and mergerfs are independent packages. They have nothing to do with each other, although they are often used together in the end. Your problem is exclusively with mergerfs and its policies.

    Read the link I gave you. In it you can see how the policies work. As I said, if you continue with the existing path policy, you must create the path manually on the other disks so that files are written to them. If you don't want to do that, you must change the policy to one without "existing path".

  • Do you fully understand what "existing path" means and what it requires to behave the way you want the pool to work?

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

    • Official post

    maybe upgrading to OMV 6.

    Yeah. You should upgrade to OMV6, of course. OMV5 and Debian 10 are obsolete.

  • SnapRAID and mergerfs are independent packages. [...]

    Sorry, I did not understand that the stress in your reply was, in fact, on "existing path". Rereading it, it now makes sense. I will try setting it to just "mfs" later and see if that changes things. I am not sure what "creating it manually" means other than setting the policy when creating the pool, which I did, only with apparently the wrong policy. Thanks again!

  • Do you fully understand what "existing path" means and what it requires to behave the way you want the pool to work?

    I reread the GitHub documentation and I now have a vague understanding of what "existing path" means. The way I understand it, OMV remembers previous paths. But the path itself, as far as I understand it, did not change, only the policy. That's why I still find it puzzling that "existing path, most free space" results in this out-of-space error. Anyway, I will try "mfs" without existing path and see if it changes things. And I am going to look into how to upgrade, of course. It would be great if I didn't have to wipe and rewrite the full 8 TB drive. Thanks for your input.

    • Official post

    Sorry, I did not understand that the stress in your reply was, in fact, on "existing path". [...]

    Yes, I guess I didn't explain myself as well as I would have liked. Rereading the thread, I said the same thing three times with almost the same words. :)

    When you have a pool created on multiple disks with an existing path policy, it means the following: imagine that the path of your media files is /media. You start copying files into the pool, and mergerfs creates that path on the first disk and copies the first file there. On the rest of the disks, that folder is NOT created. When you write the second file, with that policy, mergerfs looks for the disks that already have that folder. In this case only the first disk has it, so the second and all subsequent files will always end up on the first disk, leaving the other disks without files, because nothing has created that folder on them, and mergerfs will not do it either. For mergerfs to write to the second disk, you would have to go to the command line, find the second disk, and create the /media folder on it manually. Then mergerfs will also copy files to that folder and to that disk.
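
    A minimal sketch of that manual step, assuming the branches are mounted under /srv/ (the disk label here is hypothetical, check your own):

    # List the branch mount points (paths are assumptions)
    ls /srv/
    # Create the same top-level folder on the second disk so the
    # existing path (ep*) policies start considering it as well
    mkdir /srv/dev-disk-by-label-disk2/media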

    However, the behaviour is different if you use a policy that does not look at the existing paths on the different disks. In this case mergerfs creates the /media folder on the first disk and copies the first file. When the second file arrives in the pool, mergerfs follows the criteria of the configured policy, but without looking at the paths already created: it will create the same /media folder on the second disk and copy the file to the second disk. Only the free disk space, or whatever criterion the chosen policy uses, counts then, regardless of whether or not the folder already exists on the disk in question; mergerfs will create it if necessary.
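
    You can see this for yourself by copying a file into the pool and then checking which branch actually received it; a sketch, with the pool and branch paths being assumptions:

    # Copy a file through the pool, then see which disk it landed on
    cp test.mkv /srv/mergerfs/T20_NAS/media/
    ls -l /srv/dev-disk-by-label-disk*/media/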

    I hope I have clarified a little more...

  • Yes, I guess I didn't explain myself as well as I would have liked. [...]

    Wow, thank you so much! THAT really was an exhaustive explanation!

    The only puzzling thing is that I can't quite understand why the existing path policy is the default setting. I am sure there must be good reasons for it, but for my (probably standard) use case it only created problems.

    Thanks again.

    • Official post

    Wow, thank you so much! THAT really was an exhaustive explanation!

    You are welcome. :thumbup:

    The only puzzling thing is that I can't quite understand why the existing path policy is the default setting. I am sure there must be good reasons for it, but for my (probably standard) use case it only created problems.

    Well. Someone decided a long time ago that this policy, of all the existing ones, could be the most suitable default for general use. I couldn't tell you the reasons.

    Personally, I always use most free space; that way the disks fill up evenly, always leaving about the same free space on each of them. I don't like the existing path policy either. It can be useful for organizing the folders on the disks, but in the end it is more complicated to maintain, and I like to keep things simple.

  • IIRC, existing path comes into play if you merge several disks that already have folders with the same names.


    For example, you have two or more disks that each had a ../movies folder on them.

    Now you merge them, and the pool will see ALL items inside both disks, hence the "existing path".

    Any files added to the merged pool path will be balanced between the disks.


    But if you merge a disk that has ../movies/ with another that doesn't, existing path won't create it on the second disk.

    Any files added to movies will only end up on the disk that has ../movies/.
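
    A quick illustration, with hypothetical branch paths:

    # Both disks already have ./movies, so an existing path policy
    # will balance new files in movies/ between the two of them
    mkdir -p /srv/disk1/movies /srv/disk2/movies
    # Only disk1 has ./series; with an ep* policy, everything written
    # to series/ keeps landing on disk1, and disk2 is never used
    mkdir -p /srv/disk1/series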
