So I have 5 drives in my mhddfs pool, configured from the webgui with default settings. All of them were empty at the time of creation. I started filling the pool and it is now about 80% full. I did some sorting of the files (renaming, deleting, adding more, etc.) trying to organize everything.
I started having problems copying large files (8 GB) to the pool from a Windows PC. I later found that free space had gotten very low on the first disk: it was trying to copy the big file onto that disk, where it obviously would not fit in the free space reported by the webgui. Windows and other copying tools reported that the "network name was no longer available" and stopped the copy altogether.
According to this site, mhddfs should move the data on the fly and cause no interruption to the application.
I quote:
If an overflow arises while writing to the hdd1 then a file
content already written will be transferred to a hdd containing
enough of free space for a file. The transferring is processed
on-the-fly, fully transparent for the application that is
writing. So this behaviour simulates a big file system.
WARNING: The filesystems are combined must provide a possibility
to get their parameters correctly (e.g. size of free space).
Otherwise the writing failure can occur (but data consistency
will be ok anyway).
I was wondering how this problem occurred. Does mhddfs check the amount of free space before receiving a file, or only once the disk is full? The minimum threshold limit was set at "4G". I increased it to "15GB". The problem went away after I rebooted, but I'm not sure why the 4G parameter did not work. At times the "available" space on sda1 showed 0.00 MiB.
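For reference, I believe the webgui's "minimum threshold" maps to mhddfs's mlimit mount option (whose documented default is 4G), so the pool would be mounted roughly like this; the paths are just examples, the webgui generates the actual command:

```shell
# Hypothetical hand-mount of a 5-disk pool. mlimit is the free-space
# threshold: once a member disk drops below it, mhddfs writes new
# data to the next disk with enough room instead.
mhddfs /srv/disk1,/srv/disk2,/srv/disk3,/srv/disk4,/srv/disk5 \
    /srv/pool -o mlimit=15G,allow_other

# mhddfs relies on each member filesystem reporting its free space
# correctly, so it is worth checking the per-disk numbers directly:
df -h /srv/disk1 /srv/disk2 /srv/disk3 /srv/disk4 /srv/disk5
```

If df shows 0 available on sda1 while the webgui still routes writes there, that would at least be consistent with the warning quoted above about filesystems needing to report their parameters correctly.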
Can someone please explain what is cooking here? Is it because all the renaming and sorting of files let a bug creep in somewhere? Or perhaps my second and third drives also did not have room for the file, causing a bottleneck and buffer overrun?