I am using UFS drive storage; it says it's full but I have lots of space left.

  • Hi


    I am currently running Debian with Xfce and I have two 8 TB drives pooled together with UFS. But now I have a problem: when I copy some files to the UFS drive it says "error splicing file: No space left on device". So I checked, and one of my drives is full. I find it very strange that I would get a message saying the drive is full when the UFS pool still has a lot of space left. What can I do to resolve this?

  • Check your boot drive; use the du or df command.


    https://opensource.com/article…eck-free-disk-space-linux
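
    For example, a minimal sketch (the path and depth below are just illustrative, not from this thread):

    # Show free space per mounted filesystem, human-readable
    df -h

    # Show which top-level directories use the most space on one filesystem
    # (-x stays on the same filesystem, -d 1 limits the depth to one level)
    sudo du -xh -d 1 / | sort -h | tail -20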

  • Please post the output of "df -ha".

  • This is what I get:


    Filesystem Size Used Avail Use% Mounted on

    sysfs 0 0 0 - /sys

    proc 0 0 0 - /proc

    udev 16G 0 16G 0% /dev

    devpts 0 0 0 - /dev/pts

    tmpfs 3.1G 11M 3.1G 1% /run

    /dev/sdf1 221G 27G 182G 13% /

    securityfs 0 0 0 - /sys/kernel/security

    tmpfs 16G 45M 16G 1% /dev/shm

    tmpfs 5.0M 4.0K 5.0M 1% /run/lock

    tmpfs 16G 0 16G 0% /sys/fs/cgroup

    cgroup2 0 0 0 - /sys/fs/cgroup/unified

    cgroup 0 0 0 - /sys/fs/cgroup/systemd

    pstore 0 0 0 - /sys/fs/pstore

    bpf 0 0 0 - /sys/fs/bpf

    cgroup 0 0 0 - /sys/fs/cgroup/blkio

    cgroup 0 0 0 - /sys/fs/cgroup/cpu,cpuacct

    cgroup 0 0 0 - /sys/fs/cgroup/net_cls,net_prio

    cgroup 0 0 0 - /sys/fs/cgroup/rdma

    cgroup 0 0 0 - /sys/fs/cgroup/memory

    cgroup 0 0 0 - /sys/fs/cgroup/cpuset

    cgroup 0 0 0 - /sys/fs/cgroup/perf_event

    cgroup 0 0 0 - /sys/fs/cgroup/devices

    cgroup 0 0 0 - /sys/fs/cgroup/freezer

    cgroup 0 0 0 - /sys/fs/cgroup/pids

    systemd-1 - - - - /proc/sys/fs/binfmt_misc

    mqueue 0 0 0 - /dev/mqueue

    hugetlbfs 0 0 0 - /dev/hugepages

    debugfs 0 0 0 - /sys/kernel/debug

    sunrpc 0 0 0 - /run/rpc_pipefs

    nfsd 0 0 0 - /proc/fs/nfsd

    /dev/loop1 58M 58M 0 100% /snap/jdownloader2/13

    /dev/loop2 65M 65M 0 100% /snap/gtk-common-themes/1514

    /dev/loop3 56M 56M 0 100% /snap/core18/1932

    /dev/loop0 218M 218M 0 100% /snap/gnome-3-34-1804/60

    /dev/loop4 52M 52M 0 100% /snap/snap-store/518

    /dev/loop5 98M 98M 0 100% /snap/core/10577

    /dev/loop6 52M 52M 0 100% /snap/snap-store/498

    /dev/loop7 219M 219M 0 100% /snap/gnome-3-34-1804/66

    /dev/loop9 98M 98M 0 100% /snap/core/10583

    /dev/loop8 56M 56M 0 100% /snap/core18/1944

    tmpfs 16G 16K 16G 1% /tmp

    binfmt_misc 0 0 0 - /proc/sys/fs/binfmt_misc

    tmpfs 3.1G 4.0K 3.1G 1% /run/user/113

    gvfsd-fuse - - - - /run/user/113/gvfs

    fusectl 0 0 0 - /sys/fs/fuse/connections

    overlay - - - - /var/lib/docker/overlay2/e9b0f01cf00d77bd64b60cde84c700e1bb5dfda1341b719b874fc1b7e68bcbd2/merged

    overlay - - - - /var/lib/docker/overlay2/99f2f8ba7b9e593095282305520ea1280c6c78aef7ec3ccf005e11c33ffe8ec9/merged

    overlay - - - - /var/lib/docker/overlay2/6882eae904399d50aba13a8a7c4d9dbb3cfc90777c8a6e09254b2df427b8f777/merged

    overlay - - - - /var/lib/docker/overlay2/9af81c4da1c27a07fc2ce359e3c3bcb0fc51640fa655e28a61b0398d29ca034d/merged

    overlay - - - - /var/lib/docker/overlay2/24517b8ba77f882990a597eb176fba78603d5474999e8cf402cc85561e56addf/merged

    overlay - - - - /var/lib/docker/overlay2/e9fbcbc980b36dac6de2b5cb0d42a7982839caea4f01f2648d73c392e7c8dc11/merged

    overlay - - - - /var/lib/docker/overlay2/585ae5e136c554b1baf06bdc85e4dbe7a54de8920d393816e08b60f2c508db96/merged

    overlay - - - - /var/lib/docker/overlay2/96b96f6827fed38567e4436e7192383a0af328287e6514f166c63949f3638593/merged

    shm - - - - /var/lib/docker/containers/19109303a1f64ad86d4ce2f9afaeeb6db23d8597c7bb71c360eae5371efa28aa/mounts/shm

    shm - - - - /var/lib/docker/containers/8d4830db4de8c292e69b6817241d21db665f41db8a00a8dc13400e0d96945ef7/mounts/shm

    shm - - - - /var/lib/docker/containers/b5c01f1fd69a78715e3b856cb52b8f616b21ae0e4ae7b3e2b7dc9b76c43ea977/mounts/shm

    shm - - - - /var/lib/docker/containers/67f7bbc3c7cf8ba6d6dd1f5de964bc33265d45f1e17982389a48df91c9e2df56/mounts/shm

    shm - - - - /var/lib/docker/containers/68d9839697c1d54b50a3783684ded35512e2e4a621b03ba8e38703f57b69de77/mounts/shm

    shm - - - - /var/lib/docker/containers/e354d39e8fe788418351d0d7adacf08bd8dcdc7440e8e8005837355e789e9d74/mounts/shm

    nsfs - - - - /run/docker/netns/9f38de60dd9a

    nsfs - - - - /run/docker/netns/9a5aab122c73

    nsfs - - - - /run/docker/netns/259c170a0f37

    nsfs - - - - /run/docker/netns/9bdd1433bec6

    nsfs - - - - /run/docker/netns/0107864296a3

    nsfs - - - - /run/docker/netns/f3f91b37d6e3

    nsfs - - - - /run/docker/netns/d087941632ff

    nsfs - - - - /run/docker/netns/383552d847fd

    tmpfs 3.1G 12K 3.1G 1% /run/user/1000

    gvfsd-fuse 0 0 0 - /run/user/1000/gvfs

    /dev/mapper/sda-crypt 7.3T 7.3T 64K 100% /srv/dev-disk-by-label-erlendstorageI

    /dev/mapper/sdb-crypt 7.3T 81M 7.3T 1% /srv/dev-disk-by-label-erlendstoragepI

    /dev/mapper/sdc-crypt 7.3T 100G 7.2T 2% /srv/dev-disk-by-label-erlendstoragepII

    /dev/mapper/sdd-crypt 7.3T 742G 6.6T 10% /srv/dev-disk-by-label-erlendstorageII

    /dev/mapper/sde-crypt 1.9T 413G 1.5T 23% /srv/dev-disk-by-label-erlendstorageIII

    erlendmainstorage:d01012a5-eb7d-4f98-9de1-b17f816bcf4e 15T 8.0T 6.6T 55% /srv/d01012a5-eb7d-4f98-9de1-b17f816bcf4e

    erlendmainstoragep:e1929b5b-1179-4228-a6ea-16ce178dc576 15T 100G 15T 1% /srv/e1929b5b-1179-4228-a6ea-16ce178dc576

  • /dev/mapper/sda-crypt 7.3T 7.3T 64K 100% /srv/dev-disk-by-label-erlendstorageI


    is full.

    Is that your destination?

  • Yeah, it is full. But this one is not:

    erlendmainstorage:d01012a5-eb7d-4f98-9de1-b17f816bcf4e 15T 8.0T 6.6T 55% /srv/d01012a5-eb7d-4f98-9de1-b17f816bcf4e

    It's this location I am copying files to, and it says "error splicing file: No space left on device". When I use UFS, erlendstorageII should start filling up with data.

  • Move or delete some files on /srv/dev-disk-by-label-erlendstorageI and retest whether the copy works without error messages. This test will show you whether /srv/dev-disk-by-label-erlendstorageI is the problem.
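
    For example, a rough sketch (the file names below are placeholders; the pooled mount point is the one from your df output):

    # Free some space on the full branch by moving a large file to another data disk
    mv /srv/dev-disk-by-label-erlendstorageI/somebigfile.mkv /srv/dev-disk-by-label-erlendstorageII/

    # Then retry the copy through the pooled mount point
    cp /path/to/source.mkv /srv/d01012a5-eb7d-4f98-9de1-b17f816bcf4e/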

  • I tried what you said, but I still get the same message: "error splicing file: No space left on device". I also find this situation very strange, because this should not happen when I copy files into the UFS folder. I have both erlendstorage I & II pooled together as one, with 15 TB in total; when one drive is full it should start copying to the next automatically. I find this UFS situation very confusing.

  • What policy do you have in the UFS configuration?

    If you have a path-preserving policy, you must manually create the folder on the second disk, or change the UFS policy to one that does not preserve paths.

    See https://github.com/trapexit/mergerfs
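
    For example, a minimal sketch (the folder name and pool mount point are placeholders, and the fstab line assumes a manually managed mergerfs mount rather than the OMV plugin):

    # Option 1: keep the path-preserving policy (e.g. epmfs) and create the
    # destination folder on the second branch so it becomes a valid target
    mkdir /srv/dev-disk-by-label-erlendstorageII/Movies

    # Option 2: switch to a create policy that does not preserve paths,
    # e.g. mfs (most free space), via the mergerfs mount options in /etc/fstab:
    # /srv/dev-disk-by-label-erlendstorageI:/srv/dev-disk-by-label-erlendstorageII /srv/pool fuse.mergerfs defaults,allow_other,category.create=mfs 0 0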

    Core I3 3225, RAM 4GB, SnapRaid and UnionFS, 1 parity disk (5TB), 3 data disks (5TB+4TB+4TB), 1 x 32GB USB disk for startup, 1 x 60GB SSD disk for docker

    I DO NOT SPEAK ENGLISH. I translate with google, sorry if sometimes I am not well understood :)

  • It turns out it's better to be concise than to give so many explanations :D:D

  • My pleasure; thank you for your help :thumbup:

  • It turns out it's better to be concise than to give so many explanations

    Well, one adds to the other.

    That is the great thing about a forum: different people can give their input. Some users just want a fix, while others want to understand and put in the effort to learn. And you never know what a particular user wants at a given point in time.

  • If you're using MergerFS (it's called UnionFS on OMV), changing the policy to Most Free Space would solve the problem.

    If memory serves me correctly, that is not the default setting. I wonder if it's possible to change that within the plugin, as this seems to come up repeatedly as a root cause of uneven data distribution.
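
    For anyone managing the mount outside the plugin, a hedged sketch of checking and changing the create policy at runtime via mergerfs' xattr control file (the pool mount point is a placeholder; within OMV itself this corresponds to the policy drop-down in the UnionFS/mergerfs plugin):

    # Show the current create policy of the running pool (requires the attr package)
    getfattr -n user.mergerfs.category.create /srv/pool/.mergerfs

    # Switch it to "most free space" until the next remount
    setfattr -n user.mergerfs.category.create -v mfs /srv/pool/.mergerfs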

  • If memory serves me correctly, that is not the default setting

    The default setting is "epmfs (existing path, most free space)".

    When the first disk is full, users do not know what to do to fill the second. You have to change the policy to keep filling the pool. I have seen multiple threads with the same problem.

  • The default setting is "epmfs (existing path, most free space)".

    :?: I'm sorry, the point I was trying to suggest was that perhaps it would be prudent for the plugin, on its initial install, to default to mfs rather than epmfs. As I have used this with SnapRAID previously, I know how it functions and how it is set up.


    The problem most users have is that they do not change the default setting and then wonder why only one of their drives fills up.

  • Yes, I had understood you. I am translating into English with Google, and I think that sometimes I am not well understood. :D:D
