Cannot reclaim space after deleting files from ext4

  • I'm still running OMV 5.5.23-1; at some point I will take on the project of upgrading to OMV 6.

    I have completely filled one of my drives during an rclone copy job.

    I attempted to add the drive to a new pool with unionfs, where I already have other pre-existing pools set up and working.

    This likely failed because the drive was full, so I used my Krusader Docker container to delete a directory of around 93 GB from the drive.

    No space was reclaimed.

    I then deleted more data directly through the OMV shell, and verified that the data removed with Krusader no longer existed when viewed through the shell.

    Still no space was reclaimed.
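    For reference, the kind of check I was doing from the shell looked roughly like this; the mount point and directory name below are placeholders, not the actual paths:

    Code
    MNT=/srv/dev-disk-by-uuid-xxxx      # placeholder; the real mount point is one of the /srv/dev-disk-by-uuid-* directories
    rm -rf "$MNT/old-backup-dir"        # delete a directory tree (placeholder name)
    ls "$MNT"                           # confirm it is gone
    sync                                # flush any pending writes
    df -h "$MNT"                        # re-check free space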

    I've rebooted OMV multiple times.

    I've booted into SystemRescue and run fsck -f on the partition.

    Here's the output:

    I then mounted the volume from within SystemRescue.

    Here are the contents of the root directory on the offending volume:

    I checked free space with df -h, but it reports 0 bytes available.

    I then ran df without -h.

    As shown below, this shows fewer used blocks than available blocks.

    There are 15,619,855,388 blocks available, with 15,390,254,704 used, for a difference of 229,600,684.

    229,600,684 x 1,024 = 235,111,100,416 bytes, or about 235 GB of unused space.
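    To make the arithmetic reproducible, this is roughly how the numbers line up (the mount point is a placeholder for the affected volume):

    Code
    MNT=/srv/dev-disk-by-uuid-xxxx                  # placeholder for the affected volume
    df -h "$MNT"                                    # human-readable; shows 0 available
    df "$MNT"                                       # raw 1K-block counts
    echo $((15619855388 - 15390254704))             # 229600684 (difference in 1K blocks)
    echo $(((15619855388 - 15390254704) * 1024))    # 235111100416 bytes, about 235 GB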

    I did note that the number of unused blocks increased each time I deleted more data, so I could watch the number of used blocks shrink.

    The 235GB sounds about right for the amount of data I've deleted.

    However, I cannot get the drive to report any free space in either SystemRescue or OMV.


    After I mount the volume in the OMV web GUI, I get errors like the one below, likely because it cannot write to the drive since it is full.

    The volume does mount and is readable, even though the GUI churns for a long time and then spits out errors.

    Code
    ----------
              ID: quota_off_no_quotas_5d2b87af-82a7-47d0-8057-6d3beb918300
        Function: cmd.run
            Name: quotaoff --group --user /dev/disk/by-uuid/5d2b87af-82a7-47d0-8057-6d3beb918300 || true
          Result: True
         Comment: Command "quotaoff --group --user /dev/disk/by-uuid/5d2b87af-82a7-47d0-8057-6d3beb918300 || true" run
         Started: 23:20:08.023411
        Duration: 282.797 ms
         Changes:
                  ----------
                  pid: 22029
                  retcode: 0
                  stderr:
                  stdout:
    ----------
              ID: quota_check_no_quotas_5d2b87af-82a7-47d0-8057-6d3beb918300
        Function: cmd.run
            Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/5d2b87af-82a7-47d0-8057-6d3beb918300
          Result: False
         Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/5d2b87af-82a7-47d0-8057-6d3beb918300" run
         Started: 23:20:08.306764
        Duration: 36473.723 ms
         Changes:
                  ----------
                  pid: 22031
                  retcode: 1
                  stderr:
                      quotacheck: Scanning /dev/sdh1 [/srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300]
                      quotacheck: Checked 106711 directories and 4271731 files
                      quotacheck: Cannot create new quotafile /srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300/aquota.user.new: File exists
                      quotacheck: Cannot initialize IO on new quotafile: File exists
                      quotacheck: Cannot create new quotafile /srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300/aquota.group.new: File exists
                      quotacheck: Cannot initialize IO on new quotafile: File exists
                  stdout:

    It actually spits out output like this for the quota checks on all the other volumes as well, but those succeed. Here's an example from another volume.

    There is a lot more of this in the error output that I did not include.

    Code
    ----------
              ID: quota_off_no_quotas_5876f864-1970-4ff9-b70d-a1a8cd9c23bd
        Function: cmd.run
            Name: quotaoff --group --user /dev/disk/by-uuid/5876f864-1970-4ff9-b70d-a1a8cd9c23bd || true
          Result: True
         Comment: Command "quotaoff --group --user /dev/disk/by-uuid/5876f864-1970-4ff9-b70d-a1a8cd9c23bd || true" run
         Started: 23:17:49.365524
        Duration: 109.425 ms
         Changes:
                  ----------
                  pid: 15527
                  retcode: 0
                  stderr:
                  stdout:
    ----------
              ID: quota_check_no_quotas_5876f864-1970-4ff9-b70d-a1a8cd9c23bd
        Function: cmd.run
            Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/5876f864-1970-4ff9-b70d-a1a8cd9c23bd
          Result: True
         Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/5876f864-1970-4ff9-b70d-a1a8cd9c23bd" run
         Started: 23:17:49.475592
        Duration: 39055.941 ms
         Changes:
                  ----------
                  pid: 15530
                  retcode: 0
                  stderr:
                      quotacheck: Scanning /dev/sdm1 [/srv/dev-disk-by-uuid-5876f864-1970-4ff9-b70d-a1a8cd9c23bd]
                      quotacheck: Checked 28792 directories and 31733 files
                  stdout:


    I've rebooted OMV multiple times and rerun fsck -f multiple times. After the first run, where it optimized the file system, fsck has had nothing to do on subsequent runs.
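    For completeness, the check from SystemRescue amounted to roughly this (device name taken from the quotacheck output above; the filesystem must not be mounted while it runs):

    Code
    umount /dev/sdh1 2>/dev/null || true    # make sure the filesystem is not mounted
    fsck -f /dev/sdh1                       # force a full check even if the fs is marked clean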

    I'm not sure what else to do here.

    Please help!

  • I was able to stop the errors in the OMV web GUI after deleting the aquota.user.new and aquota.group.new files from the root of the drive.
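    In case it helps anyone else, the cleanup just meant removing the two stale temporary quota files that quotacheck complained about (paths taken from the error output above):

    Code
    # Stale files left behind by an earlier failed quotacheck run
    rm /srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300/aquota.user.new
    rm /srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300/aquota.group.new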

    Then I was able to create the new storage pool in unionfs, so the other, empty drive is now taking writes to the directory structure.

    However, I still have not reclaimed any space on this drive and have deleted about 350 GB of data in total at this point.

    Please, if anyone knows what else I can do, let me know. Thanks!

  • Code
    lsof directory | grep deleted | sort -nrk 7 | more

    Maybe you can use the above command to find files that have been deleted but not yet released; note that "directory" should be replaced with the directory you want to check.

    Note the first two columns: they are the process name and process ID. Close that process so the OS can actually release the deleted files' space.
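    A minimal sketch of that idea, using lsof's +L1 option (which selects open files with a link count of zero, i.e. deleted but still held open) and assuming the affected volume is the one mounted under the UUID path from the earlier logs:

    Code
    MNT=/srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300   # assumed mount point
    # List deleted-but-open files that live on this mount
    lsof -nP +L1 | grep "$MNT"
    # Columns 1 and 2 are COMMAND and PID; stopping or restarting that process
    # is what lets the kernel actually free the space, e.g.:
    # kill <PID>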

    Of course, this may not solve your problem; I usually copy the files off and reformat the disk.


  • I've already checked with lsof; in any case, since the host OS has been rebooted, that by itself would have released the files if open file handles were the cause.

    As mentioned, I've also viewed this drive by booting into SystemRescue, which is an OS independent of OMV, and both show the same thing.

  • Update:

    I previously deleted around 350 GB of data with no space reclaimed.

    Now I have removed another directory reporting usage of over 800 GB.

    The drive is now reporting 479 GB of free space.


    I was looking earlier at the number of reserved blocks, and I'm providing it in case it's relevant:

    Code
    tune2fs -l /dev/sdh1 | grep "Reserved block count"
    Reserved block count:     195323468

    195,323,468 multiplied by the block size of 4,096 = 800,044,924,928 bytes (about 800 GB).
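    If it's relevant: df's "Available" column excludes these reserved blocks (ext4 reserves a percentage of the filesystem, 5% by default, for root), which might explain a filesystem showing 0 bytes available while the raw block counts still show a gap. A quick sketch of how those figures can be checked, using the same device as above; the tune2fs -m line is only an illustration, not something I have run:

    Code
    # Block size, total blocks and reserved blocks for the filesystem
    tune2fs -l /dev/sdh1 | grep -E 'Block size|Block count|Reserved block count'

    # Reserved space in bytes = reserved blocks x block size
    echo $((195323468 * 4096))    # 800044924928, roughly 800 GB held back from non-root users

    # Illustration only: the reserved percentage could be lowered, e.g. to 1%
    # tune2fs -m 1 /dev/sdh1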


    lsof does not show any open files in the directory path I've removed from.

    Can someone explain what may be going on, and why such a disproportionate amount of space was reclaimed?

    Could some of the directories created by the most recent backup job that filled the drive have been mis-reported in how much space they were actually consuming?

    Did I think I deleted more data than I actually did because of inaccurate consumption reporting?

    If directory consumption mis-reporting is possible and was occurring, wouldn't fsck have fixed it?
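    If it's useful, one way I could sanity-check that kind of mis-reporting, sketched with an illustrative mount point and directory name:

    Code
    MNT=/srv/dev-disk-by-uuid-xxxx              # placeholder for the affected volume
    # Allocated size vs. apparent (logical) size; sparse or preallocated files differ here
    du -sh "$MNT/some-directory"
    du -sh --apparent-size "$MNT/some-directory"

    # Compare the filesystem's own view against a per-tree total
    df -k "$MNT"
    du -skx "$MNT"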
