I'm still running OMV 5.5.23-1; at some point I'll take on the project of upgrading to OMV 6.
I have completely filled one of my drives during an rclone copy job.
I attempted to add the drive to a new pool with unionfs, alongside other pre-existing pools that are set up and working.
This failed, likely because the drive was full, so I used my Krusader Docker container to delete a directory of around 93 GB from the drive, but no space was reclaimed.
I then deleted more data directly through the OMV shell and verified that the data removed with Krusader no longer existed when viewed through the shell. Still, no space was reclaimed.
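To quantify the discrepancy, I want to compare what the files on the drive actually add up to against what the filesystem claims is used. This is the kind of comparison I mean (just a sketch, using the SystemRescue mount point from later in this post):

# add up everything actually present on this filesystem (-x stays on one device)
du -sx /mnt/16TBBackup01
# compare against the filesystem's own accounting
df /dev/sdh1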
I've rebooted OMV multiple times.
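As I understand it, space from deleted files isn't reclaimed while some process still holds them open, which is part of why I kept rebooting. For completeness, this is the check I'd use to rule that out (a sketch; lsof +L1 lists open files whose link count is zero, i.e. deleted but still held open, and the grep pattern is the drive's OMV mount path from the quota errors below):

# look for deleted-but-still-open files on this drive
lsof +L1 | grep dev-disk-by-uuid-5d2b87af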
I've booted into SystemRescue and run fsck -f on the partition. Here's the output:
[root@sysrescue ~]# fsck --f /dev/sdh1
fsck from util-linux 2.36.1
e2fsck 1.45.6 (20-Mar-2020)
Pass 1: Checking inodes, blocks, and sizes
Inode 6208544 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 6212840 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 6273036 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 9681761 extent tree (at level 1) could be narrower. Optimize<y>? yes
Inode 9681773 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9681799 extent tree (at level 1) could be narrower. Optimize<y>? yes
Inode 9682235 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9682262 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9683905 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9684001 extent tree (at level 1) could be narrower. Optimize ('a' enables 'yes' to all) <y>? yes
yyyInode 9685749 extent tree (at level 2) could be narrower. Optimize ('a' enables 'yes' to all) <y>? yes
yyyyyyyyyInode 9691490 extent tree (at level 1) could be narrower. Optimize ('a' enables 'yes' to all) <y>? yes
yyyInode 9691934 extent tree (at level 2) could be narrower. Optimize ('a' enables 'yes' to all) <y>? yes
Inode 9692198 extent tree (at level 1) could be narrower. Optimize<y>? yes
Inode 9693853 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 9734254 extent tree (at level 1) could be narrower. Optimize<y>? yes
Inode 9742738 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 9781270 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9826355 extent tree (at level 1) could be narrower. Optimize<y>? yes
Inode 9908460 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 9975818 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9975821 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9975827 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9975831 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9975835 extent tree (at level 2) could be narrower. Optimize<y>? yes
Inode 9975840 extent tree (at level 2) could be narrower. Optimize<y>? yes to all
Inode 14032562 extent tree (at level 2) could be narrower. Optimize? yes
I clipped the rest...
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
16TBBackup01: ***** FILE SYSTEM WAS MODIFIED *****
16TBBackup01: 4378837/15259648 files (0.4% non-contiguous), 3854186130/3906469376 blocks
I then mounted the volume from within SystemRescue. Here are the contents of the root directory on the offending volume:
[root@sysrescue /mnt]# mount /dev/sdh1 /mnt/16TBBackup01/
[root@sysrescue /mnt]# cd /mnt/16TBBackup01/
[root@sysrescue /mnt/16TBBackup01]# ls -lan
total 60
drwxrws---+ 7 0 100 4096 Jul 4 17:34 .
drwxr-xr-x 1 0 0 60 Jul 5 02:31 ..
drwxrws---+ 3 0 100 4096 Jan 7 23:44 16TBBackup01
-rwxrwx---+ 1 0 100 6144 Jul 5 02:08 aquota.group
-rw-------+ 1 0 100 0 Jul 4 17:34 aquota.group.new
-rwxrwx---+ 1 0 100 7168 Jul 5 02:08 aquota.user
-rw-------+ 1 0 100 0 Jul 4 17:34 aquota.user.new
drwxrws---+ 6 0 100 4096 Mar 5 05:24 Backup
drwxrwx---+ 3 0 100 4096 Jul 4 21:49 .bzvol
drwxrwx---+ 2 0 100 16384 Dec 6 2022 lost+found
drwxrwx---+ 2 0 100 4096 Jan 3 2023 old.bzvol
I checked free space with df -h, but it reports 0 bytes available:
[root@sysrescue /mnt/16TBBackup01/Backup]# df -h
Filesystem Size Used Avail Use% Mounted on
dev 7.8G 0 7.8G 0% /dev
run 7.8G 94M 7.8G 2% /run
copytoram 12G 638M 12G 6% /run/archiso/copytoram
cowspace 3.9G 952K 3.9G 1% /run/archiso/cowspace
/dev/loop1 638M 638M 0 100% /run/archiso/sfs/airootfs
airootfs 3.9G 952K 3.9G 1% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
tmpfs 7.8G 0 7.8G 0% /tmp
tmpfs 7.8G 2.1M 7.8G 1% /etc/pacman.d/gnupg
tmpfs 1.6G 16K 1.6G 1% /run/user/0
/dev/sdh1 15T 15T 0 100% /mnt/16TBBackup01
I then ran df without -h. As shown below, this reports fewer blocks used than the filesystem's total: 15,619,855,388 1K blocks in total, with 15,390,254,704 used, for a difference of 229,600,684 blocks. 229,600,684 × 1024 = 235,111,100,416 bytes, or about 235 GB of unused space. I also took note that this gap grew each time I deleted more data, so I was able to watch the number of used blocks shrink.
[root@sysrescue /mnt/16TBBackup01/Backup/rclonecrypt/OMV_Backup/Storage/Pictures/A7IV/2022]# df
Filesystem 1K-blocks Used Available Use% Mounted on
dev 8145480 0 8145480 0% /dev
run 8172068 95616 8076452 2% /run
copytoram 12258104 652760 11605344 6% /run/archiso/copytoram
cowspace 4086036 952 4085084 1% /run/archiso/cowspace
/dev/loop1 652800 652800 0 100% /run/archiso/sfs/airootfs
airootfs 4086036 952 4085084 1% /
tmpfs 8172068 0 8172068 0% /dev/shm
tmpfs 4096 0 4096 0% /sys/fs/cgroup
tmpfs 8172068 0 8172068 0% /tmp
tmpfs 8172068 2068 8170000 1% /etc/pacman.d/gnupg
tmpfs 1634412 16 1634396 1% /run/user/0
/dev/sdh1 15619855388 15390254704 0 100% /mnt/16TBBackup01
That 235 GB sounds about right for the amount of data I've deleted; however, I cannot get the drive to report any free space in either SystemRescue or OMV.
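One thing I'm starting to wonder about is ext4's reserved blocks: as I understand it, df shows Avail as 0 whenever free space is at or below the root reserve, which defaults to 5% and on a 16 TB drive would be far more than the ~235 GB I've freed so far. I haven't verified this yet, but here is the check, and if it pans out the adjustment, that I have in mind (a sketch, using the device name from SystemRescue):

# compare the reserved block count against the free space
tune2fs -l /dev/sdh1 | grep -Ei 'block count|block size'
# if the reserve explains the gap, shrink it (e.g. to 1%)
tune2fs -m 1 /dev/sdh1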
After I mount the volume in the OMV web GUI, I get errors like the one below, likely because OMV cannot write to the drive while it is full. The volume does mount and is readable, even though the GUI churns for a long time before spitting out the errors.
----------
          ID: quota_off_no_quotas_5d2b87af-82a7-47d0-8057-6d3beb918300
    Function: cmd.run
        Name: quotaoff --group --user /dev/disk/by-uuid/5d2b87af-82a7-47d0-8057-6d3beb918300 || true
      Result: True
     Comment: Command "quotaoff --group --user /dev/disk/by-uuid/5d2b87af-82a7-47d0-8057-6d3beb918300 || true" run
     Started: 23:20:08.023411
    Duration: 282.797 ms
     Changes:
              ----------
              pid: 22029
              retcode: 0
              stderr:
              stdout:
----------
          ID: quota_check_no_quotas_5d2b87af-82a7-47d0-8057-6d3beb918300
    Function: cmd.run
        Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/5d2b87af-82a7-47d0-8057-6d3beb918300
      Result: False
     Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/5d2b87af-82a7-47d0-8057-6d3beb918300" run
     Started: 23:20:08.306764
    Duration: 36473.723 ms
     Changes:
              ----------
              pid: 22031
              retcode: 1
              stderr:
                  quotacheck: Scanning /dev/sdh1 [/srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300]
                  quotacheck: Checked 106711 directories and 4271731 files
                  quotacheck: Cannot create new quotafile /srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300/aquota.user.new: File exists
                  quotacheck: Cannot initialize IO on new quotafile: File exists
                  quotacheck: Cannot create new quotafile /srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300/aquota.group.new: File exists
                  quotacheck: Cannot initialize IO on new quotafile: File exists
              stdout:
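Given the zero-byte aquota.user.new and aquota.group.new files visible in the root listing above, my guess is that they are leftovers from an interrupted quotacheck and that quotacheck now refuses to overwrite them. If that's right, removing the stale work files should let it run clean; something like this (paths taken from the error above):

# remove the stale quotacheck work files so it can recreate them
rm /srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300/aquota.user.new
rm /srv/dev-disk-by-uuid-5d2b87af-82a7-47d0-8057-6d3beb918300/aquota.group.new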
It actually spits out blocks like this for the quota checks on all the other volumes as well, but those all succeed (there is a lot more of this in the error output that I did not include). Here's an example from another volume:
----------
          ID: quota_off_no_quotas_5876f864-1970-4ff9-b70d-a1a8cd9c23bd
    Function: cmd.run
        Name: quotaoff --group --user /dev/disk/by-uuid/5876f864-1970-4ff9-b70d-a1a8cd9c23bd || true
      Result: True
     Comment: Command "quotaoff --group --user /dev/disk/by-uuid/5876f864-1970-4ff9-b70d-a1a8cd9c23bd || true" run
     Started: 23:17:49.365524
    Duration: 109.425 ms
     Changes:
              ----------
              pid: 15527
              retcode: 0
              stderr:
              stdout:
----------
          ID: quota_check_no_quotas_5876f864-1970-4ff9-b70d-a1a8cd9c23bd
    Function: cmd.run
        Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/5876f864-1970-4ff9-b70d-a1a8cd9c23bd
      Result: True
     Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/5876f864-1970-4ff9-b70d-a1a8cd9c23bd" run
     Started: 23:17:49.475592
    Duration: 39055.941 ms
     Changes:
              ----------
              pid: 15530
              retcode: 0
              stderr:
                  quotacheck: Scanning /dev/sdm1 [/srv/dev-disk-by-uuid-5876f864-1970-4ff9-b70d-a1a8cd9c23bd]
                  quotacheck: Checked 28792 directories and 31733 files
              stdout:
I've rebooted OMV multiple times and rerun fsck -f several times; after the first run, where it optimized the extent trees, fsck has found nothing further to fix.
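Since fsck now comes back clean, the only other cross-check I can think of is reading the free-block count straight from the superblock to see whether it agrees with df. A sketch:

# print only the superblock summary and pull out the free block count
dumpe2fs -h /dev/sdh1 | grep -i 'free blocks'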
I'm not sure what else to do here.
Please help!