Running OMV 5.6.13 on Proxmox 7.0-8 with data drives in pass-through mode to OMV.
I had a simple UnionFS pooling and SnapRAID setup with 2 data disks and 1 parity disk.
A file transfer to the pooled drive was active when I added another drive to the pool. I'm not sure whether the active transfer caused it, but Proxmox froze in the background and I had to hard-reset the host. After the reboot I tried again to mount the new drive and add it to the pool, but the Apply button started giving this error:
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color quota 2>&1' with exit code '1': debian:
----------
          ID: quota_off_no_quotas_8102e6e0-4a4a-4e50-b005-04b30925a65e
    Function: cmd.run
        Name: quotaoff --group --user /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e || true
      Result: True
     Comment: Command "quotaoff --group --user /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e || true" run
     Started: 20:43:21.135855
    Duration: 77.332 ms
     Changes:
              ----------
              pid: 2191
              retcode: 0
              stderr:
              stdout:
----------
          ID: quota_check_no_quotas_8102e6e0-4a4a-4e50-b005-04b30925a65e
    Function: cmd.run
        Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e
      Result: False
     Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e" run
     Started: 20:43:21.213758
    Duration: 1490.28 ms
     Changes:
              ----------
              pid: 2193
              retcode: 1
              stderr:
                  quotacheck: Scanning /dev/sdb1 [/srv/dev-disk-by-uuid-8102e6e0-4a4a-4e50-b005-04b30925a65e]
                  quotacheck: Checked 15985 directories and 326596 files
                  quotacheck: Cannot create new quotafile /srv/dev-disk-by-uuid-8102e6e0-4a4a-4e50-b005-04b30925a65e/aquota.user.new: File exists
                  quotacheck: Cannot initialize IO on new quotafile: File exists
              stdout:
                  |/-\|/-\ ... (spinner output trimmed, this continues for a long time) ... done
----------
          ID: disable_quota_service
    Function: service.disabled
        Name: quota
      Result: True
     Comment: Service quota is already disabled, and is in the desired state
     Started: 20:43:22.757994
    Duration: 51.346 ms
     Changes:

Summary for debian
------------
Succeeded: 2 (changed=2)
Failed: 1
------------
Total states run: 3
Total run time: 1.619 s
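The relevant part of that wall of output seems to be quotacheck's complaint at the end: it cannot create aquota.user.new because the file already exists. To check what quota files are actually sitting on the mountpoint (path taken from the error above), something like this should work:

```shell
# Mountpoint as reported in the Salt error above (adjust the UUID to your disk).
MNT=/srv/dev-disk-by-uuid-8102e6e0-4a4a-4e50-b005-04b30925a65e

# List any quota files on the filesystem root; a leftover aquota.user.new
# or aquota.group.new here would explain the "File exists" failure.
ls -la "$MNT"/aquota.* 2>/dev/null || echo "no aquota.* files found"
```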
The drive does mount and I can access the files from the CLI, but I can't share it or do anything else with it on the OMV side.
Things I've tried:
1. I removed the newly added drive from the configs and unmounted it, but that didn't fix the issue.
2. I checked /etc/openmediavault/config.xml and /etc/monit/conf.d/openmediavault-filesystem.conf for duplicate or wrong entries for the drives, but they were fine.
3. After several hours I decided to reinstall the OMV VM, only to see the same error when mounting the data drives. At this point I noticed that the error came from one of the 2 existing data drives, not from the new drive I was trying to add. Apparently something on that disk got corrupted during the hypervisor crash?
4. Ran fsck and e2fsck and let them apply their optimizations, but that didn't help:
root@openmediavault:~# e2fsck -f /dev/sdb1
e2fsck 1.46.2 (28-Feb-2021)
Pass 1: Checking inodes, blocks, and sizes
Inode 51249961 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 85393551 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 100532236 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 109314093 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 109314247 extent tree (at level 1) could be shorter. Optimize<y>? yes
Inode 187762003 extent tree (at level 1) could be shorter. Optimize<y>? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
exos8tb: ***** FILE SYSTEM WAS MODIFIED *****
exos8tb: 342590/244191232 files (2.7% non-contiguous), 1106340941/1953506385 blocks
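Since the error points at a leftover temporary quotafile, my guess (not verified yet) is that the quotacheck run that was interrupted by the crash left aquota.user.new behind. A sketch of what I'm considering trying next: move the stale files aside rather than deleting them, then re-run the failing Salt state with the exact command from the error message:

```shell
# Mountpoint from the Salt error above (adjust the UUID to your disk).
MNT=/srv/dev-disk-by-uuid-8102e6e0-4a4a-4e50-b005-04b30925a65e

# Move any stale temporary quotafiles aside (rename instead of delete,
# so they can be restored if this guess turns out to be wrong).
for f in "$MNT"/aquota.user.new "$MNT"/aquota.group.new; do
    if [ -e "$f" ]; then mv -v "$f" "$f.bak"; fi
done

# Re-run only the quota deployment; the command is taken verbatim from
# the error message. Guarded so this sketch is a no-op on non-OMV hosts.
if command -v omv-salt >/dev/null 2>&1; then
    omv-salt deploy run --no-color quota
fi
```

If that makes the Apply button work again, the .bak files should be safe to delete afterwards, since quotacheck recreates the quota files from scratch anyway.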