Posts by Will_I_Am

    Thanks! I need to dig into this more later to really understand it, as I don't see a way to get rid of the files before mounting the disks (since there's no mntent entry before that?).

    But now I need to figure out the root cause of why this particular system gets Proxmox kernel panics so frequently and eventually ends up in this state. Sigh...

    Got it sorted by just deleting the quota files in the filesystem root:

    aquota.group

    aquota.user.new

    aquota.user


    Then I unmounted and remounted the drive, and after applying the changes the error was gone. Shares work again too. It no longer creates the .new file, just the other two.
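
    For reference, here's roughly what I ran (a sketch of my own commands, not an official procedure; the mount path is the one from my error log further down, so adjust the UUID for your own disk):

    Code
    # turn quotas off first so nothing recreates the files while we work
    MNT=/srv/dev-disk-by-uuid-8102e6e0-4a4a-4e50-b005-04b30925a65e
    quotaoff --user --group "$MNT" || true

    # delete the stale quota files sitting in the filesystem root
    rm -f "$MNT"/aquota.user "$MNT"/aquota.user.new "$MNT"/aquota.group

    # remount, then hit Apply again in the OMV UI
    umount "$MNT"
    mount "$MNT"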


    Could someone explain what these files contain and why they are created when no quotas have been configured by the user?

    Maybe I'm impatient, but... was this too hard a question? Too easy? Just a stupid one? Is this the wrong forum, or have I once again found an issue that no one else has ever seen?

    I would think that others have had their share of hiccups when the system crashes for some reason.

    There seem to be a few other threads with similar issues, but none of the solutions work for me.


    Here's one without any answers:

    RE: Error while mounting drive


    Here's another one with some:

    What is this crazy error? I'm stuck

    My disk is not out of space, but it does have the quota entries and files even though I've never activated any quotas.

    [Screenshot: the errors I get if I start the quota service]
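
    In case it helps anyone compare, this is roughly how I checked for those entries and files (my own sketch; the service name is an assumption on my part):

    Code
    # the usrquota/grpquota mount options (OMV adds these by default, as far as I can tell)
    grep -E 'usrquota|grpquota' /etc/fstab

    # the quota files sitting in the filesystem root
    ls -l /srv/dev-disk-by-uuid-*/aquota.*

    # poke the quota service by hand to see the errors
    systemctl start quota
    systemctl status quota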

    Running OMV 5.6.13 on Proxmox 7.0-8 with data drives in pass-through mode to OMV.

    I had a simple UnionFS pooling and SnapRAID setup with 2 data disks and 1 parity disk.

    There was an active file transfer to the pooled drive when I added another drive to the pool. Not sure if the active transfer caused it, but Proxmox froze in the background and I had to hard-reset the host. After the reboot I retried mounting the drive and adding it to the pool, but the Apply button started giving this error:

    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color quota 2>&1' with exit code '1': debian:
    ----------
              ID: quota_off_no_quotas_8102e6e0-4a4a-4e50-b005-04b30925a65e
        Function: cmd.run
            Name: quotaoff --group --user /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e || true
          Result: True
         Comment: Command "quotaoff --group --user /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e || true" run
         Started: 20:43:21.135855
        Duration: 77.332 ms
         Changes:
                  ----------
                  pid:
                      2191
                  retcode:
                      0
                  stderr:
                  stdout:
    ----------
              ID: quota_check_no_quotas_8102e6e0-4a4a-4e50-b005-04b30925a65e
        Function: cmd.run
            Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e
          Result: False
         Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e" run
         Started: 20:43:21.213758
        Duration: 1490.28 ms
         Changes:
                  ----------
                  pid:
                      2193
                  retcode:
                      1
                  stderr:
                      quotacheck: Scanning /dev/sdb1 [/srv/dev-disk-by-uuid-8102e6e0-4a4a-4e50-b005-04b30925a65e]
                      quotacheck: Checked 15985 directories and 326596 files
                      quotacheck: Cannot create new quotafile /srv/dev-disk-by-uuid-8102e6e0-4a4a-4e50-b005-04b30925a65e/aquota.user.new: File exists
                      quotacheck: Cannot initialize IO on new quotafile: File exists
                  stdout:
                      |/-\|/-\|/-\|/-\ .... (THIS CONTINUES FOR A LONG TIME) .... |/-\|/done
    ----------
              ID: disable_quota_service
        Function: service.disabled
            Name: quota
          Result: True
         Comment: Service quota is already disabled, and is in the desired state
         Started: 20:43:22.757994
        Duration: 51.346 ms
         Changes:

    Summary for debian
    ------------
    Succeeded: 2 (changed=2)
    Failed:    1
    ------------
    Total states run:     3
    Total run time:   1.619 s
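
    For what it's worth, this can be reproduced outside the web UI; as far as I can tell the Apply button runs the same omv-salt command shown at the top of the error, and the failing step can also be run by hand:

    Code
    # what Apply runs (taken from the error message above)
    omv-salt deploy run quota

    # or just the failing quotacheck step against the disk (UUID from the log)
    quotacheck --user --group --create-files --no-remount --verbose \
        /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e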


    The drive does mount and I can access the files from the CLI, but I can't share it or do anything else with the drive on the OMV side.


    Things I've tried:

    1. I removed the added drive from the configs and unmounted it, but that didn't fix the issue.

    2. I checked for duplicate or wrong entries for the drives in /etc/openmediavault/config.xml and /etc/monit/conf.d/openmediavault-filesystem.conf, but they were fine (see the sketch after this list).

    3. After several hours I decided to just reinstall the OMV VM, only to see the same error when mounting the data drives. At this point I noticed that it was one of the two existing data drives giving the error, not the new drive I was trying to add. Apparently something on this disk got corrupted during the hypervisor crash?

    4. Ran fsck and e2fsck and let them apply their fixes and optimizations, but that didn't help (rough commands after this list).
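
    Rough sketches of steps 2 and 4, in case anyone wants to compare (my own commands, with the UUID taken from the error log above):

    Code
    # step 2: look for duplicate or stale entries for the drive
    grep -n '8102e6e0-4a4a-4e50-b005-04b30925a65e' /etc/openmediavault/config.xml
    grep -n '8102e6e0' /etc/monit/conf.d/openmediavault-filesystem.conf

    # step 4: full check on the unmounted filesystem
    umount /srv/dev-disk-by-uuid-8102e6e0-4a4a-4e50-b005-04b30925a65e
    e2fsck -f -v /dev/disk/by-uuid/8102e6e0-4a4a-4e50-b005-04b30925a65e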