quotaon failed to start

  • Hi All,


    It is not really causing problems, but it is permanently shown in https://omv:9090/system/services:

    Quote

    Status: Failed to start
    Path: /usr/lib/systemd/system/quotaon@.service

    I read about quotas in Debian in general and found this systemd issue: https://github.com/systemd/systemd/issues/29905 ("quotaon fails when ext4 has a quota system file and a quota mount option is present #29905").

    So maybe this is also the case on my OMV, which uses ext4 on all drives.


    But I don't know why this happens or how to solve it. I don't consciously use quotas, and I haven't knowingly configured or enabled them. Is quotaon in conflict with another service that is already active here?
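A rough way to inspect this from the shell (the /srv/... paths below are placeholders, not my actual mount points):

```shell
# Which quotaon instance failed, exactly?
systemctl list-units --state=failed 'quotaon@*'

# Which mounted filesystems still carry quota mount options?
findmnt -rn -o TARGET,OPTIONS | grep -E '(usr|grp)j?quota'

# Leftover quota files on those filesystems are the trigger described
# in systemd issue #29905:
ls -l /srv/*/aquota.user /srv/*/aquota.group 2>/dev/null
```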




    heinso


    Currently 8.2.7-1 (Synchrony), 64-bit, on my ASRock Celeron J4105 NAS build with a conventional 400W PSU. RAID 5 array: 4x WD Red 2TB; 256 GB NVMe SSD as the system drive on a PCIe v1 2x SATA III adapter; 2x 4GB SO-DIMM 2400 RAM; and a 3.5", 6TB BU-HDD on the same SATA controller via PCIe as a full backup for the RAID array. No Backup -> No Mercy

  • heinso

    Added the Label OMV 8.x
    • Official Post

    You can install the mounteditor plugin and remove the quota stuff from your filesystems. Then reboot and see if the problem is still there.


    I would run sudo omv-salt deploy run quota before rebooting but after removing the quota stuff in the mounteditor plugin.
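    As a sketch, the suggested order of operations (assuming the quota options have already been removed for each filesystem in the mounteditor plugin):

```shell
# Re-deploy OMV's quota configuration, then reboot:
sudo omv-salt deploy run quota
sudo reboot

# After the reboot, verify no quotaon instance is failing anymore;
# this should print nothing:
systemctl list-units --state=failed --no-legend 'quotaon@*'
```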

    omv 8.2.6-1 synchrony | 6.17 proxmox kernel

    plugins :: omvextrasorg 8.0.2 | kvm 8.2.4 | compose 8.1.12 | cterm 8.0 | borgbackup 8.1.9 | tempmon 8.0.3 | mergerfs 8.0.1 | scripts 8.0.3 | writecache 8.1.10


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Ok, mount editor is installed and the mount options look like this:


    SSD-System drive and 3,5" BU drive:

    defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl


    Raid 5 Array 4x 3,5":

    defaults,nofail,user_xattr,usrquota,grpquota,acl


    If I understood you right, removing the "quota stuff" means getting rid of the quota-related options above, followed by a sudo omv-salt deploy run quota and a reboot.
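    For illustration only (the mounteditor plugin does this for you), dropping the quota-related tokens from those option strings amounts to something like:

```shell
opts1='defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl'
opts2='defaults,nofail,user_xattr,usrquota,grpquota,acl'

strip_quota() {
  # remove usrquota/grpquota, journaled usrjquota=/grpjquota=, and jqfmt=
  sed -E 's/,(usr|grp)j?quota(=[^,]*)?//g; s/,jqfmt=[^,]*//g'
}

echo "$opts1" | strip_quota   # -> defaults,nofail,user_xattr,acl
echo "$opts2" | strip_quota   # -> defaults,nofail,user_xattr,acl
```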

    heinso



    • Official Post

    If I understood you right, removing the "quota stuff" means getting rid of the quota-related options above, followed by a sudo omv-salt deploy run quota and a reboot.

    Just click the "remove quota options" button for each filesystem in the mounteditor plugin and then run the command and reboot.


  • Thanks a lot.


    OK, found it (the blue icon with the lines and a minus).

    Quota stuff removed.

    The command output is the code below (I hope it's OK with all the error lines), and after rebooting the failing service start is gone ;).

    heinso



    • Official Post

    The errors are fine. I would run sudo omv-salt deploy run quota again after rebooting.


  • both done:

    heinso



  • heinso

    Added the Label resolved
    • Official Post

    That looks good.

