Posts by thk

    Solved - kind of my own fault, but... I totally agree with sakulthefirst - the UI error is very misleading. The OMV developers should definitely improve the UI error and do some kind of system readiness check before applying a new config.

    Running the quota command manually gave me the very simple answer to the issue:


    Code
    > omv-salt deploy run quota

        ID: quota_check_no_quotas_8221b09f-a0ac-4ae1-b216-47641cebe874
        Function: cmd.run
        Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/8221b09f-a0ac-4ae1-b216-47641cebe874
        Result: False
        Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/8221b09f-a0ac-4ae1-b216-47641cebe874" run
        Started: 20:15:17.191684
        Duration: 40.796 ms
        Changes:
        ----------
        pid:
            3665407
        retcode:
            2
        stderr:
            quotacheck: Scanning /dev/mapper/st2000lm007-backup [/srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874]
            quotacheck: Cannot stat old user quota file /srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874/aquota.user: No such file or directory. Usage will not be subtracted.
            quotacheck: Cannot stat old group quota file /srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874/aquota.group: No such file or directory. Usage will not be subtracted.
            quotacheck: Cannot stat old user quota file /srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874/aquota.user: No such file or directory. Usage will not be subtracted.
            quotacheck: Cannot stat old group quota file /srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874/aquota.group: No such file or directory. Usage will not be subtracted.
            quotacheck: Checked 952 directories and 13716 files
            quotacheck: Old file not found.
            quotacheck: Cannot allocate new quota block (out of disk space).
            quotacheck: Cannot write quota (id 100): No space left on device
        stdout:
            done
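
    For anyone searching later: the relevant lines are the retcode 2 and "Cannot allocate new quota block (out of disk space)" - quotacheck could not write the aquota files because the filesystem was full. A minimal check and re-run, reusing the paths and command from the output above:

    Code
    # How full is the affected filesystem?
    df -h /srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874

    # After freeing some space, re-run the failing deployment:
    omv-salt deploy run quota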


    :rolleyes:


    It would be much nicer to show that output in the UI error instead - it could have saved me a lot of hours.

    Why it's doing the quota check in the first place... I don't know - I'm not using any quota. Maybe mergerfs...?

    I managed to do all steps, but OMV still fails.

    • service quota status = Disabled
    • delete the ,usrjquota=aquota.user,grpjquota=aquota.group entries from all mountpoints in /etc/fstab = ok
    • reboot = ok
    • rm /srv/filesystems/aquota.* = successful
    • unmount new filesystem, hit Apply = Error
    • hit "revert" = OK (new filesystem unmounted though)

    OMV just puts ,usrjquota=aquota.user,grpjquota=aquota.group back into /etc/fstab entries, and also creates the aquota files on all filesystems again.
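
    For reference, this is roughly what an OMV-managed ext4 entry looks like before and after stripping the quota options (UUID and mount path taken from the quotacheck output above; the exact option string varies by OMV version, so treat it as illustrative). Since OMV regenerates these entries from its own config, a manual edit only holds until the next deploy:

    Code
    # OMV-generated entry with journalled quota enabled (illustrative):
    /dev/disk/by-uuid/8221b09f-a0ac-4ae1-b216-47641cebe874 /srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2

    # Same entry with the quota mount options removed:
    /dev/disk/by-uuid/8221b09f-a0ac-4ae1-b216-47641cebe874 /srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874 ext4 defaults,nofail,user_xattr,acl 0 2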


    So, I think we can agree that it is OMV that does the "quota thing"?


    However, the error only appears when I make changes to filesystems. Apparently other changes, e.g. sharing, are applied without error. But I'm still not happy about the filesystem error.

    I'm only trying to add a new disk. I do not touch any of the existing filesystems. If mergerfs is using aquota files, it's using them on all filesystems - not only the two merged ones. However, I can't find any documentation on that.


    Also, it seems that the issue was not there from the beginning. So it must be building up...

    The error is gone when I hit revert, but the situation is NOT comforting. The aquota files are pretty new, so something is using them.

    I'm using the EXT4 filesystem on all disks, and I have one mergerfs pool created by the Union Filesystems plugin.


    Would it help to attach the full output of the error, or is it possible to set the OMV log to debug so it would be possible to see which module is failing the quota check?

    Sure, here are the results:

    The last one (/srv/dev-disk-by-label-basic) is the new filesystem I'm trying to create.

    I can create the filesystem, and apparently also mount it, but I cannot Apply the configuration - it fails. But if I hit "Revert", it stays mounted.


    However, I think you are right about "some quota going on". There are some strange "quota" files on all filesystems:

    Code
    root@openmediavault:~# ls -l /srv/dev-disk-by-uuid-8221b09f-a0ac-4ae1-b216-47641cebe874/
    total 20
    -rw------- 1 root users 6144 Mar 26 07:10 aquota.group
    -rw------- 1 root users 0 Mar 25 22:19 aquota.group.new
    -rw------- 1 root users 7168 Mar 26 07:10 aquota.user
    -rw------- 1 root users 0 Mar 25 22:19 aquota.user.new
    drwxrws---+ 7 root users 4096 Mar 28 08:43 backup
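
    To see whether quota is actually active anywhere, a couple of read-only checks from the shell might help (quotaon is part of the standard quota package):

    Code
    # Print the current on/off state of user and group quotas for all
    # filesystems listed in /etc/fstab:
    quotaon -pa

    # Show which fstab entries carry the journalled quota mount options:
    grep -E 'usrjquota|grpjquota' /etc/fstab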

    No, never - it just suddenly started to fail when adding a new disk. But it's not the new disk that fails - it's like all the existing filesystems... I do not use quota. It's the second time now in OMV5 - can I activate debugging somehow?


    Thanks

    Some time last year I started getting this crazy error when I was making changes to file systems. I reinstalled OMV and reconfigured everything from scratch, and everyone was happy. But now... it's BACK! What is it?


    These are the first few lines of the error when I hit Apply. I can't find anything in the logfiles - only on the screen.


    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color quota 2>&1' with exit code '1': omv.privat.lan:
    ----------
        ID: quota_off_no_quotas_07486652-3d63-4830-9658-bc31a880af36
        Function: cmd.run
        Name: quotaoff --group --user /dev/disk/by-uuid/07486652-3d63-4830-9658-bc31a880af36 || true
        Result: True
        Comment: Command "quotaoff --group --user /dev/disk/by-uuid/07486652-3d63-4830-9658-bc31a880af36 || true" run
        Started: 08:09:20.759265
        Duration: 89.242 ms
        Changes:
        ----------
        pid:
            2782986
        retcode:
            0
        stderr:
        stdout:
    ----------
        ID: quota_check_no_quotas_07486652-3d63-4830-9658-bc31a880af36
        Function: cmd.run
        Name: quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/07486652-3d63-4830-9658-bc31a880af36
        Result: True
        Comment: Command "quotacheck --user --group --create-files --no-remount --verbose /dev/disk/by-uuid/07486652-3d63-4830-9658-bc31a880af36" run
        Started: 08:09:20.849234
        Duration: 109.904 ms
        Changes:
        ----------
        pid:
            2782988
        retcode:
            0
        stderr:
            quotacheck: Scanning /dev/mapper/wd30--raid5-media2 [/srv/dev-disk-by-uuid-07486652-3d63-4830-9658-bc31a880af36]
            quotacheck: Checked 1877 directories and 18759 files
        stdout:

    I have a 16 GB system drive for openmediavault which is reporting 98% full.


    # df -h /dev/sda2

    Filesystem Size Used Avail Use% Mounted on

    /dev/sda2 15G 14G 346M 98% /



    However, disk usage is only 2.5 GB:


    # du -d1 -x -h / | sort -h

    4.0K /export

    4.0K /mnt

    8.0K /media

    8.0K /sharedfolders

    16K /lost+found

    16K /opt

    24K /home

    128K /root

    1.8M /srv

    8.5M /etc

    49M /boot

    673M /var

    1.8G /usr

    2.5G /


    Where are the remaining 11.5 GB that it's reporting as used?
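
    Two usual suspects for a df/du mismatch like this are deleted files that are still held open by a running process, and data hidden underneath a mount point. A rough way to check both (lsof needs to be installed; /mnt/rootonly is just a scratch directory I made up for the bind mount):

    Code
    # Open files that have already been deleted (link count 0) keep consuming
    # space until the owning process is restarted:
    lsof +L1 | grep -i deleted

    # Look for data hidden under mount points by bind-mounting / elsewhere and
    # measuring it without the overlaying filesystems:
    mkdir -p /mnt/rootonly
    mount --bind / /mnt/rootonly
    du -d1 -x -h /mnt/rootonly | sort -h
    umount /mnt/rootonly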

    Why?

    I'm installing on a remote server via a KVM console, and I do not have physical access to the server. The server has twelve disks and one SATA DOM for OMV. I can't disable the SCSI controller in the BIOS as the SATA DOM resides on it.

    The disks contain important data, and currently there are no backups available.

    Does the installer really delete all the disks, or is it just a precaution? Are there any other mitigations I could use?

    /T

    Just upgraded to release 1.1 (Kralizec); the plugin is now openmediavault-route 1.0.


    Still same issue.


    The GUI gives you the ability to create more than one static route, but it fails with more than one. Is anyone running more than one static route?


    Can this be done without the plugin, directly from the commandline instead?
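
    In case it helps as a workaround, routes can be added by hand with iproute2 and made persistent with post-up lines in /etc/network/interfaces. The subnets, gateway and interface name below are placeholders, not the actual CrashPlan networks, and OMV manages the interfaces file, so it may overwrite manual edits:

    Code
    # Add the routes immediately (placeholder subnets and gateway - adjust):
    ip route add 192.0.2.0/24 via 192.168.1.1
    ip route add 198.51.100.0/24 via 192.168.1.1

    # To survive a reboot, append post-up lines to the relevant iface stanza
    # in /etc/network/interfaces, e.g.:
    #   iface eth0 inet static
    #       post-up ip route add 192.0.2.0/24 via 192.168.1.1
    #       post-up ip route add 198.51.100.0/24 via 192.168.1.1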

    Hi,


    I'm trying to add three static routes for CrashPlan, but only the first one succeeds. If I try to add a second subnet, it fails:



    Shouldn't it be possible to add multiple static routes?


    /THK

    Thank you for clarifying the ACL - the ACLs are surely still there, but I'm pretty sure that OMV doesn't rediscover the ACL user selection in the GUI when recreating the share (but I'm not 100% on this - I need to test it).


    However, back to the question regarding the config file... Is it a no? Imagine you have 50 shared folders on volumes you need to move to another volume. I would hate to go through the process of recreating all the shares (unless it's scriptable). This also touches the topic of "replacing a disk".
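
    If it does come down to editing the config directly, here is a rough sketch of the kind of script I have in mind. It assumes the shared folder definitions live in /etc/openmediavault/config.xml as <sharedfolder> entries that reference a filesystem via <mntentref> and a relative path via <reldirpath>, and that xmlstarlet is installed; the share name and UUID below are placeholders, so please verify the element names against your own config before trusting this:

    Code
    # Back up the config first:
    cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak

    # Find the <mntent> uuid of the target (new) filesystem:
    grep -B2 -A8 '<mntent>' /etc/openmediavault/config.xml | less

    # Point the shared folder "myshare" at the new filesystem by rewriting its
    # mntentref (NEW_MNTENT_UUID is a placeholder):
    xmlstarlet ed -L \
      -u "//shares/sharedfolder[name='myshare']/mntentref" -v "NEW_MNTENT_UUID" \
      /etc/openmediavault/config.xml

    After a change like this, the affected services (Samba, NFS, etc.) would presumably still need to be re-saved or re-deployed so the generated configs pick up the new path.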

    Dear all


    Does anyone have a hint on how to change the location of a shared folder without having to delete and recreate the share in the OMV GUI? There are a lot of ACL settings I would like to preserve, and I would like to avoid going through the process of recreating the share.


    In short: I have rsync'ed the folder to another volume, and I would like to move the share definition to the new folder while preserving all user rights - the easy way :)
    My guess is that I can change the share destination in the config files... or? :?


    Kind Regards
    Thomas