Can't login with correct credentials on web interface

    • Resolved
    • OMV 2.x

      Hi all!
      Since last night I can no longer sign in to OMV!
      If I use the wrong credentials, it gives an error. If I use the right ones instead, it simply reloads the login page and asks me to enter them again!
      How can I resolve this? :(
      I can log in without problems over SSH; the only problem is the web interface.
      Intel G4400 - Asrock H170M Pro4S - 8GB ram - 2x4TB WD RED in RAID1 - ZFS Mirror 2x6TB Seagate Ironwolf
      OMV 4.1.4 - Kernel 4.14 backport 3 - omvextrasorg 4.1.2


    • True, so I searched in the folder, and I found three log files bigger than 8 GB!

      Source Code

      root@NAS:/var# du -sh /var/log/* | sort -hr | head -n 20
      8.3G /var/log/messages.1
      8.3G /var/log/kern.log.1
      8.2G /var/log/syslog
      123M /var/log/syslog.1

      What could it be?!
    • I would delete the .1 files to start. Those are archives. I would also look in syslog to see what is logging so much. Maybe by posting tail -n 200 /var/log/syslog, I could tell.
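      A quick way to see which message is dominating a log, as a sketch: the pipeline below assumes the standard syslog line format ("Mon DD HH:MM:SS host process: message"), and sample.log is a throwaway stand-in for /var/log/syslog.

```shell
# Build a tiny sample log for the demo; on the NAS, point the
# pipeline at /var/log/syslog instead of sample.log.
cat > sample.log <<'EOF'
Jul 18 00:59:10 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit
Jul 18 00:59:40 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit
Jul 18 01:00:01 NAS /USR/SBIN/CRON[14207]: (root) CMD (/usr/sbin/omv-mkgraph)
EOF

# Drop the timestamp (fields 1-3) so identical messages group together,
# then count and rank them: the top line is the noisiest logger.
cut -d' ' -f4- sample.log | sort | uniq -c | sort -rn | head -n 10
```

      On a multi-gigabyte syslog this takes a while but tells you at a glance what is flooding the file.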
      omv 4.1.13 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • ryecoaaron wrote:

      I would delete the .1 files to start. Those are archives. I would also look in syslog to see what is logging so much. Maybe by posting tail -n 200 /var/log/syslog, I could tell.


      I did a tail on syslog.1 and kern.log.1;
      this is the error I found in both of them:

      Source Code

      Jul 16 11:09:07 NAS kernel: [ 320.973111] ACPI Exception: AE_NOT_FOUND, while evaluating GPE method [_L6F] (20160108/evgpe-592)
      Jul 16 11:09:07 NAS kernel: [ 320.974151] ACPI Error: [PGRT] Namespace lookup failure, AE_NOT_FOUND (20160108/psargs-359)


      With your command, these are the errors I got:

      Source Code

      Jul 18 00:59:10 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 00:59:40 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:00:01 NAS /USR/SBIN/CRON[14207]: (root) CMD (/usr/sbin/omv-mkgraph >/dev/null 2>&1)
      Jul 18 01:00:01 NAS rrdcached[2632]: Received FLUSHALL
      Jul 18 01:00:10 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:00:40 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:01:10 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:01:40 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:02:10 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:02:40 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:03:10 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:03:40 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:04:10 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:04:12 NAS rrdcached[2632]: flushing old values
      Jul 18 01:04:12 NAS rrdcached[2632]: rotating journals
      Jul 18 01:04:12 NAS rrdcached[2632]: started new journal /var/lib/rrdcached/journal/rrd.journal.1468796652.581411
      Jul 18 01:04:12 NAS rrdcached[2632]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1468789452.581409
      Jul 18 01:04:40 NAS monit[2940]: 'rootfs' space usage 100.0% matches resource limit [space usage>80.0%]
      Jul 18 01:17:01 NAS /USR/SBIN/CRON[14562]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
      Jul 18 01:17:01 NAS postfix/postsuper[14565]: fatal: scan_dir_push: open directory hold: No such file or directory
      Jul 18 02:09:01 NAS /USR/SBIN/CRON[15490]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime)
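      Worth noting: the monit lines above point at the original login problem too. A rootfs at 100% is exactly what makes the web login loop, since PHP can no longer write its session files. A quick sketch to check, run on the NAS:

```shell
# Show how full the root filesystem is; 100% use on /
# explains the login page reloading instead of logging in.
df -h /
```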
    • Try the following command:

      echo "disable" > /sys/firmware/acpi/interrupts/gpe6F

      If that fixes the problem, then I would add that line to /etc/rc.local
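      A minimal sketch of what that /etc/rc.local could look like, assuming the Debian-style rc.local that OMV uses (the file must stay executable and end with exit 0):

```shell
#!/bin/sh -e
# /etc/rc.local - runs once at the end of boot.
# Disable the misfiring GPE (matching the _L6F handler in the
# kernel errors above) so it stops flooding kern.log/syslog.
echo "disable" > /sys/firmware/acpi/interrupts/gpe6F
exit 0
```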
    • The tail command shows the end of the log and the messages are timestamped. If the messages stop appearing, then it is fixed.

      To make it simple:

      echo "disable" > /sys/firmware/acpi/interrupts/gpe6F
      echo "" > /var/log/syslog (this will wipe syslog but it is not really that important)
      tail -f /var/log/syslog
      The last command lets you watch all new syslog entries. If you don't see any of the error messages anymore, add the echo "disable" line to /etc/rc.local. Ctrl-C exits tail.
    • I have the same problem again :(
      Here's the log:
      13G /var/log/syslog
      5.5G /var/log/messages
      5.5G /var/log/kern.log
      505M /var/log/messages.1
      505M /var/log/kern.log.1
      235M /var/log/messages.2.gz
      235M /var/log/kern.log.2.gz
      123M /var/log/syslog.1

      This time syslog is endlessly filled with this error:
      Aug 1 07:10:39 NAS kernel: [ 7222.875427] ACPI Exception: AE_NOT_FOUND, while evaluating GPE method [_L6F] (20160108/evgpe-592)
      Aug 1 07:10:39 NAS kernel: [ 7222.876384] ACPI Error: [PGRT] Namespace lookup failure, AE_NOT_FOUND (20160108/psargs-359)
      Aug 1 07:10:39 NAS kernel: [ 7222.876385] ACPI Error: Method parse/execution failed [\_GPE._L6F] (Node ffff8802654e36f8), AE_NOT_FOUND (20160108/psparse-542)
    • Did you add echo "disable" > /sys/firmware/acpi/interrupts/gpe6F to /etc/rc.local? Otherwise, the GPE has to be disabled again after every reboot.
    • Sorry to resume this thread again, but this time I have a problem with the /in folder:

      Source Code

      root@NAS:/var# du -sh /* | sort -hr
      349G /media
      26G /in
      1.7G /usr
      397M /lib
      root@NAS:/var# du -sh /in/* | sort -hr
      23G /in/in
      1.7G /in/usr
      397M /in/lib
      root@NAS:/var# du -sh /in/in/* | sort -hr
      23G /in/in/in
      64M /in/in/boot
      15M /in/in/bin


      If I add another /in to the command, it always gives me the same result :(
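      One way to tell whether those nested /in/in/... levels are real duplicate copies (e.g. a recursive cp or rsync gone wrong) or a single directory seen through a symlink loop: compare inode numbers between levels. A sketch, demonstrated on a throwaway tree (nest/ is a hypothetical stand-in; replace it with /in on the NAS):

```shell
# Build a throwaway nested tree standing in for /in/in/...
mkdir -p nest/in/in

# Real copies show different inode numbers per level; a loop
# would show a symlink, which `find -type l` would list.
ls -ldi nest/in nest/in/in
find nest -type l
```

      Note that du does not follow symlinks by default, so if each level really reports ~23G, those are most likely real duplicated files filling the disk, and the stray tree can be deleted after double-checking that nothing is mounted inside it.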