Notifications do not work



    • Notifications do not work

      Hi all,

      I am testing openmediavault as my future NAS system :) I did a clean install of the latest image (4.1.3) and upgraded; after a reboot I am now on kernel 4.17.
      I did a bit of testing and then configured notifications. I could send a test email, and it appeared in my email account immediately, so everything should work.

      I then removed one HDD for testing purposes and saw this in the syslog:

      Source Code

      Aug 7 10:34:19 openmediavault zed: eid=49 class=history_event pool_guid=0xF81E337B8C4A1044
      Aug 7 10:34:19 openmediavault zed: eid=50 class=history_event pool_guid=0xF81E337B8C4A1044
      Aug 7 10:34:56 openmediavault kernel: [11269.184051] ata4: exception Emask 0x50 SAct 0x0 SErr 0x4090800 action 0xe frozen
      Aug 7 10:34:56 openmediavault kernel: [11269.184139] ata4: irq_stat 0x00400040, connection status changed
      Aug 7 10:34:56 openmediavault kernel: [11269.184200] ata4: SError: { HostInt PHYRdyChg 10B8B DevExch }
      Aug 7 10:34:56 openmediavault kernel: [11269.184260] ata4: hard resetting link
      Aug 7 10:34:57 openmediavault kernel: [11269.895928] ata4: SATA link down (SStatus 0 SControl 300)
      Aug 7 10:35:02 openmediavault kernel: [11274.975900] ata4: hard resetting link
      Aug 7 10:35:02 openmediavault kernel: [11275.289825] ata4: SATA link down (SStatus 0 SControl 300)
      Aug 7 10:35:07 openmediavault kernel: [11280.351902] ata4: hard resetting link
      Aug 7 10:35:07 openmediavault kernel: [11280.665755] ata4: SATA link down (SStatus 0 SControl 300)
      Aug 7 10:35:07 openmediavault kernel: [11280.665770] ata4.00: disabled
      Aug 7 10:35:07 openmediavault kernel: [11280.665789] ata4: EH complete
      Aug 7 10:35:07 openmediavault kernel: [11280.665812] ata4.00: detaching (SCSI 3:0:0:0)
      Aug 7 10:35:07 openmediavault kernel: [11280.671606] sd 3:0:0:0: [sdc] Synchronizing SCSI cache
      Aug 7 10:35:07 openmediavault kernel: [11280.671625] sd 3:0:0:0: [sdc] Synchronize Cache(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
      Aug 7 10:35:07 openmediavault kernel: [11280.671626] sd 3:0:0:0: [sdc] Stopping disk
      Aug 7 10:35:07 openmediavault kernel: [11280.671629] sd 3:0:0:0: [sdc] Start/Stop Unit failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
      Aug 7 10:35:16 openmediavault zed: eid=51 class=io pool_guid=0xF81E337B8C4A1044 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7FP7R1T-part1
      Aug 7 10:35:16 openmediavault zed: eid=52 class=io pool_guid=0xF81E337B8C4A1044 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7FP7R1T-part1
      Aug 7 10:35:16 openmediavault zed: eid=53 class=io pool_guid=0xF81E337B8C4A1044 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7FP7R1T-part1
      Aug 7 10:35:16 openmediavault zed: eid=54 class=probe_failure pool_guid=0xF81E337B8C4A1044 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7FP7R1T-part1
      Aug 7 10:35:16 openmediavault zed: eid=55 class=vdev.too_small pool_guid=0xF81E337B8C4A1044 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7FP7R1T-part1
      Aug 7 10:35:16 openmediavault zed: eid=56 class=statechange pool_guid=0xF81E337B8C4A1044 vdev_path=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K7FP7R1T-part1 vdev_state=UNAVAIL
      Aug 7 10:36:06 openmediavault nmbd[7853]: [2018/08/07 10:36:06.004300, 0] ../source3/nmbd/nmbd_namequery.c:109(query_name_response)
      Aug 7 10:36:06 openmediavault nmbd[7853]: query_name_response: Multiple (2) responses received for a query on subnet 10.0.0.132 for name WORKGROUP<1d>.
      Aug 7 10:36:06 openmediavault nmbd[7853]: This response was from IP 10.0.0.10, reporting an IP address of 10.0.0.10.
      Aug 7 10:36:26 openmediavault zed: eid=57 class=history_event pool_guid=0xF81E337B8C4A1044
      Aug 7 10:36:26 openmediavault zed: eid=58 class=history_event pool_guid=0xF81E337B8C4A1044
      Aug 7 10:37:02 openmediavault zed: eid=59 class=history_event pool_guid=0xF81E337B8C4A1044
      Aug 7 10:37:02 openmediavault zed: eid=60 class=history_event pool_guid=0xF81E337B8C4A1044
      Aug 7 10:37:46 openmediavault zed: eid=61 class=history_event pool_guid=0xF81E337B8C4A1044
      Aug 7 10:37:46 openmediavault zed: eid=62 class=history_event pool_guid=0xF81E337B8C4A1044
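
      (Side note: as far as I understand it, the zed lines above are the pool's event log being relayed to syslog, and the same eid/class pairs should be visible directly with zpool events. This is an assumption about a stock ZFS-on-Linux setup, not anything OMV-specific:)

      Source Code

      # zed, the ZFS event daemon, consumes events from the kernel's event log.
      # Listing that log directly should show the same eid/class pairs as above:
      zpool events
      # Verbose mode prints the full details (pool GUID, vdev path, state) per event:
      zpool events -v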

      The behavior is correct: the failure was logged by the kernel, and my ZFS plugin also showed that something was going wrong.

      Unfortunately I did not receive any email notification. I waited for half an hour, but nothing arrived.
      This concerns me a bit, because the failure of an HDD is a really critical issue and should be reported immediately.
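
      For what it's worth, I also looked at how zed itself sends mail. If I understand it correctly, on a plain ZFS-on-Linux install zed only mails events when an address is set in /etc/zfs/zed.d/zed.rc - a sketch of the settings I believe matter (the values are just examples, not my actual config):

      Source Code

      # /etc/zfs/zed.d/zed.rc - settings that control zed's email notifications.
      # Address zed mails event reports to; if unset or commented out,
      # zed logs events but never sends any mail.
      ZED_EMAIL_ADDR="root"
      # Program used to deliver the mail (must work on this host).
      ZED_EMAIL_PROG="mail"
      # Minimum seconds between notifications for a similar event.
      ZED_NOTIFY_INTERVAL_SECS=3600
      # If set to 1, send notifications even while the pool is still healthy.
      ZED_NOTIFY_VERBOSE=1

      If ZED_EMAIL_ADDR is unset there, that would at least match what I am seeing: events in the syslog, but no mail.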

      Any ideas or suggestions?

      Thanks S




    • Hmm, ok, I think we have a misunderstanding :)
      I do not want to monitor the ZFS pool... I only want to know why a notification is not sent out when an HDD is disconnected. I think this should be reported immediately in any case - or is this not part of the notification service?

      Thanks S
    • alex.ba wrote:

      or is this not part of the notification service?
      Reading that thread, it's not part of the plugin and therefore not part of the notification service; perhaps @subzero79 could shed more light on this for you.

      My understanding of ZFS is that if a drive is down, failed, or disconnected, the report would be a degraded pool, not "disk A has been disconnected"; the fact that the pool has been flagged as degraded is sufficient for a notification.
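
      Until you get to the bottom of it, one stopgap could be to poll the pool health yourself and mail on anything other than healthy - a minimal sketch, assuming a working local mail command (the script and its schedule are my own illustration, not something OMV ships):

      Source Code

      #!/bin/sh
      # Hypothetical cron job: mail a report if any pool is not healthy.
      # "zpool status -x" prints "all pools are healthy" when nothing is
      # wrong; otherwise it prints the status of the troubled pool(s).
      STATUS="$(zpool status -x)"
      if [ "$STATUS" != "all pools are healthy" ]; then
          echo "$STATUS" | mail -s "ZFS pool degraded on $(hostname)" root
      fi

      Run from cron every few minutes, that would at least surface a pulled drive, even if it can't name the disk the way you'd like.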

      I don't use ZFS on OMV, but I did on nas4free, and from recollection, when a drive failed or was pulled, an email notification was sent about the pool being degraded. I know because I had a drive do just that; after some work and investigation the drive was fine, but for whatever reason it was reported with an error once I had logged into the NAS.