Dispatcher service and webui

librenms version: [21.1.0-7-ge42a6e36a - Tue Feb 02 2021 10:39:20 GMT-0700]
OS version ubuntu 20.04.2

When setting up the dispatcher service, the dispatcher runs discovery every minute instead of at the configured interval.

The webui configuration section for the poller, https://hostname/settings/poller/distributed, has been configured, but the settings do not appear to be saved anywhere. Am I correct that the settings for the dispatcher service should be stored in the poller_cluster table under the correspondingly named poller?



Yes. When you start the dispatch service by hand, it should report the values it is using. What does it say?

This is what it shows when starting:

librenms2(INFO):Groups: [0]
librenms2(INFO):Discovery QueueManager created: 12 workers, 21600s frequency
Discovery_0-1(INFO):Creating queue discovery:0
Discovery_0-1(INFO):Created redis queue with socket_timeout of 60s
librenms2(INFO):Using pure python SQL client
librenms2(INFO):Groups: [0]
librenms2(INFO):Services QueueManager created: 8 workers, 300s frequency
Services_0-1(INFO):Creating queue services:0
Services_0-1(INFO):Created redis queue with socket_timeout of 60s
librenms2(INFO):Using pure python SQL client
librenms2(INFO):Groups: [0]
librenms2(INFO):Ping QueueManager created: 1 workers, 60s frequency
Ping_0-1(INFO):Creating queue ping:0
librenms2(INFO):Using pure python SQL client
Ping_0-1(INFO):Created redis queue with socket_timeout of 60s
librenms2(INFO):LibreNMS Service: librenms2-0b4f32aa-6ce2-11eb-910c-3925afdf6a9d started!
librenms2(INFO):Poller group 0 (default). Using Python 3.8.5 and redis locks and queues
librenms2(INFO):Maintenance tasks will be run every 1 day, 0:00:00
librenms2(INFO):librenms2 is now the master dispatcher
Poller_0-1(INFO):Polling device 106

The problem is that the web interface is not saving these values.

With some more testing, I found that if I update the MySQL table poller_cluster with the values I want, the poller service will read those values at startup.
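For reference, the manual update can be sketched like this. The column names (poller_name, discovery_frequency, discovery_workers) are assumptions based on a 21.x schema and may differ on your install, so check with `DESCRIBE poller_cluster;` first:

```shell
# Hedged sketch: set the discovery interval and worker count for one
# dispatcher node directly in the database. Column names are assumed;
# verify them with `DESCRIBE poller_cluster;` before running.
mysql -u librenms -p librenms <<'SQL'
UPDATE poller_cluster
SET discovery_frequency = 21600,
    discovery_workers = 12
WHERE poller_name = 'librenms2';
SQL
```

Since the dispatcher only reads these values at startup, restart the service afterwards for the change to take effect.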

With that information, it appears I have two issues.

  1. The webui is not updating the poller_cluster MySQL table.
  2. Looking at the log, it appears the polling queuing is not working correctly; this might be because there are still some null values in the poller_cluster row for this poller node.

FYI, your screenshot shows the global settings UI used to set the default values. If you have set a value in the poller settings, it will override these defaults (null indicates the system default is used).

I think I have found the issue. After digging some more, there appears to be an error in discovery when the entity-state discovery module is enabled. This error causes discovery to never finish, so discovery is queued again over and over.

When I run discovery.php -h 7 by hand, this is the error message printed to the screen (I grabbed more than just the error so there is some context of what is happening):

Load disco module entity-physical

Caching OIDs: entPhysicalEntry entAliasMappingIdentifier…

Runtime for discovery module ‘entity-physical’: 0.4070 seconds with 6760 bytes
SNMP: [1/0.05s] MySQL: [37/0.02s] RRD: [0/0.00s]

Unload disco module entity-physical

Load disco module entity-state

Entity States: +++++++++++++++++++++++
In Grammar.php line 136:

Argument 1 passed to Illuminate\Database\Grammar::parameterize() must be of the type array, string given, called in /opt/librenms/vendor/laravel/framework/src/Illuminate/Database/Query/Grammars/Grammar.php on line 886

When I disabled the entity-state module, discovery finished and updated the timestamp in the database.

What is the next step I need to take here, since I am not very familiar with the Laravel framework?
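To gather more detail for a bug report, discovery can be re-run with debug output restricted to the failing module. The -h, -m, and -d flags are standard discovery.php options, but verify against your version by running ./discovery.php with no arguments:

```shell
cd /opt/librenms
# -h: device id, -m: run only the named discovery module, -d: debug output
./discovery.php -h 7 -m entity-state -d
```

The debug output should include the SNMP data and the SQL being built, which is usually what the developers ask for in an issue.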


I have tracked this issue down to the configuration of the Mellanox switch being polled. It appears that LibreNMS has trouble polling the entity-state and route modules against this switch. The one thing different about this switch is that it has a second VRF in addition to the default one.
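As a workaround until the underlying bug is fixed, the failing discovery modules can be disabled globally. This sketch assumes the lnms config:set command, which exists on recent LibreNMS versions; verify with `lnms config:get discovery_modules` first:

```shell
# Hedged workaround: disable the failing discovery modules globally.
# Verify the command and key names exist on your version before running.
cd /opt/librenms
./lnms config:set discovery_modules.entity-state false
./lnms config:set discovery_modules.route false
```

If you only want to skip these modules for the one Mellanox switch, the per-device module toggles in the webui (under the device's settings) are a narrower option than a global change.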
