I’ve been using LibreNMS since January 2022. I’m pretty sure it was working last week, and definitely the week before (so sometime in July 2022), but today the “All Ports” feature shows no data.
The output of ./validate.php:
[OK] Composer Version: 2.3.10
[OK] Dependencies up-to-date.
[OK] Database connection successful
[OK] Database Schema is current
[OK] SQL Server meets minimum requirements
[OK] lower_case_table_names is enabled
[OK] MySQL engine is optimal
[OK] Database schema correct
[OK] MySQL and PHP time match
[FAIL] Both Dispatcher Service and Python Wrapper were active recently, this could cause double polling
[OK] Dispatcher Service is enabled
[OK] Locks are functional
[OK] Python poller wrapper is polling
[OK] Redis is unavailable
[OK] rrd_dir is writable
[OK] rrdtool version ok
If you are having troubles with discovery/polling include the pastebin output of:
./discovery.php -h HOSTNAME -d | ./pbin.sh
./poller.php -h HOSTNAME -r -f -d | ./pbin.sh
Maybe an automatic update changed how polling is done? I see new files from July 30, 2022.
drwxr-xr-x 20 librenms librenms 4096 Jul 30 00:15 LibreNMS
drwxr-xr-x 9 librenms librenms 4096 Jul 30 00:15 includes
-rwxrwxr-x 1 librenms librenms 4541 Jul 30 00:15 discovery.php
-rw-rw-r-- 1 librenms librenms 14875 Jul 30 00:15 daily.php
-rwxrwxr-x 1 librenms librenms 5647 Jul 30 00:15 validate.php
-rwxrwxr-x 1 librenms librenms 5903 Jul 30 00:15 poller.php
drwxrwxr-x+ 2 librenms librenms 4096 Jul 31 00:00 logs
I was fiddling with Gear … Global Settings … Poller. I selected Distributed Poller, set Default Poller Group = General, and rebooted; that cleared the poller issues, but the All Ports data didn’t reappear. Later, validate showed the problem again: “FAIL: Both Dispatcher Service and Python Wrapper were active recently, this could cause double polling”.
And below it, the green OKs show that the “Dispatcher Service is enabled” and the “Python poller wrapper is polling”.
I ended up editing the cron file on my install and commenting out most of the cron entries, per this link: Dispatcher Service (RC) - LibreNMS Docs. This isn’t very clearly documented, though.
At this point I no longer have both the dispatcher service and the Python wrapper running; the Python wrapper no longer shows as running. We’ll see in a few hours whether discovery and everything else still works properly.
The detection is new. Crazy how many people have discovered they were double polling.
This is my cron file:
33 */6 * * * librenms /opt/librenms/cronic /opt/librenms/discovery-wrapper.py 1
*/5 * * * * librenms /opt/librenms/discovery.php -h new >> /dev/null 2>&1
*/5 * * * * librenms /opt/librenms/cronic /opt/librenms/poller-wrapper.py 8
* * * * * librenms /opt/librenms/alerts.php >> /dev/null 2>&1
*/5 * * * * librenms /opt/librenms/poll-billing.php >> /dev/null 2>&1
01 * * * * librenms /opt/librenms/billing-calculate.php >> /dev/null 2>&1
*/5 * * * * librenms /opt/librenms/check-services.php >> /dev/null 2>&1
*/5 * * * * librenms /opt/librenms/services-wrapper.py 1
What should I comment out to remove the warning: “Both Dispatcher Service and Python Wrapper were active recently, this could cause double polling”?
First, you need to decide which polling method you want to use: the dispatcher service or the standard pollers.
If you want to use the dispatcher service, you can remove the standard pollers from the web GUI.
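If it helps, here is a small sketch of that cron edit, run against a local copy of the file. On a real install the file is typically /etc/cron.d/librenms; that path and exactly which entries the dispatcher replaces are assumptions, so check the Dispatcher Service docs for your version before editing:

```shell
# Demo on a local copy: comment out the cron-based Python wrapper
# entries so only the dispatcher service polls. On a real install,
# edit /etc/cron.d/librenms instead (path assumed from a default
# install) and leave any entries the dispatcher does not replace.
cat > librenms.cron <<'EOF'
33 */6 * * * librenms /opt/librenms/cronic /opt/librenms/discovery-wrapper.py 1
*/5 * * * * librenms /opt/librenms/cronic /opt/librenms/poller-wrapper.py 8
*/5 * * * * librenms /opt/librenms/poll-billing.php >> /dev/null 2>&1
*/5 * * * * librenms /opt/librenms/services-wrapper.py 1
EOF

# Comment every line that runs one of the Python wrappers;
# the billing entry is left untouched.
sed -i -E 's/^([^#].*wrapper\.py.*)$/# \1/' librenms.cron
cat librenms.cron
```

After that, validate should stop seeing wrapper activity once the "recently active" window has passed.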
Thank you for your response.
Would it be deactivated here?
How can I deactivate the dispatcher service?
Here you can remove the standard pollers.
To stop the dispatcher service, stop the service below.
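On a systemd-based install that usually means something like the following (a sketch; the unit name librenms.service is taken from the dispatcher docs and may differ on your setup):

```
# Stop the dispatcher now and keep it from starting at boot
sudo systemctl stop librenms.service
sudo systemctl disable librenms.service

# Verify it is no longer active
systemctl is-active librenms.service
```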
Poller is deactivated
In addition, I have also stopped the librenms service, but I still get the following message when I run ./validate.php: “[FAIL] Both Dispatcher Service and Python Wrapper were active recently, this could cause double polling”.
Is it necessary to make any further configuration?
After disabling it, validate gives the same error if you run it immediately. Did you try again after some time?
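While you wait, you can also check whether either polling method is still actually running (a sketch; the process and unit names assume a default install):

```
# Any cron-based Python wrapper processes still alive?
pgrep -af poller-wrapper.py

# Is the dispatcher unit really stopped?
systemctl is-active librenms.service
```

If both come back empty/inactive, the FAIL should clear once the recent-activity window passes.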
Also, please share the screenshot below from your setup.
After changing the configuration yesterday, the error persists.
Ah, I don’t have my test VM right now. I’ll try to reproduce the issue in a test VM next week.
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.