Run port-poller but not port-discovery

My machine:

LibreNMS | 1.69
DB Schema | 2020_07_27_00522_alter_devices_snmp_algo_columns (188)
PHP | 7.4.12
Python | 3.8.5
MySQL | 10.4.17-MariaDB-1:10.4.17+maria~focal
RRDTool | 1.7.2
SNMP | NET-SNMP 5.8
OpenSSL |

[OK] Composer Version: 1.10.17
[OK] Dependencies up-to-date.
[OK] Database connection successful
[OK] Database schema correct

My problem:
I very much like the auto-discovery features of LibreNMS, but I want to monitor only specific switch ports to save disk space and CPU (energy). E.g. standard access ports for users don’t need to be monitored, only special ports such as those with WiFi APs connected. For that, I disabled port discovery in the global settings (/settings/poller/poller_modules) via the WebUI and deleted the unwanted ports. But as long as the port poller is running, all the ports that I removed get re-added within a few minutes. When I disable the port-poller module, ports don’t get monitored/polled and no new ports get added. It seems that the port poller is also discovering and re-adding “missing” ports.
Any idea?

Hi,

Yes, this is expected: the poller polls all ports, which incidentally also re-adds ports if they are missing.
What you are trying to do could be done this way:

  • Enable discovery for ports.
  • Keep the poller for ports as well.
  • Go into your device settings, in the ports panel, and disable all the ports you don’t need.
  • Depending on the ratio of enabled/disabled ports, you might even want to enable “selected port polling” to really poll only the ports you want and not all of them. But beware that selective polling is usually slower and more CPU intensive than polling all the ports at once (a lot of small requests instead of one big one).
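As a sketch of the port-disabling step above: the per-port disable flag lives in the `ports` table (`disabled`, `device_id` and `ifName` are real columns in the LibreNMS schema), so bulk changes can also be made with SQL directly. The device id and port names below are made-up examples; back up the database before running anything like this:

```sql
-- Disable every port on one device except the uplinks we still want polled
UPDATE ports
   SET disabled = 1
 WHERE device_id = 42
   AND ifName NOT IN ('Te1/0/1', 'Te2/0/1');
```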
Bye

Hi,
thank you for sharing your ideas.
I have already played around a lot. My pain point is that as soon as a port is known to LibreNMS, its RRD files get created at their final size. If RRD files were sparse and occupied only the disk space required for the data actually saved, simply keeping all the disabled ports would be fine. Consider a stacked switch with 8 stack members, each having 48 access ports plus one uplink. Of these 392 ports, there might be only 40 at most that I want to monitor, i.e. only about 1/10 of all ports. The rest is a huge waste of disk space and CPU cycles.
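Putting rough numbers on that ratio (just shell arithmetic on the figures above):

```shell
total=$((8 * (48 + 1)))    # 8 stack members x (48 access ports + 1 uplink) = 392
wanted=40                  # ports actually worth monitoring
awk -v t="$total" -v w="$wanted" \
    'BEGIN { printf "polling %d of %d ports = %.1f%%\n", w, t, 100 * w / t }'
```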
Unfortunately, disabling port discovery does not really do what the term promises when polling also discovers. “Polling” should only fetch monitoring data, not discover additional monitoring targets. Maybe this is a performance optimisation within LibreNMS, but when I disable port discovery I really want it disabled.
When I installed LibreNMS I was deeply impressed by the amount of data that was discovered automatically, which is really helpful. That was the main reason I put more effort into this NMS. But besides all these very good features, I need the option to keep large parts of the active ports out of scope.
If “selected port polling” is the only option, then I will give it a try. Can someone tell me whether it would be possible to (manually) delete the RRD files of disabled ports? Would that confuse LibreNMS? What will happen if I decide to enable some of these ports later on?
BR

Hello

You can have LibreNMS automatically delete unused RRDs. You won’t be able to avoid the creation of all the RRDs at first discovery and during the first few polls. Then you disable the ports you don’t need (using the GUI, or even SQL directly in the DB) and enable selected port polling. After a few days, the daily.sh script will do the cleanup according to your settings:
https://docs.librenms.org/Support/Cleanup-options/
This will end up exactly as you wanted, even if it is probably a longer way than you expected. But it is the only way to achieve this for the moment.
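For reference, the setting the cleanup reads is `rrd_purge` (in days). In `config.php` that would look like the fragment below; the value 30 is just an example, see the Cleanup options page linked above for details:

```php
// config.php: purge RRD files not updated for more than 30 days
// (0 or unset disables the purge; 30 is an example value, pick your own)
$config['rrd_purge'] = 30;
```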
Bye

Marvelous. I will go this way. Thank you very much. As for the RRD cleanup, I found this code in daily.php:

if ($options['f'] === 'rrd_purge') {
    $lock = Cache::lock('rrd_purge', 86000);
    if ($lock->get()) {
        $rrd_purge = Config::get('rrd_purge');
        $rrd_dir = Config::get('rrd_dir');

        if (is_numeric($rrd_purge) && $rrd_purge > 0) {
            $cmd = "find $rrd_dir -type f -mtime +$rrd_purge -print -exec rm -f {} +";
            $purge = `$cmd`;
            if (! empty($purge)) {
                echo "Purged the following RRD files due to old age (over $rrd_purge days old):\n";
                echo $purge;
            }
        }
        $lock->release();
    }
}
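The find selector can be tried out safely against a scratch directory before letting it loose on the real RRD dir. A small sketch, assuming GNU touch for the `-d` backdating and `rrd_purge = 90`:

```shell
tmp=$(mktemp -d)
touch "$tmp/new.rrd"
touch -d '100 days ago' "$tmp/old.rrd"    # backdate mtime past the purge window
# Same selector daily.php builds, minus the -exec rm so nothing is deleted:
purged=$(find "$tmp" -type f -mtime +90 -print)
echo "$purged"                            # only old.rrd matches
rm -rf "$tmp"
```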

The find … rm command is straightforward. It could also be run manually with a different selector (e.g. an SQL SELECT piped into rm instead of using find). But I am unsure about the lock: does it only prevent a second instance of daily.php from running in parallel, or is something else being mutex’d?
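As far as I understand it, `Cache::lock` here is Laravel's atomic cache-based lock: it only prevents a second `rrd_purge` run from starting while one already holds the lock, and the 86000 is a time-to-live in seconds, so a crashed run cannot hold the lock forever. A rough shell analogy using `flock` (which, unlike `Cache::lock`, has no TTL):

```shell
# Non-blocking mutex: only one holder at a time, second attempt fails fast
lockfile=$(mktemp)
exec 9>"$lockfile"
if flock -n 9; then
    echo "lock acquired, safe to purge"
    # ... the purge would run here ...
    flock -u 9        # release, analogous to $lock->release()
else
    echo "another run holds the lock, skipping"
fi
```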