High discovery polling time on Huawei VRP switches since update V1.70.0


Hi Team!
Please, can someone guide me on this so I can narrow down the fault? Since the update to V1.70.0 I have had issues with the discovery poller for the Huawei VRP S9300 switches. Discovery is executed every 6 hours. What we see is that discovery takes too long (and breaks the graphs; when we disable port discovery, the graphs recover), specifically for the processors module, and we are seeing a lot of these warnings:

“Warning: Illegal offset type in /opt/librenms/LibreNMS/Device/YamlDiscovery.php on line 178”

Runtime for discovery module ‘processors’: 615.2000 seconds with 7409424 bytes
SNMP: [11/615.09s] MySQL: [42/0.02s] RRD: [0/0.00s]

In the pastebin you can find a fragment of `./discovery.php -h HOSTNAME -d`.


Is it possible that this issue is associated with “VRP NAC polling optimisation” (#12279)?
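For reference, the slow module can be timed on its own by limiting discovery to a single module. This is a sketch using the standard LibreNMS CLI flags; `HOSTNAME` is a placeholder for the device name:

```shell
# Run only the 'processors' discovery module with debug output,
# so its runtime can be measured in isolation from the other modules:
./discovery.php -h HOSTNAME -d -m processors
```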

[email protected]:/opt/librenms# su - librenms
$ ./validate.php
Component | Version
--------- | -------
LibreNMS  | 1.70.1
DB Schema | 2020_11_02_164331_add_powerstate_enum_to_vminfo (192)
PHP       | 7.4.7
Python    | 3.7.3
MySQL     | 10.3.22-MariaDB-0+deb10u1
RRDTool   | 1.7.1
SNMP      | NET-SNMP 5.7.3
OpenSSL   | 

[OK]    Composer Version: 2.0.8
[OK]    Dependencies up-to-date.
[OK]    Database connection successful
[OK]    Database schema correct
[INFO]  Detected Python Wrapper
[OK]    Connection to memcached is ok


Weird. The NAC optimisation is for 802.1X and does not touch the processors part. And as I do not have any S93xx device, it is difficult for me to test. Could you run `./discovery.php -h HOSTNAME -d -v`, check for (and remove) any sensitive information, and post it here?
The warning itself is probably harmless.
Could you also identify “where”, i.e. at which steps, the discovery is running slow?
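One way to spot which step is slow (a sketch; it assumes the per-module summary lines shown in the debug output above) is to filter a debug run for the runtime summaries:

```shell
# Filter the discovery debug output for the per-module runtime lines,
# e.g. "Runtime for discovery module 'processors': 615.2000 seconds ..."
./discovery.php -h HOSTNAME -d | grep "Runtime for discovery module"
```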

Hi PipoCanaja!
Sorry for the delay. I attach the discovery: https://we.tl/t-gFD6qlXEWV . The S9300 switches are not very powerful, and these in particular are working in stack mode, which makes the master switch work even harder and makes them more sensitive to heavy polling.

As you said, the NAC optimization doesn’t look related to the issue.

Runtime for discovery module ‘ports’: 26.3900 seconds with 9064 bytes
SNMP: [5/26.04s] MySQL: [601/0.29s] RRD: [0/0.00s]

Unload disco module ports

The processors module is the one that is taking too much time:

Runtime for discovery module ‘processors’: 618.9000 seconds with 7409424 bytes
SNMP: [11/618.82s] MySQL: [42/0.03s] RRD: [0/0.00s]

Unload disco module processors

Please let me know if you need any other logs.


Hi @PipoCanaja!
The problem was resolved when we changed the SRUs on the switches. The new SRUs have greater processing power and have no issues handling a large polling load across many ports (as in this case, where we have a stack of switches).
Anyway, thanks for the support!!




This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.