In my experience this is caused by incomplete SNMP polling: a polling session partially fails (the device stops responding to the SNMP query part way through returning data), and partial data is written to the database, including interface traffic counter values of 0 instead of the correct running totals.
On the subsequent polling session, when correct traffic counters are received, there is a huge “spike” in recorded traffic equal to the current counter value minus the previous zero value.
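The arithmetic behind the spike can be sketched as follows. This is a minimal illustration of counter-delta rate calculation, not LibreNMS's actual poller code; the function name, counter values, and polling interval are all made up:

```python
# Illustrative only: how a rate is derived from two successive
# SNMP counter samples (e.g. ifInOctets), and why a bogus zero
# sample produces a huge spike on the next good poll.

def rate_bps(prev_octets: int, curr_octets: int, interval_s: int) -> float:
    """Bits per second from two octet-counter samples taken interval_s apart."""
    delta = curr_octets - prev_octets
    return (delta * 8) / interval_s

# Normal polling: ~37.5 MB transferred over a 300 s interval -> 1 Mbps.
normal = rate_bps(1_000_000_000, 1_037_500_000, 300)

# A failed poll wrote 0, so the next good sample is diffed against 0:
# the interface's entire running total is recorded as one interval's
# traffic -- the bogus "spike" seen on the graphs.
spike = rate_bps(0, 1_037_500_000, 300)
```

With these made-up numbers the spike comes out more than twenty-five times the real rate, which is why it dwarfs everything else on the graph and trips utilisation alerts.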
This also causes port traffic utilisation alert rules to trigger if you have any set up. I discussed this problem over in this thread, where I was getting bogus port utilisation alerts, along with the workarounds I found:
Personally I think this is a bug / design flaw in LibreNMS: if an SNMP session fails (hangs) part way through, it should be treated as a failed polling session with all data discarded, rather than partial and incorrect data (including zeros for port traffic counters) being written to the database.
We have a couple of models of switch which occasionally stop responding to SNMP queries part way through, and while I can work around the alerts with my alert rules, I can’t do anything about the incorrect spikes in the traffic graphs.
Is it possible the management interfaces of these switches are becoming overloaded at these times and are having difficulty responding to SNMP queries? What are the poller statistics for one of these switches - does it have a long polling time, and does it show any failed polling sessions?