I’ve posted in the Discord channel, but I’m also posting here.
I mentioned a week or so ago that 21.8.0 was alerting on the incorrect mount being full. This started happening in 21.8.0 (Docker container) with both on-disk LVM mounts (not NFS) and VMware vSphere mounts. I can confirm it happens in 21.9.1 as well.
e.g. the LibreNMS UI shows /home as full and alerts on it; however, the mount that is actually full on the server is /opt.
e.g. the LibreNMS UI shows /storage/imagebuilder as full and alerts on it; however, the mount that is actually full is /storage/archive.
This could be two separate issues:
One being a mount reporting bug.
The second being, possibly, an incorrect/old VMware MIB. The mounts LibreNMS is reporting on are from 6.5; however, VMware updated their vSphere mounts in 6.7.
I have the debug data and am manually typing in the relevant pieces.
BLUF: I can see that the snmpwalk output is correct, but the array that builds the SQL UPDATE is incorrect.
Specifically, the mount for /storage/imagebuilder is hrStorageIndex 17 (its inode entry is 18) and /storage/archive (which is at 95%) is hrStorageIndex 27 (inode entry 28). However, the array that builds the SQL UPDATE sets storage_index 27 to /storage/imagebuilder (when it should be /storage/archive), with storage_used, storage_free, and storage_perc set to the correct /storage/archive values.
It seems like either VMware is reporting the wrong index for one OID, or LibreNMS is assuming that certain OID indexes should match when they don’t.
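If anyone wants to repeat the cross-check from the raw data, here is a minimal sketch (my own helper, not LibreNMS code; the filename and usage line are just examples) that parses saved snmpwalk output of `HOST-RESOURCES-MIB::hrStorageTable` and prints each hrStorageIndex with its description and usage, so it can be compared against the storage_index/storage_descr rows LibreNMS writes:

```python
import re
import sys
from collections import defaultdict

# Matches snmpwalk lines such as:
#   HOST-RESOURCES-MIB::hrStorageDescr.27 = STRING: /storage/archive
#   HOST-RESOURCES-MIB::hrStorageUsed.27 = INTEGER: 24330621
LINE = re.compile(r"HOST-RESOURCES-MIB::(hrStorage\w+)\.(\d+)\s*=\s*\w+:\s*(.+)")

def parse_walk(lines):
    """Group hrStorageTable columns by hrStorageIndex."""
    table = defaultdict(dict)
    for line in lines:
        m = LINE.match(line.strip())
        if m:
            column, index, value = m.groups()
            table[int(index)][column] = value.strip()
    return table

def main():
    # Example usage:
    #   snmpwalk -v2c -c public <host> HOST-RESOURCES-MIB::hrStorageTable > walk.txt
    #   python3 check_hrstorage.py walk.txt
    table = parse_walk(open(sys.argv[1]))
    for index in sorted(table):
        row = table[index]
        descr = row.get("hrStorageDescr", "?")
        size = int(row.get("hrStorageSize", 0) or 0)
        used = int(row.get("hrStorageUsed", 0) or 0)
        perc = 100.0 * used / size if size else 0.0
        print(f"index {index:>3}  {perc:5.1f}%  {descr}")

if __name__ == "__main__":
    main()
```

In my case the walk shows hrStorageDescr.27 = /storage/archive at ~95%, while LibreNMS stores storage_index 27 with the description /storage/imagebuilder, which is exactly how the wrong mount ends up in the alert.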
Can you have a look at my thread below? I suspect this is the same issue being discussed here, and I have provided SNMP data:
In short, if additional mount points (drives in the case of Windows) are added dynamically and cause the drive/mount indexes in SNMP to change, LibreNMS does not handle this properly, causing the disk space of one mount point/drive to be attributed to another drive/mount point that was previously at that index.
Some aspects of the data (like drive names) seem to be cached and don’t update when the indexes in the SNMP data shift dynamically.
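To make that failure mode concrete, here is a small hypothetical sketch (not LibreNMS code; the values are made up to mirror the symptom described above) of the kind of consistency check that would catch a shifted index: if the description stored for an index no longer matches what the device currently reports at that index, then matching usage data by index alone attributes it to the wrong mount.

```python
def detect_index_shifts(cached, fresh):
    """Flag indexes whose description changed between the cached
    (database) view and a fresh SNMP walk; usage data for those
    indexes can no longer be matched by index alone."""
    return {
        index: (cached[index], descr)
        for index, descr in fresh.items()
        if index in cached and cached[index] != descr
    }

# Hypothetical values mirroring the symptom above:
cached = {17: "/storage/imagebuilder", 27: "/storage/imagebuilder"}  # stale DB rows
fresh = {17: "/storage/imagebuilder", 27: "/storage/archive"}        # current walk

for index, (old, new) in detect_index_shifts(cached, fresh).items():
    print(f"index {index}: DB says {old}, device now reports {new}")
```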
My current workaround is to remove the device and re-add it via the lnms CLI. This is only a temporary fix, lasting until another ISO/non-NFS mount is initiated on the device.