LibreNMS Graphs Stopped Populating - why? How to restore?


I’m running LibreNMS in a Docker container. I’ve inherited this setup, so I did not do the install from scratch.

Unfortunately, my first problem is that I cannot seem to find validate.php, let alone run it. Where is it meant to be run from, and what is the full path? Running locate validate.php returns nothing.

[email protected]:~$ locate validate.php
[email protected]:~$ 

Secondly, I had graphs working, but they have now stopped. I’ve read many threads but still cannot be sure how to get them restored. One fix I was contemplating was to remove and re-add the devices. However, is this the best way, and will I lose previous data in doing so? Also, what does remove and re-add actually mean? Remove them from my hosts file, restart the container, and then add them back with echo and restart the container again?

Any help would be greatly appreciated.

Most of the scripts will be in /opt/librenms.

If the graphs were working before, then you need to check that rrdcached is running/listening and that the permissions on /opt/librenms/rrd have not changed.

The validate.php script should point you to any issues related to this.
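A quick way to spot problem files is to look for RRDs that have stopped being written; the poller should touch every .rrd roughly every 5 minutes. A rough sketch (the /opt/librenms/rrd path is just this setup’s location; adjust it to yours):

```shell
# List .rrd files that have not been modified in the last 10 minutes.
# A healthy LibreNMS poller updates every RRD about every 5 minutes,
# so anything printed here has gone stale.
stale_rrds() {
    find "$1" -name '*.rrd' -mmin +10 -print
}

# Example: stale_rrds /opt/librenms/rrd
```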

Thank you @snmpd. So, trying to find ./validate.php, here are the contents of the folder I thought it would be in (I don’t have a /opt/librenms):

[email protected]:~$ ls -lla /aa/bb/cc/librenms/
total 40
drwxr-xr-x   7 user user 4096 Nov 11  2020 .
drwxr-xr-x   5 user user 4096 Jan  6 12:17 ..
drwxr-xr-x   2 user user 4096 Nov 11  2020 config
-rw-r--r--   1 user user 82 Nov 11  2020 .env
drwxrwxr-x   2 user user 4096 Dec 21 10:56 logs
drwxr-xr-x   2 user user 4096 Nov 11  2020 monitoring-plugins
drwxrwxr-x 192 user user 12288 Jan  4 13:29 rrd
drwxr-xr-x   2 user user 4096 Dec 21 11:11 weathermap

and checking the permissions of the host folders in the rrd directory, they look good? (happy for you to tell me if I’m wrong):

[email protected]:~/cc/librenms/rrd$ ls -lla | grep inf
drwxr-xr-x   2 user user 12288 Nov 22 18:12
drwxr-xr-x   2 user user 12288 Sep 24 06:14
drwxr-xr-x   2 user user 12288 Nov 22 18:18
drwxr-xr-x   2 user user 20480 Sep  7 13:27

and within the folder (sorry for omitting this from my last update):

[email protected]:~/cc/librenms/rrd/$ ls -lla | grep 'Nov 25'
-rw-r--r--   1 user user 2543768 Nov 25 12:17 port-id75026.rrd
-rw-r--r--   1 user user 2543768 Nov 25 12:17 port-id75030.rrd
-rw-r--r--   1 user user 2543768 Nov 25 12:17 port-id75032.rrd

Hi @snmpd, any thoughts on my last reply? I really appreciate the help you’ve given me so far and am hoping you have some further thoughts.

Are you running rrdcached? If not, you might want to. If you are, then you need to ensure your rrdcached process is running and has permissions to everything it is supposed to.

hey again @snmpd

Thanks for continuing to help me understand this. So, no rrdcached.

What permissions should be where?

I changed the ownership of /opt/librenms within my container to librenms:librenms, as I noticed nobody:nogroup when I checked this folder.

Also, I found the validate.php script within the container at /opt/librenms:

It might be worth saying again that graphs for loads of other devices work, but some do not; most likely the more recently added ones, though my team are working on an audit of this.

[email protected]:~$ docker exec -it nms_librenms bash
bash-5.0# ./validate.php
Component | Version
--------- | -------
LibreNMS  | 1.69
DB Schema | 2020_10_21_124101_allow_nullable_ospf_columns (188)
PHP       | 7.3.24
Python    | 3.8.5
MySQL     | 10.2.35-MariaDB-1:10.2.35+maria~bionic
RRDTool   | 1.7.2
SNMP      | NET-SNMP 5.8
OpenSSL   | 

[OK]    Installed from the official Docker image; no Composer required
[OK]    Database connection successful
[OK]    Database schema correct
[WARN]  IPv6 is disabled on your server, you will not be able to add IPv6 devices.
[WARN]  Updates are managed through the official Docker image

Are all of them working, or are some displaying graphs and others not? And are the ones that appear to be working actually updating?

Pick a device that isn’t working and run the below commands for that device.

/opt/librenms/discovery.php -d -v -h "hostname"
/opt/librenms/poller.php -d -v -h "hostname"

The debug output from these commands provides information on MySQL and RRD activity, and possibly on the failures.

Thank you again @snmpd, so so much!

I have been looking at this again this afternoon, and I’m starting to notice a lot of RRD [0/0.00s] in the outputs, especially when run via these options in the GUI.

./discovery.php host1 2023-01-16 17:40:09 - 1 devices discovered in 153.2 secs  
SNMP [164/111.71s]: Get[107/12.39s] Getnext[1/0.11s] Walk[56/99.21s]
MySQL [5530/9.83s]: Cell[437/10.34s] Row[2146/-5.67s] Rows[95/0.31s] Column[32/0.06s] Update[2755/4.36s] Insert[57/0.31s] Delete[8/0.11s]
RRD [0/0.00s]:
./poller.php host1 2023-01-16 17:46:53 - 1 devices polled in 293.8 secs  
SNMP [47/283.06s]: Get[16/2.00s] Getnext[4/0.40s] Walk[27/280.66s]
MySQL [1170/4.66s]: Cell[14/0.04s] Row[-13/-0.03s] Rows[28/0.14s] Column[1/0.00s] Update[1138/4.51s] Insert[2/0.01s] Delete[0/0.00s]
RRD [549/0.28s]: Update[547/0.28s] Create[2/0.00s]

I am starting to wonder if rrdcached is working. I have it in a Docker container, which is running and healthy, but I’m unsure how to check the service inside the container, as it says systemctl is not recognised. My thought here is: is rrdcached being written to correctly? Could it be linked to the following?

The last thing I did to this VM was grow the HD size, as the existing files had taken free disk space down to 0%. You may also be right about incorrect permissions, but having checked /opt/librenms I’m unsure what other folders could be incorrect. I’ve got about 2 years of history, so I’m trying to avoid a reinstall.

[email protected]:~$ docker ps | grep libre
cfd55543a8e1   adolfintel/speedtest       "docker-php-entrypoi…"   15 months ago   Up 3 weeks   >80/tcp, :::80->80/tcp                              librespeed
db210a218757   librenms/librenms:latest   "/init"                  16 months ago   Up 3 weeks             514/tcp, 514/udp,>8000/tcp, :::8000->8000/tcp    nms_librenms
59f7d62c9ab9   librenms/librenms:latest   "/init"                  2 years ago     Up 3 weeks             514/tcp, 8000/tcp, 514/udp                                     nms_dispatcher
[email protected]:~$ docker ps | grep rrd
3e1bf3250c15   crazymax/rrdcached         "/init"                  2 years ago     Up 3 weeks (healthy)   42217/tcp                                                      nms_rrdcached
[email protected]:~$ sudo -s
[sudo] password for user: 
[email protected]:/home/user# docker exec -it nms_librenms bash
bash-5.0# pwd
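For what it’s worth, since systemctl isn’t available inside the container, I found I could at least check for the process directly. A rough helper I put together (guarded so it just prints a message on machines without docker):

```shell
# Check whether rrdcached is alive inside its container. s6-based
# images have no systemd, so systemctl won't work; pgrep does.
# The container name is an argument (nms_rrdcached in my case).
check_rrdcached() {
    if ! command -v docker >/dev/null 2>&1; then
        echo "docker not available on this machine"
        return 0
    fi
    if docker exec "$1" pgrep rrdcached >/dev/null 2>&1; then
        echo "rrdcached is running in $1"
    else
        echo "rrdcached is NOT running in $1"
    fi
}

# Example: check_rrdcached nms_rrdcached
```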

Running the scripts again as root (not as the regular user, as before):
(if there’s something more specific you need from the entire output, I’m happy to share it in full; I’m trimming to try and help)

./discovery.php host1 2023-01-16 17:53:54 - 1 devices discovered in 199.9 secs  
SNMP [162/152.14s]: Get[106/12.70s] Getnext[1/0.11s] Walk[55/139.34s]
MySQL [5556/10.51s]: Cell[437/9.85s] Row[2166/-5.03s] Rows[95/0.30s] Column[32/0.05s] Update[2755/4.93s] Insert[65/0.34s] Delete[6/0.07s]
./poller.php host1 2023-01-16 17:59:21 - 1 devices polled in 287.6 secs  
SNMP [48/277.41s]: Get[17/2.28s] Getnext[4/0.41s] Walk[27/274.72s]
MySQL [1163/4.41s]: Cell[14/0.03s] Row[-13/-0.03s] Rows[28/0.14s] Column[1/0.00s] Update[1131/4.26s] Insert[2/0.01s] Delete[0/0.00s]
RRD [548/0.29s]: Update[548/0.29s]

Use the -v and -d options with the poller and discovery scripts. They provide more detail for each of those metrics you posted.

Thanks @snmpd, those commands were run using those options. This is the very end of the command output. Is there another part of the output you’d like me to grab? Would you like the full output?

hi @snmpd

I noticed something in the logs today; does it help us? I have 3 other servers running this, and 2 work, but I cannot find any differences between them. However, today I noticed that one shows rrdcreate entries in its logs, whereas the one that is currently broken does not:

Broken (was working before I grew the VM HD size):

[email protected]:~$ docker logs nms_rrdcached -t -n 45
2023-01-17T16:37:07.193062680Z Jan 17 16:37:07 rrdcached[1371]: journal processing complete
2023-01-17T16:37:07.193080760Z Jan 17 16:37:07 rrdcached[1371]: listening for connections
2023-01-17T16:57:48.012938030Z caught SIGTERM
2023-01-17T16:57:48.013246281Z signal_receiver: Signal 18 was received from process 1367.
2023-01-17T16:57:48.057992841Z [cont-finish.d] executing container finish scripts...
2023-01-17T16:57:48.066515752Z [cont-finish.d] done.
2023-01-17T16:57:48.069398967Z [s6-finish] waiting for services.
2023-01-17T16:57:48.540351050Z starting shutdown
2023-01-17T16:57:48.540805602Z clean shutdown; all RRDs flushed
2023-01-17T16:57:48.540970267Z removing journals
2023-01-17T16:57:48.541201134Z goodbye
2023-01-17T16:57:48.755160726Z [s6-finish] sending all processes the TERM signal.
2023-01-17T16:57:51.781227270Z [s6-finish] sending all processes the KILL signal and exiting.
2023-01-17T16:57:54.077213542Z [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
2023-01-17T16:57:54.408589964Z [s6-init] ensuring user provided files have correct perms...exited 0.
2023-01-17T16:57:54.419449703Z [fix-attrs.d] applying ownership & permissions fixes...
2023-01-17T16:57:54.428778646Z [fix-attrs.d] done.
2023-01-17T16:57:54.435861886Z [cont-init.d] executing container initialization scripts...
2023-01-17T16:57:54.445718257Z [cont-init.d] executing... 
2023-01-17T16:57:54.462695354Z [cont-init.d] exited 0.
2023-01-17T16:57:54.470577440Z [cont-init.d] executing... 
2023-01-17T16:57:54.489313136Z [cont-init.d] exited 0.
2023-01-17T16:57:54.497117523Z [cont-init.d] executing... 
2023-01-17T16:57:54.508421282Z Fixing perms...
2023-01-17T16:57:54.517213808Z [cont-init.d] exited 0.
2023-01-17T16:57:54.525399239Z [cont-init.d] executing... 
2023-01-17T16:57:54.617114277Z [cont-init.d] exited 0.
2023-01-17T16:57:54.624895575Z [cont-init.d] ~-socklog: executing... 
2023-01-17T16:57:57.693271502Z [cont-init.d] ~-socklog: exited 0.
2023-01-17T16:57:57.700123917Z [cont-init.d] done.
2023-01-17T16:57:57.707057498Z [services.d] starting services
2023-01-17T16:57:57.762525305Z [services.d] done.
2023-01-17T16:57:58.192038034Z starting up
2023-01-17T16:57:58.192223248Z setgid(1000) succeeded
2023-01-17T16:57:58.192585947Z setuid(1000) succeeded
2023-01-17T16:57:58.192620399Z checking for journal files
2023-01-17T16:57:58.193063023Z journal processing complete
2023-01-17T16:57:58.193216866Z Jan 17 16:57:58 rrdcached[1350]: starting up
2023-01-17T16:57:58.193277460Z Jan 17 16:57:58 rrdcached[1350]: setgid(1000) succeeded
2023-01-17T16:57:58.193298654Z Jan 17 16:57:58 rrdcached[1350]: setuid(1000) succeeded
2023-01-17T16:57:58.193324087Z Jan 17 16:57:58 rrdcached[1350]: checking for journal files
2023-01-17T16:57:58.193361954Z Jan 17 16:57:58 rrdcached[1350]: journal processing complete
2023-01-17T16:57:58.193558954Z listening for connections
2023-01-17T16:57:58.194028036Z Jan 17 16:57:58 rrdcached[1350]: listening for connections


[email protected]:~$ docker logs nms_rrdcached -t --since 2023-01-06
2023-01-06T13:13:51.015416120Z caught SIGTERM
2023-01-06T13:13:51.295755187Z starting shutdown
2023-01-06T13:16:40.870970057Z Creating configuration...
2023-01-06T13:16:41.034433240Z Fixing perms...
2023-01-06T13:16:45.657556990Z rrdcached: can't create pid file '/var/run/rrdcached/' (File exists)
2023-01-06T13:16:45.657809251Z rrdcached: removed stale PID file (no rrdcached on pid 1)
2023-01-06T13:16:45.657843439Z rrdcached: starting normally.
2023-01-06T13:16:45.658552023Z starting up
2023-01-06T13:16:45.658568615Z checking for journal files
2023-01-06T13:16:45.658947222Z replaying from journal: /data/journal/rrd.journal.1673005468.944090
2023-01-06T13:16:51.493382411Z Replayed 6354 entries (58728 failures)
2023-01-06T13:16:51.493473425Z replaying from journal: /data/journal/rrd.journal.1673009068.943951
2023-01-06T13:16:51.556417796Z Malformed journal entry at line 36790
2023-01-06T13:16:51.556450287Z Replayed 7995 entries (28795 failures)
2023-01-06T13:16:51.556458604Z journal processing complete
2023-01-06T13:16:51.556869954Z listening for connections
2023-01-07T03:59:36.733148076Z rrdcreate request for /data/db/host3/port-id3630.rrd
2023-01-07T03:59:36.758201265Z rrdcreate request for /data/db/host3/port-id3631.rrd
2023-01-07T03:59:36.784462899Z rrdcreate request for /data/db/host3/port-id3632.rrd
2023-01-07T03:59:36.809960873Z rrdcreate request for /data/db/host3/port-id3633.rrd
2023-01-07T04:51:31.113632943Z rrdcreate request for /data/db/host6/port-id3634.rrd
2023-01-07T04:51:31.141731904Z rrdcreate request for /data/db/host6/port-id3635.rrd
2023-01-07T04:51:31.169401811Z rrdcreate request for /data/db/host6/port-id3636.rrd
2023-01-07T04:51:31.202644958Z rrdcreate request for /data/db/host6/port-id3637.rrd
2023-01-07T04:51:31.239175026Z rrdcreate request for /data/db/host6/port-id3638.rrd
2023-01-07T04:51:31.268788892Z rrdcreate request for /data/db/host6/port-id3639.rrd
2023-01-07T04:51:31.298122445Z rrdcreate request for /data/db/host6/port-id3640.rrd
2023-01-07T04:51:31.326243335Z rrdcreate request for /data/db/host6/port-id3641.rrd
2023-01-07T04:51:31.357894095Z rrdcreate request for /data/db/host6/port-id3642.rrd
2023-01-07T04:51:31.386406927Z rrdcreate request for /data/db/host6/port-id3643.rrd
2023-01-07T04:51:31.419049887Z rrdcreate request for /data/db/host6/port-id3644.rrd
2023-01-07T04:51:31.448973786Z rrdcreate request for /data/db/host6/port-id3645.rrd
2023-01-07T04:51:31.477943492Z rrdcreate request for /data/db/host6/port-id3646.rrd
2023-01-07T04:51:31.505830482Z rrdcreate request for /data/db/host6/port-id3647.rrd
2023-01-07T04:51:31.533260510Z rrdcreate request for /data/db/host6/port-id3648.rrd
2023-01-07T04:51:31.561730139Z rrdcreate request for /data/db/host6/port-id3649.rrd
2023-01-07T04:51:31.590025237Z rrdcreate request for /data/db/host6/port-id3650.rrd
2023-01-07T04:51:31.618693642Z rrdcreate request for /data/db/host6/port-id3651.rrd
2023-01-07T04:51:31.649110651Z rrdcreate request for /data/db/host6/port-id3652.rrd
2023-01-07T04:51:31.677039035Z rrdcreate request for /data/db/host6/port-id3653.rrd
2023-01-07T04:51:31.703535644Z rrdcreate request for /data/db/host6/port-id3654.rrd
2023-01-07T04:51:31.733134057Z rrdcreate request for /data/db/host6/port-id3655.rrd
2023-01-07T04:51:31.766549817Z rrdcreate request for /data/db/host6/port-id3656.rrd
2023-01-07T04:51:31.798592135Z rrdcreate request for /data/db/host6/port-id3657.rrd
2023-01-07T04:51:31.827107854Z rrdcreate request for /data/db/host6/port-id3658.rrd
2023-01-07T04:51:31.856802853Z rrdcreate request for /data/db/host6/port-id3659.rrd
2023-01-07T04:51:31.886071361Z rrdcreate request for /data/db/host6/port-id3660.rrd
2023-01-07T04:51:31.916748324Z rrdcreate request for /data/db/host6/port-id3661.rrd
2023-01-07T04:51:31.949819526Z rrdcreate request for /data/db/host6/port-id3662.rrd
2023-01-07T04:51:31.981085911Z rrdcreate request for /data/db/host6/port-id3663.rrd
2023-01-07T04:51:32.006917744Z rrdcreate request for /data/db/host6/port-id3664.rrd
2023-01-07T04:51:32.039664666Z rrdcreate request for /data/db/host6/port-id3665.rrd
2023-01-07T04:51:32.067140717Z rrdcreate request for /data/db/host6/port-id3666.rrd
2023-01-07T04:51:32.097082588Z rrdcreate request for /data/db/host6/port-id3667.rrd
2023-01-07T04:51:32.126619081Z rrdcreate request for /data/db/host6/port-id3668.rrd
2023-01-07T04:51:32.157330164Z rrdcreate request for /data/db/host6/port-id3669.rrd
2023-01-07T04:51:32.187022112Z rrdcreate request for /data/db/host6/port-id3670.rrd
2023-01-07T04:51:32.218670901Z rrdcreate request for /data/db/host6/port-id3671.rrd
2023-01-07T04:51:32.249252524Z rrdcreate request for /data/db/host6/port-id3672.rrd
2023-01-07T04:51:32.278617550Z rrdcreate request for /data/db/host6/port-id3673.rrd
2023-01-07T04:51:32.311442743Z rrdcreate request for /data/db/host6/port-id3674.rrd
2023-01-07T04:51:32.347125894Z rrdcreate request for /data/db/host6/port-id3675.rrd
2023-01-07T04:51:32.376600621Z rrdcreate request for /data/db/host6/port-id3676.rrd
2023-01-07T04:51:32.403105327Z rrdcreate request for /data/db/host6/port-id3677.rrd
2023-01-07T04:51:32.433448117Z rrdcreate request for /data/db/host6/port-id3678.rrd
2023-01-07T04:51:32.461725819Z rrdcreate request for /data/db/host6/port-id3679.rrd
2023-01-07T04:51:32.489483042Z rrdcreate request for /data/db/host6/port-id3680.rrd
2023-01-07T05:20:10.261696410Z rrdcreate request for /data/db/host4/port-id3681.rrd
2023-01-07T05:20:10.290204279Z rrdcreate request for /data/db/host4/port-id3682.rrd
2023-01-07T05:20:10.332605841Z rrdcreate request for /data/db/host4/port-id3683.rrd
2023-01-07T05:20:10.359532981Z rrdcreate request for /data/db/host4/port-id3684.rrd
2023-01-07T05:20:10.391556987Z rrdcreate request for /data/db/host4/port-id3685.rrd
2023-01-07T05:20:10.419900302Z rrdcreate request for /data/db/host4/port-id3686.rrd
2023-01-07T05:20:10.449494303Z rrdcreate request for /data/db/host4/port-id3687.rrd
2023-01-07T05:20:10.475568013Z rrdcreate request for /data/db/host4/port-id3688.rrd
2023-01-07T05:20:10.503514986Z rrdcreate request for /data/db/host4/port-id3689.rrd
2023-01-07T05:20:10.531755207Z rrdcreate request for /data/db/host4/port-id3690.rrd
2023-01-07T05:20:10.559486219Z rrdcreate request for /data/db/host4/port-id3691.rrd
2023-01-07T05:20:10.586988452Z rrdcreate request for /data/db/host4/port-id3692.rrd
2023-01-07T05:20:10.614306847Z rrdcreate request for /data/db/host4/port-id3693.rrd
2023-01-07T05:20:10.642517292Z rrdcreate request for /data/db/host4/port-id3694.rrd
2023-01-08T12:34:40.781813821Z rrdcreate request for /data/db/host1/port-id3695.rrd
2023-01-09T00:41:29.296207766Z rrdcreate request for /data/db/host1/port-id3696.rrd
2023-01-09T00:41:29.333941860Z rrdcreate request for /data/db/host1/port-id3697.rrd
2023-01-09T00:41:29.367321091Z rrdcreate request for /data/db/host1/port-id3698.rrd
2023-01-09T00:41:29.388561096Z rrdcreate request for /data/db/host1/port-id3699.rrd
2023-01-09T00:41:29.402656902Z rrdcreate request for /data/db/host1/port-id3700.rrd
2023-01-09T00:41:29.431340399Z rrdcreate request for /data/db/host1/port-id3701.rrd
2023-01-09T00:41:29.454107894Z rrdcreate request for /data/db/host1/port-id3702.rrd
2023-01-09T00:41:29.473214689Z rrdcreate request for /data/db/host1/port-id3703.rrd
2023-01-09T00:41:29.489171841Z rrdcreate request for /data/db/host1/port-id3704.rrd
2023-01-09T00:41:29.522672549Z rrdcreate request for /data/db/host1/port-id3705.rrd
2023-01-09T00:41:29.541036183Z rrdcreate request for /data/db/host1/port-id3706.rrd
2023-01-09T00:41:29.558333880Z rrdcreate request for /data/db/host1/port-id3707.rrd
2023-01-11T11:46:59.443177987Z rrdcreate request for /data/db/host6/port-id3720.rrd
2023-01-11T11:46:59.505699649Z rrdcreate request for /data/db/host6/port-id3721.rrd
2023-01-11T11:46:59.541464747Z rrdcreate request for /data/db/host6/port-id3722.rrd
2023-01-11T11:46:59.571795827Z rrdcreate request for /data/db/host6/port-id3723.rrd
2023-01-11T11:47:13.798571178Z rrdcreate request for /data/db/host2/port-id3724.rrd
2023-01-11T11:47:13.827217925Z rrdcreate request for /data/db/host2/port-id3725.rrd
2023-01-11T11:47:13.858115963Z rrdcreate request for /data/db/host2/port-id3726.rrd
2023-01-11T11:47:13.885375003Z rrdcreate request for /data/db/host2/port-id3727.rrd
2023-01-11T11:47:13.918104247Z rrdcreate request for /data/db/host2/port-id3728.rrd
2023-01-11T11:47:13.950549141Z rrdcreate request for /data/db/host2/port-id3729.rrd
2023-01-11T11:47:13.982241334Z rrdcreate request for /data/db/host2/port-id3730.rrd
2023-01-11T11:47:14.013002116Z rrdcreate request for /data/db/host2/port-id3731.rrd
2023-01-11T11:47:14.045129546Z rrdcreate request for /data/db/host2/port-id3732.rrd
2023-01-11T11:47:14.075048159Z rrdcreate request for /data/db/host2/port-id3733.rrd
2023-01-11T11:47:14.103948647Z rrdcreate request for /data/db/host2/port-id3734.rrd
2023-01-11T11:47:14.134138673Z rrdcreate request for /data/db/host2/port-id3735.rrd
2023-01-11T12:07:11.539408661Z rrdcreate request for /data/db/host2/bgp-
2023-01-11T12:12:11.608686459Z rrdcreate request for /data/db/host2/bgp-
2023-01-11T12:12:11.614770746Z rrdcreate request for /data/db/host2/bgp-
2023-01-11T12:17:10.439111111Z rrdcreate request for /data/db/host2/bgp-
2023-01-13T11:22:47.542231602Z rrdcreate request for /data/db/host6/port-id3736.rrd
2023-01-13T11:22:47.572610626Z rrdcreate request for /data/db/host6/port-id3737.rrd
2023-01-13T11:22:47.598429916Z rrdcreate request for /data/db/host6/port-id3738.rrd
2023-01-13T11:22:47.630224126Z rrdcreate request for /data/db/host6/port-id3739.rrd
2023-01-13T11:23:12.638056248Z rrdcreate request for /data/db/host2/port-id3740.rrd
2023-01-13T11:23:12.671072244Z rrdcreate request for /data/db/host2/port-id3741.rrd
2023-01-13T11:23:12.701682331Z rrdcreate request for /data/db/host2/port-id3742.rrd
2023-01-13T11:23:12.730055048Z rrdcreate request for /data/db/host2/port-id3743.rrd
2023-01-13T11:23:12.759934674Z rrdcreate request for /data/db/host2/port-id3744.rrd
2023-01-13T11:23:12.797431295Z rrdcreate request for /data/db/host2/port-id3745.rrd
2023-01-13T11:23:12.830491291Z rrdcreate request for /data/db/host2/port-id3746.rrd
2023-01-13T11:23:12.859391315Z rrdcreate request for /data/db/host2/port-id3747.rrd
2023-01-13T11:23:12.895953516Z rrdcreate request for /data/db/host2/port-id3748.rrd
2023-01-13T11:23:12.922555291Z rrdcreate request for /data/db/host2/port-id3749.rrd
2023-01-13T11:23:12.953135247Z rrdcreate request for /data/db/host2/port-id3750.rrd
2023-01-13T11:23:12.983471569Z rrdcreate request for /data/db/host2/port-id3751.rrd
2023-01-13T12:08:17.918964377Z rrdcreate request for /data/db/host2/bgp-
2023-01-13T12:13:17.431772227Z rrdcreate request for /data/db/host2/bgp-
2023-01-13T12:13:17.437084139Z rrdcreate request for /data/db/host2/bgp-
2023-01-13T12:18:16.400480847Z rrdcreate request for /data/db/host2/bgp-
2023-01-13T12:28:20.752582423Z rrdcreate request for /data/db/host1/bgp-
2023-01-13T12:42:24.216364711Z rrdcreate request for /data/db/host1/bgp-
2023-01-13T12:47:18.387633772Z rrdcreate request for /data/db/host1/bgp-
2023-01-13T12:52:40.149920915Z rrdcreate request for /data/db/host1/bgp-
2023-01-16T10:57:12.009760150Z rrdcreate request for /data/db/host1/port-id3752.rrd
2023-01-16T10:57:12.040428035Z rrdcreate request for /data/db/host1/port-id3753.rrd
2023-01-16T10:57:12.068355850Z rrdcreate request for /data/db/host1/port-id3754.rrd
2023-01-16T10:57:12.097002048Z rrdcreate request for /data/db/host1/port-id3755.rrd
2023-01-16T10:57:12.130391748Z rrdcreate request for /data/db/host1/port-id3756.rrd
2023-01-16T10:57:12.158555282Z rrdcreate request for /data/db/host1/port-id3757.rrd
2023-01-16T10:57:12.186726042Z rrdcreate request for /data/db/host1/port-id3758.rrd
2023-01-16T10:57:12.213429259Z rrdcreate request for /data/db/host1/port-id3759.rrd
2023-01-16T10:57:12.238135584Z rrdcreate request for /data/db/host1/port-id3760.rrd
2023-01-16T10:57:12.264336954Z rrdcreate request for /data/db/host1/port-id3761.rrd
2023-01-16T10:57:12.290532563Z rrdcreate request for /data/db/host1/port-id3762.rrd
2023-01-16T10:57:12.318556964Z rrdcreate request for /data/db/host1/port-id3763.rrd
2023-01-16T12:32:51.314814792Z rrdcreate request for /data/db/host1/bgp-
2023-01-16T12:32:51.323603737Z rrdcreate request for /data/db/host1/cbgp-
2023-01-16T12:42:07.597147512Z rrdcreate request for /data/db/host1/bgp-
2023-01-16T12:42:07.604383109Z rrdcreate request for /data/db/host1/cbgp-
2023-01-16T12:46:33.627663927Z rrdcreate request for /data/db/host1/bgp-
2023-01-16T12:46:33.635594300Z rrdcreate request for /data/db/host1/cbgp-
2023-01-16T12:55:47.279602290Z rrdcreate request for /data/db/host1/bgp-
2023-01-16T12:55:47.289027355Z rrdcreate request for /data/db/host1/cbgp-
2023-01-16T18:07:24.035661487Z rrdcreate request for /data/db/host2/cbgp-
2023-01-16T18:12:24.570343349Z rrdcreate request for /data/db/host2/cbgp-
2023-01-16T18:12:24.580000163Z rrdcreate request for /data/db/host2/cbgp-
2023-01-16T18:17:24.498767986Z rrdcreate request for /data/db/host2/cbgp-

You are only supposed to be using one rrdcached instance. You need to check the config.php on each host to make sure they are all writing to the same server:

$config['rrdtool_version'] = '1.7.2';
$config['rrdcached'] = "your_rrd_server_ip:42217";

Or check the GUI, under Global Settings, Poller, RRDTool.
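You can also confirm the poller host can actually talk to rrdcached by asking rrdtool to read one file through the daemon. A sketch; the address and file path are placeholders for your environment, and it’s guarded so it only degrades to a message where rrdtool isn’t installed:

```shell
# Read one RRD file *via the daemon*; if this fails, the poller can't
# reach rrdcached either. Both arguments are placeholders: pass your
# rrdcached address (ip:port) and any existing .rrd path.
check_rrdcached_read() {
    if ! command -v rrdtool >/dev/null 2>&1; then
        echo "rrdtool not installed on this machine"
        return 0
    fi
    rrdtool info "$2" --daemon "$1" \
        || echo "could not read $2 through rrdcached at $1"
}

# Example: check_rrdcached_read your_rrd_server_ip:42217 /opt/librenms/rrd/host1/port-id75026.rrd
```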

hi @snmpd

So nothing here stands out to you? Nothing should have gone awry after resizing (growing) the HD?

I can confirm this is only one container/instance. It’s just 4 servers, 2 in DC1 and 2 in DC2: 2 for customer devices (1 active, 1 in case of failure) and 2 for DC devices (1 active, 1 in case of failure).

The more I look at this, the more I think it’s permissions since the HD change, but I’m not sure which folders need which permissions.

Checking the config.php within the LibreNMS container shows no active rrdcached configuration; it is commented out. See below.

[email protected]:~$ docker exec -it nms_librenms bash
bash-5.0# cat config.php | grep rrd
### Enable this to use rrdcached. Be sure rrd_dir is within the rrdcached dir
### and that your web server has permission to talk to rrdcached.
#$config['rrdcached']    = "unix:/var/run/rrdcached.sock";
# Number in days of how long to keep old rrd files. 0 disables this feature
$config['rrd_purge'] = 0;

I didn’t add the version line; must I? I did add the server IP, though, and then restarted the container:

bash-5.0# cat config.php | grep rrd
### Enable this to use rrdcached. Be sure rrd_dir is within the rrdcached dir
### and that your web server has permission to talk to rrdcached.
#$config['rrdcached']    = "unix:/var/run/rrdcached.sock";
$config['rrdcached'] = "";
# Number in days of how long to keep old rrd files. 0 disables this feature
$config['rrd_purge'] = 0;
bash-5.0# exit
[email protected]:~$ docker restart nms_librenms 
[email protected]:~$ docker ps | grep libre
db210a218757   librenms/librenms:latest   "/init"                  16 months ago   Up 3 minutes           514/tcp, 514/udp,>8000/tcp, :::8000->8000/tcp    nms_librenms
59f7d62c9ab9   librenms/librenms:latest   "/init"                  2 years ago     Up 3 weeks             514/tcp, 8000/tcp, 514/udp                                     nms_dispatcher

Having monitored for a day: with the rrdcached line in config.php I was getting an Error Drawing Graphs message; removing that line made the error go away, but graphs still are not drawing, and my rrd folders appear not to be updating. To me this is mind-boggling.

I cannot thank you enough for your help so far!

Hello there,

If graphs are not populating data any more, then perhaps the polling process has stopped. If you use crontab to run LibreNMS, check it with crontab -l.
LibreNMS graphs are populated from the RRD database; go to /data/rrd/.
Your monitored hosts should appear there. Choose any one of them and first confirm that its RRD files are updating normally, usually every 5 minutes according to the poller configuration.

You can check which data has been written to the RRD database using the command rrdtool dump filename.rrd; check the last entry.
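If you only want the freshness check, rrdtool lastupdate is quicker than a full dump, since it prints just the newest datapoint and its timestamp. A small helper as a sketch (the example path is made up; guarded for machines without rrdtool):

```shell
# Print the newest datapoint stored in an RRD file. If the timestamp
# is more than ~5 minutes old, the poller has stopped writing to it.
# The path you pass is whatever lives under your rrd directory.
last_rrd_update() {
    if ! command -v rrdtool >/dev/null 2>&1; then
        echo "rrdtool not installed on this machine"
        return 0
    fi
    rrdtool lastupdate "$1"
}

# Example: last_rrd_update /data/rrd/host1/port-id75026.rrd
```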

Best regards,

Thanks @jihaddaouk, that’s what I’m thinking too, so I’m with you. Along those lines, here’s my command output; either I’ve got the wrong command, or it’s not running, as you say?

[email protected]:~$ crontab -l
no crontab for user
[email protected]:~$ docker exec -it nms_librenms bash
bash-5.0# crontab -l
crontab: can't open 'root': No such file or directory
bash-5.0# exit
[email protected]:~$ sudo -s
[sudo] password for user: 
[email protected]:/home/user# docker exec -it nms_librenms bash
bash-5.0# crontab -l
crontab: can't open 'root': No such file or directory

What silly thing am I doing here?

Hello there,

In the LibreNMS directory there should be a file called librenms.cron. In this file, check whether the scheduled entries have the word root before the command, like:

root /...

Remove the word root, schedule the crontab file again, and watch for the results.

Best regards,

Morning @jihaddaouk

The librenms.cron file says it shouldn’t be used any more, and there’s a librenms.nonroot.cron which uses a different user. Should my cron run as that user instead?

[email protected]:~$ docker exec -it nms_librenms bash
bash-5.0# cat librenms.cron
# It's recommended not to run this cron anymore - please see librenms.nonroot.cron

33  */6   * * *   root    /opt/librenms/discovery.php -h all >> /dev/null 2>&1
*/5  *    * * *   root    /opt/librenms/discovery.php -h new >> /dev/null 2>&1
*/5  *    * * *   root    /opt/librenms/cronic /opt/librenms/ 16
15   0    * * *   root    /opt/librenms/ >> /dev/null 2>&1
*    *    * * *   root    /opt/librenms/alerts.php >> /dev/null 2>&1
bash-5.0# cat librenms.nonroot.cron
# Using this cron file requires an additional user on your system, please see install docs.

33   */6  * * *   librenms    /opt/librenms/cronic /opt/librenms/ 1
*/5  *    * * *   librenms    /opt/librenms/discovery.php -h new >> /dev/null 2>&1
*/5  *    * * *   librenms    /opt/librenms/cronic /opt/librenms/ 16
*    *    * * *   librenms    /opt/librenms/alerts.php >> /dev/null 2>&1
*/5  *    * * *   librenms    /opt/librenms/poll-billing.php >> /dev/null 2>&1
01   *    * * *   librenms    /opt/librenms/billing-calculate.php >> /dev/null 2>&1
*/5  *    * * *   librenms    /opt/librenms/check-services.php >> /dev/null 2>&1

# Daily maintenance script. DO NOT DISABLE!
# If you want to modify updates:
#  Switch to monthly stable release:
#  Disable updates:
15   0    * * *   librenms    /opt/librenms/ >> /dev/null 2>&1


In the first file, delete the word root, save the file, and run the command crontab librenms.cron.
It should work.
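If you prefer not to edit by hand, sed can strip the user column; here is the transformation sketched on a single line (the sample line is taken from the cron file above):

```shell
# Turn one system-cron line (five time fields + a user column) into a
# user-crontab line (five time fields only) by deleting the "root"
# column. Apply the same sed to the whole librenms.cron file, then
# install the result with: crontab librenms.cron
line='*    *    * * *   root    /opt/librenms/alerts.php >> /dev/null 2>&1'
fixed=$(printf '%s\n' "$line" \
    | sed -E 's/^(([^[:space:]]+[[:space:]]+){5})root[[:space:]]+/\1/')
printf '%s\n' "$fixed"
```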

Best regards,

Thank you @jihaddaouk !

That did get things running again. However, this cron doesn’t run on the other servers that are working; what is the process that performs this on those servers?

Finally, this now seems to have really spiked my CPU. Hopefully it settles, but after a couple of hours this afternoon the graphs are very choppy. Any thoughts? Please see below:

[email protected]:~$ docker exec -it nms_librenms bash
bash-5.0# crontab -l
# It's recommended not to run this cron anymore - please see librenms.nonroot.cron

33  */6   * * *       /opt/librenms/discovery.php -h all >> /dev/null 2>&1
*/5  *    * * *       /opt/librenms/discovery.php -h new >> /dev/null 2>&1
*/5  *    * * *       /opt/librenms/cronic /opt/librenms/ 16
15   0    * * *       /opt/librenms/ >> /dev/null 2>&1
*    *    * * *       /opt/librenms/alerts.php >> /dev/null 2>&1

Hello there,

Well, there is data. As for the choppy graphs, that depends on the type of activity running on your server.
This cron runs only on the server that hosts LibreNMS. Do you know how to add devices into LibreNMS?

Best regards,