Problems with ceph application

Hi,

I’m trying to enable the ceph application for one of our ceph-systems (ceph01).

If I manually run poller.php from the LibreNMS server I can see some information about ceph,
but it seems like no RRD files are created and no data is collected.

root@nms:/opt/librenms# ls -l rrd/ceph01/*ceph*
ls: cannot access 'rrd/ceph01/*ceph*': No such file or directory
root@nms:/opt/librenms#

ceph01:

  • is already added to LibreNMS
  • data is collected using SNMP
  • the ceph “plugin binary” is available, executable, and added to the snmpd config (see the example extend line after the script output below)
  • the ceph application is enabled in the LibreNMS host settings
  1. working ceph “plugin binary” (localhost @ ceph01)
    root@ceph01:~# /opt/librenms/ceph
    <<<app-ceph>>>
    <poolstats>
    rbd:0:28685588:46994443
    <osdperformance>
    osd.32:19:19
    osd.24:23:23
    osd.23:42:42
    … CUT … CUT … CUT …
    osd.12:30:30
    osd.18:18:18
    osd.15:24:24
    <df>
    c:65894068523008:42978793271296:22915275251712
    rbd:3255411146752:14294266560565:3419753
    root@ceph01:~#
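
    The snmpd entry mentioned in the list above is an ordinary net-snmp “extend” line; assuming the script path shown here, it would look roughly like this in /etc/snmp/snmpd.conf on ceph01:

    extend ceph /opt/librenms/ceph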

  2. Using poller.php (on the LibreNMS server)

root@nms:/opt/librenms# su - librenms
$ ./poller.php -h ceph01 -r -f -d -m unix-agent,applications
LibreNMS Poller
SQL[SELECT version FROM `dbSchema` ORDER BY version DESC LIMIT 1]
SQL[SELECT version()]
===================================
Version info:
Commit SHA: 60a1a02f884becd2676f1f6a8b6531499ff33a05
Commit Date: 1532294700
DB Schema: 255
PHP: 7.0.30-0+deb9u1
MySQL: 10.1.26-MariaDB-0+deb9u1
RRDTool: 1.6.0
SNMP: NET-SNMP 5.7.3
==================================DEBUG!
Updating os_def.cache... Done
Override poller modules: unix-agent, applications
Starting polling run:

SQL[SELECT * FROM `devices` WHERE `disabled` = 0 AND `hostname` = 'ceph01' ORDER BY `device_id` ASC]
SQL[SELECT * FROM devices_attribs WHERE `device_id` = '49']
Hostname: ceph01
Device ID: 49
OS: linux (unix)

.. CUT .... CUT .... CUT ..


#### Load poller module unix-agent ####
.. CUT .... CUT .... CUT ..
[app] => Array
        (
            [ceph] => <poolstats>
rbd:0:33674504:34289802
<osdperformance>
osd.32:12:12
osd.24:16:16
osd.23:25:25
.. CUT .... CUT .... CUT ..
osd.12:0:0
osd.18:1:1
osd.15:9:9
<df>
c:65894068523008:42981709565952:22912358957056
rbd:3254611345408:14294266626101:3419753
        )

Enabling ceph for ceph01 if not yet enabled
SQL[SELECT COUNT(*) FROM `applications` WHERE `device_id` = '49' AND `app_type` = 'ceph']


>> Runtime for poller module 'unix-agent': 1.2702 seconds with 251520 bytes
>> SNMP: [0/0.00s] MySQL: [4/0.04s] RRD: [0/0.00s]
#### Unload poller module unix-agent ####

RRD[create /opt/librenms/rrd/ceph01/poller-perf-unix-agent.rrd --step 300 DS:poller:GAUGE:600:0:U  RRA:AVERAGE:0.5:1:2016 RRA:AVERAGE:0.5:6:1440 RRA:AVERAGE:0.5:24:1440 RRA:AVERAGE:0.5:288:1440  RRA:MIN:0.5:1:720 RRA:MIN:0.5:6:1440     RRA:MIN:0.5:24:775     RRA:MIN:0.5:288:797  RRA:MAX:0.5:1:720 RRA:MAX:0.5:6:1440     RRA:MAX:0.5:24:775     RRA:MAX:0.5:288:797  RRA:LAST:0.5:1:1440 ]
[RRD Disabled]RRD[update /opt/librenms/rrd/ceph01/poller-perf-unix-agent.rrd N:1.2702150344849]
[RRD Disabled]Modules status: Global+ OS  Device

#### Load poller module applications ####
SQL[SELECT * FROM `applications` WHERE `device_id`  = '49']
Ceph Pool: rbd, IOPS: 0, Wr bytes: 33674504, R bytes: 34289802
RRD[create /opt/librenms/rrd/ceph01/app-ceph-17-pool-rbd.rrd --step 300 DS:ops:GAUGE:600:0:U DS:wrbytes:GAUGE:600:0:U DS:rbytes:GAUGE:600:0:U  RRA:AVERAGE:0.5:1:2016 RRA:AVERAGE:0.5:6:1440 RRA:AVERAGE:0.5:24:1440 RRA:AVERAGE:0.5:288:1440  RRA:MIN:0.5:1:720 RRA:MIN:0.5:6:1440     RRA:MIN:0.5:24:775     RRA:MIN:0.5:288:797  RRA:MAX:0.5:1:720 RRA:MAX:0.5:6:1440     RRA:MAX:0.5:24:775     RRA:MAX:0.5:288:797  RRA:LAST:0.5:1:1440 ]
[RRD Disabled]RRD[update /opt/librenms/rrd/ceph01/app-ceph-17-pool-rbd.rrd N:0:33674504:34289802]
[RRD Disabled]Ceph OSD: osd.32, Apply: 12, Commit: 12
RRD[create /opt/librenms/rrd/ceph01/app-ceph-17-osd-osd.32.rrd --step 300 DS:apply_ms:GAUGE:600:0:U DS:commit_ms:GAUGE:600:0:U  RRA:AVERAGE:0.5:1:2016 RRA:AVERAGE:0.5:6:1440 RRA:AVERAGE:0.5:24:1440 RRA:AVERAGE:0.5:288:1440  RRA:MIN:0.5:1:720 RRA:MIN:0.5:6:1440     RRA:MIN:0.5:24:775     RRA:MIN:0.5:288:797  RRA:MAX:0.5:1:720 RRA:MAX:0.5:6:1440     RRA:MAX:0.5:24:775     RRA:MAX:0.5:288:797  RRA:LAST:0.5:1:1440 ]
[RRD Disabled]RRD[update /opt/librenms/rrd/ceph01/app-ceph-17-osd-osd.32.rrd N:12:12]
[RRD Disabled]Ceph OSD: osd.24, Apply: 16, Commit: 16
RRD[create /opt/librenms/rrd/ceph01/app-ceph-17-osd-osd.24.rrd --step 300 DS:apply_ms:GAUGE:600:0:U DS:commit_ms:GAUGE:600:0:U  RRA:AVERAGE:0.5:1:2016 RRA:AVERAGE:0.5:6:1440 RRA:AVERAGE:0.5:24:1440 RRA:AVERAGE:0.5:288:1440  RRA:MIN:0.5:1:720 RRA:MIN:0.5:6:1440     RRA:MIN:0.5:24:775     RRA:MIN:0.5:288:797  RRA:MAX:0.5:1:720 RRA:MAX:0.5:6:1440     RRA:MAX:0.5:24:775     RRA:MAX:0.5:288:797  RRA:LAST:0.5:1:1440 ]
[RRD Disabled]RRD[update /opt/librenms/rrd/ceph01/app-ceph-17-osd-osd.24.rrd N:16:16]
[RRD Disabled]Ceph OSD: osd.23, Apply: 25, Commit: 25
.. CUT .... CUT .... CUT ..
RRD[create /opt/librenms/rrd/ceph01/app-ceph-17-osd-osd.15.rrd --step 300 DS:apply_ms:GAUGE:600:0:U DS:commit_ms:GAUGE:600:0:U  RRA:AVERAGE:0.5:1:2016 RRA:AVERAGE:0.5:6:1440 RRA:AVERAGE:0.5:24:1440 RRA:AVERAGE:0.5:288:1440  RRA:MIN:0.5:1:720 RRA:MIN:0.5:6:1440     RRA:MIN:0.5:24:775     RRA:MIN:0.5:288:797  RRA:MAX:0.5:1:720 RRA:MAX:0.5:6:1440     RRA:MAX:0.5:24:775     RRA:MAX:0.5:288:797  RRA:LAST:0.5:1:1440 ]
[RRD Disabled]RRD[update /opt/librenms/rrd/ceph01/app-ceph-17-osd-osd.15.rrd N:9:9]
[RRD Disabled]Ceph Pool DF: c, Avail: 65894068523008, Used: 42981709565952, Objects: 22912358957056
RRD[create /opt/librenms/rrd/ceph01/app-ceph-17-df-c.rrd --step 300 DS:avail:GAUGE:600:0:U DS:used:GAUGE:600:0:U DS:objects:GAUGE:600:0:U  RRA:AVERAGE:0.5:1:2016 RRA:AVERAGE:0.5:6:1440 RRA:AVERAGE:0.5:24:1440 RRA:AVERAGE:0.5:288:1440  RRA:MIN:0.5:1:720 RRA:MIN:0.5:6:1440     RRA:MIN:0.5:24:775     RRA:MIN:0.5:288:797  RRA:MAX:0.5:1:720 RRA:MAX:0.5:6:1440     RRA:MAX:0.5:24:775     RRA:MAX:0.5:288:797  RRA:LAST:0.5:1:1440 ]
[RRD Disabled]RRD[update /opt/librenms/rrd/ceph01/app-ceph-17-df-c.rrd N:65894068523008:42981709565952:22912358957056]
[RRD Disabled]Ceph Pool DF: rbd, Avail: 3254611345408, Used: 14294266626101, Objects: 3419753
RRD[create /opt/librenms/rrd/ceph01/app-ceph-17-df-rbd.rrd --step 300 DS:avail:GAUGE:600:0:U DS:used:GAUGE:600:0:U DS:objects:GAUGE:600:0:U  RRA:AVERAGE:0.5:1:2016 RRA:AVERAGE:0.5:6:1440 RRA:AVERAGE:0.5:24:1440 RRA:AVERAGE:0.5:288:1440  RRA:MIN:0.5:1:720 RRA:MIN:0.5:6:1440     RRA:MIN:0.5:24:775     RRA:MIN:0.5:288:797  RRA:MAX:0.5:1:720 RRA:MAX:0.5:6:1440     RRA:MAX:0.5:24:775     RRA:MAX:0.5:288:797  RRA:LAST:0.5:1:1440 ]
[RRD Disabled]RRD[update /opt/librenms/rrd/ceph01/app-ceph-17-df-rbd.rrd N:3254611345408:14294266626101:3419753]
[RRD Disabled]SQL[UPDATE `applications` set `app_state` ='OK',`app_status` ='',`timestamp` =NOW() WHERE `app_id` = '17']
SQL[SELECT * FROM `application_metrics` WHERE app_id='17']
: .SQL[UPDATE `application_metrics` set `value` ='33674504',`value_prev` ='21650424' WHERE app_id='17' && metric='pool_rbd_wrbytes']
USQL[UPDATE `application_metrics` set `value` ='34289802',`value_prev` ='47390184' WHERE app_id='17' && metric='pool_rbd_rbytes']
USQL[UPDATE `application_metrics` set `value` ='12',`value_prev` ='18' WHERE app_id='17' && metric='osd_osd.32_apply_ms']
USQL[UPDATE `application_metrics` set `value` ='12',`value_prev` ='18' WHERE app_id='17' && metric='osd_osd.32_commit_ms']
USQL[UPDATE `application_metrics` set `value` ='16',`value_prev` ='15' WHERE app_id='17' && metric='osd_osd.24_apply_ms']
USQL[UPDATE `application_metrics` set `value` ='16',`value_prev` ='15' WHERE app_id='17' && metric='osd_osd.24_commit_ms']
USQL[UPDATE `application_metrics` set `value` ='25',`value_prev` ='19' WHERE app_id='17' && metric='osd_osd.23_apply_ms']
USQL[UPDATE `application_metrics` set `value` ='25',`value_prev` ='19' WHERE app_id='17' && metric='osd_osd.23_commit_ms']
U....SQL[UPDATE `application_metrics` set `value` ='4',`value_prev` ='2' WHERE app_id='17' && metric='osd_osd.30_apply_ms']
USQL[UPDATE `application_metrics` set `value` ='4',`value_prev` ='2' WHERE app_id='17' && metric='osd_osd.30_commit_ms']
USQL[UPDATE `application_metrics` set `value` ='0',`value_prev` ='2' WHERE app_id='17' && metric='osd_osd.29_apply_ms']
USQL[UPDATE `application_metrics` set `value` ='0',`value_prev` ='2' WHERE app_id='17' && metric='osd_osd.29_commit_ms']
USQL[UPDATE `application_metrics` set `value` ='3',`value_prev` ='0' WHERE app_id='17' && metric='osd_osd.28_apply_ms']
.. CUT .... CUT .... CUT ..
USQL[UPDATE `application_metrics` set `value` ='0',`value_prev` ='26' WHERE app_id='17' && metric='osd_osd.12_commit_ms']
USQL[UPDATE `application_metrics` set `value` ='1',`value_prev` ='18' WHERE app_id='17' && metric='osd_osd.18_apply_ms']
USQL[UPDATE `application_metrics` set `value` ='1',`value_prev` ='18' WHERE app_id='17' && metric='osd_osd.18_commit_ms']
USQL[UPDATE `application_metrics` set `value` ='9',`value_prev` ='18' WHERE app_id='17' && metric='osd_osd.15_apply_ms']
USQL[UPDATE `application_metrics` set `value` ='9',`value_prev` ='18' WHERE app_id='17' && metric='osd_osd.15_commit_ms']
USQL[UPDATE `application_metrics` set `value` ='65894068523008',`value_prev` ='2147483647' WHERE app_id='17' && metric='df_c_avail']
USQL[UPDATE `application_metrics` set `value` ='42981709565952',`value_prev` ='2147483647' WHERE app_id='17' && metric='df_c_used']
USQL[UPDATE `application_metrics` set `value` ='22912358957056',`value_prev` ='2147483647' WHERE app_id='17' && metric='df_c_objects']
USQL[UPDATE `application_metrics` set `value` ='3254611345408',`value_prev` ='2147483647' WHERE app_id='17' && metric='df_rbd_avail']
USQL[UPDATE `application_metrics` set `value` ='14294266626101',`value_prev` ='2147483647' WHERE app_id='17' && metric='df_rbd_used']
U.


>> Runtime for poller module 'applications': 0.4944 seconds with -148968 bytes
>> SNMP: [0/0.00s] MySQL: [69/0.48s] RRD: [0/0.00s]
#### Unload poller module applications ####

Did I miss something?

Thanks!

Thomas

Your log shows it attempting to create the files (but you have specified -r), so it looks like it works to me.
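
For example, re-running the same command without -r (a sketch of the same run, nothing else changed) should actually write the files:

./poller.php -h ceph01 -f -d -m unix-agent,applications
ls -l rrd/ceph01/*ceph*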

I’m guessing you didn’t enable the unix agent on the device ceph01?

Thanks murrant.

Using the unix-agent, it’s working now.
I thought I wouldn’t need the agent if I added ceph to the snmpd config :slight_smile:
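
If the script is not already being executed by the agent, it typically needs to go into the agent’s local script directory rather than the snmpd config (default check_mk agent path, assumed here):

cp /opt/librenms/ceph /usr/lib/check_mk_agent/local/ceph
chmod +x /usr/lib/check_mk_agent/local/ceph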

In case someone like me finds this topic via Google, here are the exact steps:

You have to edit the device in the LibreNMS web interface and enable unix-agent in the Modules section.

The link would be:
http://[your_librenms_hostname]/device/device=[device_number]/tab=edit/section=modules/
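
To double-check the agent side from the LibreNMS server, you can query the agent directly (it listens on TCP 6556 by default; hostname and port assumed):

nc -w 5 ceph01 6556 | head -n 40

If the ceph section shows up in that output and the unix-agent module is enabled, the application data (and the RRD files) should appear after the next poller run.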