InnoDB flush log - Re: Performance tuning

Greetings,
I have a LibreNMS install that monitors 1036 devices and 4483 ports, plus a few BGP and OSPF sessions, but nothing heavy there (9 BGP sessions and 8 OSPF nodes). I have gone through and added RRDCached and saw some increase in performance; my poller now completes in its entirety in around 2 minutes on a 5-minute polling interval. The VM currently has 4 cores @ 2.4 GHz and 4 GB of RAM. RAM is not an issue, and the CPU only hits max for a little while each time the poller runs, but the real kicker is my IOPS! Before RRDCached was installed I was maxing out my NAS, and even now with RRDCached I am seeing IOPS in the 250-300 range every 5-minute polling interval.

I want to apply the "innodb_flush_log_at_trx_commit = 0" setting (but with a value of 2) and see if that helps at all, but I don't know where to put it. The performance guide says "under a mysqld group", which I do not understand. I have my.cnf, client.cnf, mysql-clients.cnf and server.cnf, none of which have the line to edit. So am I adding this line, and if so, to which .cnf file?

As always, thanks for the help for someone not very Linux/SQL oriented at all.


Yeah, stick it in my.cnf.

You should already have a [mysqld] section; if not, add one like so:

[mysqld]
innodb_flush_log_at_trx_commit=0

Restart MySQL afterwards.
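For reference: 1 (the default) flushes the log to disk on every commit, 2 writes it on every commit but only flushes to disk about once a second, and 0 both writes and flushes only about once a second, so 0 and 2 trade up to a second of transactions for far fewer disk flushes. After the restart you can confirm the value actually took effect with something like this (the service name depends on whether you run MySQL or MariaDB):

# restart the database so the new [mysqld] setting is read
sudo systemctl restart mariadb

# check the value the running server is actually using
mysql -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"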

Are you using networked storage on your VM platform? I would check atop and see if your CPU is ever waiting on disk.
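Something like this should show it (assuming atop, or iostat from the sysstat package, is installed):

# sample CPU and disk activity every 5 seconds; watch the 'wai' (iowait)
# percentage on the CPU line and the busy % on the DSK lines
atop 5

# or extended per-device stats, also every 5 seconds
iostat -x 5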


That's impressive, all in 2 minutes.


Yeah, 10G iSCSI… and it's not a huge array, so my IOPS are fairly limited. I'm sure the CPU is waiting on disk, because the reality of it is I only have about 300 or so IOPS to spare on a good day, so when I see this VM spike to that every 5 minutes, I also see disk latency go through the roof. Ideally I need a bigger array, but that won't be available to me until Q2-3, so I'm trying to get this down as much as I can.

As for the amazement at the polling only taking 2 minutes… some of the devices are ping-only. :slight_smile: Thanks for the quick replies, all. I'll give this a whirl.

Yeah, we ran into the same issues and ended up with a dedicated server; all is happy now.

Well, setting that to 2 didn't seem to help. Is 0 "better" than 2 in this instance? It might have actually made things slightly worse with the 2 setting; otherwise no change.

What did you use for your dedicated disks?

I can't remember off the top of my head; nothing too fancy, I think some 10k RPM disks. Go for a fast RAID option, and go for the biggest cache memory size in your RAID controller too.

So it took about 2 or 3 polling intervals… but the IOPS are now down to around ~140. I'll keep an eye on it, but that's a big improvement! I wonder why it took a few intervals to take effect?

Hi @James_Urwiller

If you want to lower it a little bit more, one last option would be to lower the number of threads used for polling.
In /etc/cron.d/librenms, check the number of threads given after poller-wrapper.py (see the sketch below).
Lowering it (slowly) will increase the poller time (you could probably accept going up to about a 3-and-a-half-minute polling time), and the load on the storage would be reduced.
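On a default install the poller line in /etc/cron.d/librenms looks roughly like this (the path, redirection, and numbers may differ on yours); the number right after poller-wrapper.py is the thread count to lower:

*/5  *  *  *  *  librenms  /opt/librenms/poller-wrapper.py 16 >> /dev/null 2>&1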

PipoCanaja


That's a great idea… thanks for the suggestion!

Do I have to restart any services for this to take effect? I would assume not, and simply wait for the next poll to take place?

Simply waiting is right. So change it, do something else for 20 minutes (4 polls), and look at the poller stats. Depending on the duration (you should probably stay between 3 minutes and 3 and a half minutes), you can lower it again a little, wait another 20 minutes, and so on.
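If you want a quick check of the duration without waiting on the stats, you could also run one polling pass by hand and time it, something like this (assuming the default /opt/librenms path; 12 threads here is only an example value):

# run a single full polling pass with 12 threads as the librenms user and time it
time sudo -u librenms /opt/librenms/poller-wrapper.py 12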