Linux Memory Pools reporting wrong values

For some reason LibreNMS is reporting high physical and virtual memory usage. It’s a virtual machine running on a UCS server (UCSC-C220-M4S).


Here is the htop report.

discovery.php: https://p.libren.ms/view/1fedebb6
poller.php: https://p.libren.ms/view/bf9defb6
walk: https://p.libren.ms/view/5a330ed4

According to your htop report, 91% of memory is used, so that seems about right.

However, I’m not sure what “Virtual memory” is :confused:

Virtual memory is swap plus physical memory.
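To sanity-check that on the box itself, a minimal look at /proc/meminfo (assuming the graph’s “virtual memory” is physical plus swap, as described above):

```
# Physical and swap totals straight from the kernel
grep -E 'MemTotal|SwapTotal' /proc/meminfo

# Or via free(1): the Mem total plus the Swap total
# is what the graph calls virtual memory
free -m
```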

Thanks for the help, but that seems pretty high for just LibreNMS running?
For some reason my mysqld and rsyslog are taking up a lot of memory; any ideas?
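A quick way to see exactly which processes are holding the memory (a generic check; output columns vary a little between procps versions):

```
# Top memory consumers by resident set size
ps aux --sort=-%mem | head -n 11
```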

How big is your install (number of monitored devices and ports)? Are you using LibreNMS as a syslog collector? If so, what’s your input rate?

Hi, it’s running on a UCS VM: 20 cores, 32 GB RAM, SSDs for storage. I have about 2,000 devices added and 40,000 ports. I am syslogging about 100 devices. I would like to syslog the rest, but as of right now the rate limit in /etc/systemd/journald.conf is commented out. Is it normal for RAM usage to be this high?
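For context, the commented-out settings in question look something like this (illustrative values only; older systemd versions spell the first option RateLimitInterval):

```
# /etc/systemd/journald.conf
[Journal]
RateLimitIntervalSec=30s
RateLimitBurst=10000
```

followed by a systemctl restart systemd-journald to apply.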

I think I’m surprised you’ve managed to get that out of the spec to be, well done on that one.

You should really split MySQL off at this stage though; that will give you some big benefits.

What do you mean by “spec to be”? Should I increase the resources for my VM? Is there a doc on splitting off MySQL, and what big benefits will that give me? Thank you.

Sorry, I should have said “out of the spec, to be honest”, i.e. I’m surprised it’s working.

Splitting MySQL off is just a case of installing MySQL on a new server, dumping the database, importing it into the new server, and changing config.php to use the new DB server.
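As a rough sketch of those steps (database name, credentials, and paths are assumptions; adjust to your install):

```
# On the current LibreNMS host: dump the database
mysqldump -u root -p --single-transaction librenms > librenms.sql

# On the new DB server: create an empty database, then import the dump
mysql -u root -p -e 'CREATE DATABASE librenms'
mysql -u root -p librenms < librenms.sql
```

Then point $config['db_host'] in config.php at the new server, and make sure the librenms MySQL user is allowed to connect from the LibreNMS host’s address.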

So what specs would I need for 2,000 devices if everything is kept on the same server?
And if I split MySQL off to another machine, how many resources will the first and second servers each need?

If it’s working for you then you already know the specs you need; it’s extremely difficult to calculate required resources accurately.

For 3.5k devices we run 12 cores, 32 GB RAM, and 2 x 120 GB SSDs in RAID 1 on physical kit, and it’s fairly under-utilised, so you should be able to cope with half of that, if not less.

Hmm, for some reason the memory usage keeps climbing; I gave it about 128 GB and it has used almost all of it. I’m going to try to split off the database; hopefully that helps. It seems like there are a lot of mysqld processes running and filling up the memory?
Will that be reduced if I put the database on another VM?
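On the mysqld side specifically, the biggest single memory consumer is usually the InnoDB buffer pool, which can be capped in my.cnf; a sketch (the value here is an arbitrary illustration, not a sizing recommendation):

```
# /etc/my.cnf (or a drop-in under /etc/mysql/conf.d/)
[mysqld]
innodb_buffer_pool_size = 8G
```

Restart mysqld afterwards (older versions cannot resize it at runtime).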

Linux always consumes the RAM. Take a look at: http://www.linuxatemyram.com/
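The practical takeaway from that page: judge by what is still available after the kernel’s reclaimable cache, not by “used”:

```
# Memory actually available to applications, after reclaimable cache (kernel 3.14+)
grep MemAvailable /proc/meminfo
```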


People (and by that I mean Windows “admins”) need to be reminded of that more often :smile:


Also, check to see if you have any really large tables, like syslog or something. There was an issue with perf_log not getting cleaned out by daily.sh for a while.
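Something like this will list the biggest tables (assuming the database is named librenms):

```
mysql -u root -p -e "
  SELECT table_name,
         ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
  FROM information_schema.tables
  WHERE table_schema = 'librenms'
  ORDER BY size_mb DESC
  LIMIT 10;"
```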

Will do. I am in the process of moving the SQL database off to another server, as was mentioned earlier. It seems like the poller-wrapper processes keep running.

That’s pretty odd; perhaps it never completes. Try running it by hand and see if it gets stuck.
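For example (path and thread count here are the stock cron defaults; adjust to your install):

```
cd /opt/librenms
# the argument is the number of poller threads
python poller-wrapper.py 16
```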

Nope, it doesn’t seem to get stuck. The only thing that does take a while is discovery-wrapper.py, which takes about an hour to run. It only runs from the default cron.
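For reference, it can be timed directly (assuming the stock path and the cron’s thread count):

```
cd /opt/librenms
# time a full discovery run with the same thread count as the cron entry
time python discovery-wrapper.py 1
```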

An hour? That seems long. So, you should have about 30 instances running at one time.

The 30 instances finished, but it still takes a while. The cron job runs it with 32 threads once a day.
What’s killing me is this: