LibreNMS is slow and crashes after a few minutes or hours

LibreNMS is slow, and after a few minutes or hours it crashes. This started after I upgraded PHP following the tutorial: Upgrading php7.2 to php7.4 in Ubuntu 18.04 LTS.
I can't find an error (I was seeing the same error before that as well). Can anyone help?

./validate.php

Component | Version
--------- | -------
LibreNMS  | 1.70.1-1-ga3635d0b7
DB Schema | 2020_11_02_164331_add_powerstate_enum_to_vminfo (191)
PHP       | 7.4.13
Python    | 3.6.9
MySQL     | 10.1.47-MariaDB-0ubuntu0.18.04.1
RRDTool   | 1.7.0
SNMP      | NET-SNMP 5.7.3

====================================

[OK] Composer Version: 2.0.7
[OK] Dependencies up-to-date.
[OK] Database connection successful
[OK] Database schema correct

What do you mean by "it crashes"? Does the web interface stop displaying anything?
Can you find your LibreNMS, PHP, and web server log files?
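On a default install they are usually in these locations (paths assumed from a standard LibreNMS/nginx setup, adjust if yours differs):

tail -n 100 /opt/librenms/logs/librenms.log
tail -n 100 /var/log/php7.4-fpm.log
tail -n 100 /var/log/nginx/error.log
tail -n 100 /var/log/mysql/error.log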

The web interface stops loading, alert notifications stop, and SSH becomes inaccessible; it only comes back after restarting the server. Yes, I can get the logs, but I didn't find anything unusual in them.

tail -f /var/log/mysql/error.log
2020-12-03 12:58:32 139685891464320 [Note] InnoDB: Waiting for purge to start
2020-12-03 12:58:32 139685891464320 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.49-89.0 started; log sequence number 569955376261
2020-12-03 13:03:06 139685133285120 [Note] InnoDB: Dumping buffer pool(s) not yet started
2020-12-03 13:03:06 139685891464320 [Note] Plugin 'FEEDBACK' is disabled.
2020-12-03 13:03:06 139685891464320 [Note] Recovering after a crash using tc.log
2020-12-03 13:03:06 139685891464320 [Note] Starting crash recovery...
2020-12-03 13:03:06 139685891464320 [Note] Crash recovery finished.
2020-12-03 13:03:07 139685891464320 [Note] Server socket created on IP: '127.0.0.1'.
2020-12-03 13:03:07 139685891464320 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.47-MariaDB-0ubuntu0.18.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 Ubuntu 18.04

tail -f /var/log/php7.4-fpm.log
[03-Dec-2020 11:47:36] NOTICE: fpm is running, pid 1124
[03-Dec-2020 11:47:36] NOTICE: ready to handle connections
[03-Dec-2020 11:47:36] NOTICE: systemd monitor interval set to 10000ms
[03-Dec-2020 11:48:41] WARNING: [pool librenms] server reached pm.max_children setting (5), consider raising it
[03-Dec-2020 12:58:11] NOTICE: fpm is running, pid 1070
[03-Dec-2020 12:58:11] NOTICE: ready to handle connections
[03-Dec-2020 12:58:11] NOTICE: systemd monitor interval set to 10000ms
[03-Dec-2020 13:24:29] WARNING: [pool librenms] server reached pm.max_children setting (5), consider raising it
[03-Dec-2020 13:25:12] WARNING: [pool librenms] server reached pm.max_children setting (5), consider raising it
[03-Dec-2020 13:35:06] WARNING: [pool librenms] server reached pm.max_children setting (5), consider raising it
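
That pm.max_children warning keeps repeating, so I assume the pool is running out of workers. If raising the limit is the right fix, I guess it would look something like this in the pool file (assuming the default /etc/php/7.4/fpm/pool.d/librenms.conf from the install docs; the values are only examples and need to fit the server's RAM):

; /etc/php/7.4/fpm/pool.d/librenms.conf  (path assumed)
[librenms]
pm = dynamic
pm.max_children = 20       ; currently hitting the limit of 5
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10

sudo systemctl restart php7.4-fpm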

tail -f error.log
2020/12/03 13:36:28 [error] 1465#1465: *275 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_perf&legend=yes&height=149&width=320&from=1604334900 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:36:28 [error] 1465#1465: *276 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_perf&legend=yes&height=149&width=320&from=1575477300 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:36:28 [error] 1465#1465: *277 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_modules_perf&legend=yes&height=149&width=320&from=1606926900 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:36:28 [error] 1465#1465: *269 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_modules_perf&legend=yes&height=149&width=320&from=1606408500 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:37:28 [error] 1465#1465: *276 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_modules_perf&legend=yes&height=149&width=320&from=1604334900 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:37:28 [error] 1465#1465: *272 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_modules_perf&legend=yes&height=149&width=320&from=1575477300 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:39:43 [error] 1465#1465: *292 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_modules_perf&legend=yes&height=149&width=320&from=1606926900 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:39:43 [error] 1465#1465: *293 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_modules_perf&legend=yes&height=149&width=320&from=1606408500 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:40:06 [error] 1465#1465: *285 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_modules_perf&legend=yes&height=149&width=320&from=1604334900 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
2020/12/03 13:40:08 [error] 1466#1466: *284 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 170.233.32.28, server: librenms.example.com, request: "GET /graph.php?type=global_poller_modules_perf&legend=yes&height=149&width=320&from=1575477300 HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm-librenms.sock", host: "10.255.90.101", referrer: "http://10.255.90.101/poller/performance
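
Those upstream timeouts presumably mean nginx gave up waiting for PHP-FPM after its default 60 seconds. As a stopgap I could raise fastcgi_read_timeout in the vhost, roughly like this (block and paths assumed from the standard LibreNMS nginx example), though I suspect that only hides whatever is making PHP/MySQL slow:

# in the librenms server {} block
location ~ [^/]\.php(/|$) {
    fastcgi_pass unix:/run/php-fpm-librenms.sock;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    include fastcgi.conf;
    fastcgi_read_timeout 300;   # nginx default is 60s
}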

Looks like your database is crashing. Can you pull up more of that file?

tail -n 250 /var/log/mysql/error.log

> tail -n 250 /var/log/mysql/error.log
> Last time reserved by thread 140005138294528 in file not yet reserved line 0, waiters flag 1
> --Thread 140005119022848 has waited at lock0lock.cc line 5405 for 0.0000 seconds the semaphore:
> Mutex at 0x7f5565e25068 '&lock_sys->mutex', lock var 1
> Last time reserved by thread 140005138294528 in file not yet reserved line 0, waiters flag 1
> --Thread 140004761229056 has waited at lock0lock.cc line 7224 for 0.0000 seconds the semaphore:
> Mutex at 0x7f5565e25068 '&lock_sys->mutex', lock var 1
> Last time reserved by thread 140005138294528 in file not yet reserved line 0, waiters flag 1
> --Thread 140005089109760 has waited at lock0lock.cc line 5405 for 0.0000 seconds the semaphore:
> Mutex at 0x7f5565e25068 '&lock_sys->mutex', lock var 1
> Last time reserved by thread 140005138294528 in file not yet reserved line 0, waiters flag 1
> --Thread 140004500166400 has waited at dict0dict.cc line 1141 for 0.0000 seconds the semaphore:
> Mutex at 0x7f557c0362e8 '&dict_sys->mutex', lock var 1
> Last time reserved by thread 140005117589248 in file not yet reserved line 0, waiters flag 1
> --Thread 140004563076864 has waited at dict0stats_bg.cc line 461 for 0.0000 seconds the semaphore:
> Mutex at 0x7f557c0362e8 '&dict_sys->mutex', lock var 1
> Last time reserved by thread 140005117589248 in file not yet reserved line 0, waiters flag 1
> --Thread 140005091567360 has waited at dict0dict.cc line 1141 for 0.0000 seconds the semaphore:
> Mutex at 0x7f557c0362e8 '&dict_sys->mutex', lock var 1
> Last time reserved by thread 140005117589248 in file not yet reserved line 0, waiters flag 1
> --Thread 140004640077568 has waited at lock0lock.cc line 5405 for 0.0000 seconds the semaphore:
> Mutex at 0x7f5565e25068 '&lock_sys->mutex', lock var 1
> Last time reserved by thread 140005138294528 in file not yet reserved line 0, waiters flag 1
> --Thread 140004909164288 has waited at btr0sea.cc line 1489 for 1.0000 seconds the semaphore:
> X-lock on RW-latch at 0x7f557806a368 '&btr_search_latch_arr[i]'
> a writer (thread id 140004499756800) has reserved it in mode  exclusive
> number of readers 0, waiters flag 1, lock_word: 0
> Last time read locked in file btr0sea.cc line 955
> Last time write locked in file btr0sea.cc line 1489
> Holder thread 0 file not yet reserved line 0
> --Thread 140004535645952 has waited at lock0lock.cc line 5405 for 0.0000 seconds the semaphore:
> Mutex at 0x7f5565e25068 '&lock_sys->mutex', lock var 1
> Last time reserved by thread 140005138294528 in file not yet reserved line 0, waiters flag 1
> --Thread 140004641101568 has waited at lock0lock.cc line 5405 for 0.0000 seconds the semaphore:
> Mutex at 0x7f5565e25068 '&lock_sys->mutex', lock var 1
> Last time reserved by thread 140005138294528 in file not yet reserved line 0, waiters flag 1
> --Thread 140004762253056 has waited at btr0sea.cc line 1375 for 0.0000 seconds the semaphore:
> S-lock on RW-latch at 0x7f557806a368 '&btr_search_latch_arr[i]'
> a writer (thread id 140004499756800) has reserved it in mode  exclusive
> number of readers 0, waiters flag 1, lock_word: 0
> Last time read locked in file btr0sea.cc line 955
> Last time write locked in file btr0sea.cc line 1489
> Holder thread 0 file not yet reserved line 0
> --Thread 140005184837376 has waited at lock0lock.cc line 5405 for 0.0000 seconds the semaphore:
> Mutex at 0x7f5565e25068 '&lock_sys->mutex', lock var 1
> Last time reserved by thread 140005138294528 in file not yet reserved line 0, waiters flag 1
> OS WAIT ARRAY INFO: signal count 23972
> Mutex spin waits 132524, rounds 2480400, OS waits 70885
> RW-shared spins 11865, rounds 307054, OS waits 8575
> RW-excl spins 13056, rounds 204710, OS waits 4988
> Spin rounds per wait: 18.72 mutex, 25.88 RW-shared, 15.68 RW-excl
> FAIL TO OBTAIN LOCK MUTEX, SKIP LOCK INFO PRINTING
> --------
> FILE I/O
> --------
> I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
> I/O thread 1 state: waiting for completed aio requests (log thread)
> I/O thread 2 state: waiting for completed aio requests (read thread)
> I/O thread 3 state: waiting for completed aio requests (read thread)
> I/O thread 4 state: waiting for completed aio requests (read thread)
> I/O thread 5 state: waiting for completed aio requests (read thread)
> I/O thread 6 state: waiting for completed aio requests (write thread)
> I/O thread 7 state: waiting for completed aio requests (write thread)
> I/O thread 8 state: waiting for completed aio requests (write thread)
> I/O thread 9 state: waiting for completed aio requests (write thread)
> Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 1 [1, 0, 0, 0] ,
>  ibuf aio reads: 0, log i/o's:InnoDB: ###### Diagnostic info printed to the standard error stream
>  0, sync i/o's: 0
> Pending flushes (fsync) log: 1; buffer pool: 1
> 5348 OS file reads, 30205 OS file writes, 12943 OS fsyncs
> 0.00 reads/s, 0 avg bytes/read, 1.03 writes/s, 0.11 fsyncs/s
> -------------------------------------
> INSERT BUFFER AND ADAPTIVE HASH INDEX
> -------------------------------------
> Ibuf: size 1, free list len 1538, seg size 1540, 461 merges
> merged operations:
>  insert 98, delete mark 315225, delete 3
> discarded operations:
>  insert 0, delete mark 0, delete 0
> 11.17 hash searches/s, 48.11 non-hash searches/s
> ---
> LOG
> ---
> Log sequence number 569937873994
> Log flushed up to   569937873483
> Pages flushed up to 569936863369
> Last checkpoint at  569936801409
> Max checkpoint age    80826164
> Checkpoint age target 78300347
> Modified age          1010625
> Checkpoint age        1072585
> 0 pending log writes, 0 pending chkp writes
> 10518 log i/o's done, 0.05 log i/o's/second
> ----------------------
> BUFFER POOL AND MEMORY
> ----------------------
> Total memory allocated 140574720; in additional pool allocated 0
> Total memory allocated by read views 93752
> Internal hash tables (constant factor + variable factor)
>     Adaptive hash index 3528288         (2213368 + 1314920)
>     Page hash           139112 (buffer pool 0 only)
>     Dictionary cache    841232  (554768 + 286464)
>     File system         940864  (812272 + 128592)
>     Lock system         394184  (332872 + 61312)
>     Recovery system     0       (0 + 0)
> Dictionary memory allocated 286464
> Buffer pool size        8191
> Buffer pool size, bytes 134201344
> Free buffers            2304
> Database pages          5807
> Old database pages      2123
> Modified db pages       346
> Percent of dirty pages(LRU & free pages): 4.265
> Max dirty pages percent: 75.000
> Pending reads 0
> Pending writes: LRU 0, flush list 17, single page 0
> Pages made young 3435, not young 201236
> 0.00 youngs/s, 0.00 non-youngs/s
> Pages read 4902, created 1231, written 18861
> 0.00 reads/s, 0.00 creates/s, 0.69 writes/s
> Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
> Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
> LRU len: 5807, unzip_LRU len: 0
> I/O sum[49]:cur[1], unzip sum[0]:cur[0]
> --------------
> ROW OPERATIONS
> --------------
> 0 queries inside InnoDB, 0 queries in queue
> 22 read views open inside InnoDB
> 108 RW transactions active inside InnoDB
> 0 RO transactions active inside InnoDB
> 107 out of 1000 descriptors used
> ---OLDEST VIEW---
> Normal read view
> Read view low limit trx n:o 3203213548
> Read view up limit trx id 3203213375
> Read view low limit trx id 3203213839
> Read view individually stored trx ids:
> Read view trx id 3203213375
> Read view trx id 3203213395
> Read view trx id 3203213411
> Read view trx id 3203213413
> Read view trx id 3203213416
> Read view trx id 3203213421
> Read view trx id 3203213422
> Read view trx id 3203213426
> Read view trx id 3203213429
> Read view trx id 3203213432
> Read view trx id 3203213436
> Read view trx id 3203213437
> Read view trx id 3203213439
> Read view trx id 3203213440
> Read view trx id 3203213441
> Read view trx id 3203213442
> Read view trx id 3203213443
> Read view trx id 3203213444
> Read view trx id 3203213452
> Read view trx id 3203213454
> Read view trx id 3203213492
> Read view trx id 3203213498
> Read view trx id 3203213500
> Read view trx id 3203213507
> Read view trx id 3203213516
> Read view trx id 3203213517
> Read view trx id 3203213518
> Read view trx id 3203213522
> Read view trx id 3203213523
> Read view trx id 3203213525
> Read view trx id 3203213526
> Read view trx id 3203213547
> Read view trx id 3203213563
> Read view trx id 3203213572
> Read view trx id 3203213574
> Read view trx id 3203213578
> Read view trx id 3203213579
> Read view trx id 3203213580
> Read view trx id 3203213593
> Read view trx id 3203213626
> Read view trx id 3203213644
> Read view trx id 3203213688
> Read view trx id 3203213709
> Read view trx id 3203213710
> Read view trx id 3203213711
> Read view trx id 3203213712
> Read view trx id 3203213715
> Read view trx id 3203213716
> Read view trx id 3203213717
> Read view trx id 3203213718
> Read view trx id 3203213747
> Read view trx id 3203213750
> Read view trx id 3203213751
> Read view trx id 3203213757
> Read view trx id 3203213761
> Read view trx id 3203213771
> Read view trx id 3203213772
> Read view trx id 3203213773
> Read view trx id 3203213774
> Read view trx id 3203213779
> Read view trx id 3203213797
> Read view trx id 3203213798
> Read view trx id 3203213799
> Read view trx id 3203213802
> Read view trx id 3203213805
> Read view trx id 3203213821
> Read view trx id 3203213824
> Read view trx id 3203213825
> Read view trx id 3203213826
> Read view trx id 3203213828
> Read view trx id 3203213829
> Read view trx id 3203213830
> Read view trx id 3203213831
> Read view trx id 3203213833
> Read view trx id 3203213834
> Read view trx id 3203213835
> Read view trx id 3203213836
> -----------------
> Main thread process no. 1536, id 140004571469568, state: enforcing dict cache limit
> Number of rows inserted 1439, updated 54631, deleted 411377, read 1647539
> 0.06 inserts/s, 0.39 updates/s, 0.00 deletes/s, 85.27 reads/s
> Number of system rows inserted 0, updated 0, deleted 0, read 0
> 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
> ----------------------------
> END OF INNODB MONITOR OUTPUT
> ============================
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: innodb_empty_free_list_algorithm has been changed to legacy because of small buffer pool size. In order to use backoff, increase buffer pool at least up to 20MB.
> 
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: Using mutexes to ref count buffer pool pages
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: The InnoDB memory heap is disabled
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: Compressed tables use zlib 1.2.11
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: Using Linux native AIO
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: Using SSE crc32 instructions
> 2020-12-03 12:58:05 139685891464320 [Note] InnoDB: Initializing buffer pool, size = 128.0M
> 2020-12-03 12:58:06 139685891464320 [Note] InnoDB: Completed initialization of buffer pool
> 2020-12-03 12:58:06 139685891464320 [Note] InnoDB: Highest supported file format is Barracuda.
> 2020-12-03 12:58:06 139685891464320 [Note] InnoDB: Starting crash recovery from checkpoint LSN=569953178616
> 2020-12-03 12:58:14 139685891464320 [Note] InnoDB: Restoring possible half-written data pages from the doublewrite buffer...
> 2020-12-03 12:58:21 139685891464320 [Note] InnoDB: To recover: 390 pages from log
> 2020-12-03 12:58:23 139685891464320 [Note] InnoDB: Starting final batch to recover 353 pages from redo log
> 2020-12-03 12:58:32 139685891464320 [Note] InnoDB: 128 rollback segment(s) are active.
> 2020-12-03 12:58:32 139685891464320 [Note] InnoDB: Waiting for purge to start
> 2020-12-03 12:58:32 139685891464320 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.49-89.0 started; log sequence number 569955376261
> 2020-12-03 13:03:06 139685133285120 [Note] InnoDB: Dumping buffer pool(s) not yet started
> 2020-12-03 13:03:06 139685891464320 [Note] Plugin 'FEEDBACK' is disabled.
> 2020-12-03 13:03:06 139685891464320 [Note] Recovering after a crash using tc.log
> 2020-12-03 13:03:06 139685891464320 [Note] Starting crash recovery...
> 2020-12-03 13:03:06 139685891464320 [Note] Crash recovery finished.
> 2020-12-03 13:03:07 139685891464320 [Note] Server socket created on IP: '127.0.0.1'.
> 2020-12-03 13:03:07 139685891464320 [Note] /usr/sbin/mysqld: ready for connections.
> Version: '10.1.47-MariaDB-0ubuntu0.18.04.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  Ubuntu 18.04

Is it crashing due to lack of memory?

Could be, but at least we know it is MySQL crashing and burning. OP needs to investigate that further.
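
If the OOM killer is taking mysqld out, it usually leaves a trail. Something like the following right after the next crash should show it (standard tools, nothing LibreNMS-specific):

free -h
dmesg -T | grep -iE 'oom|out of memory|killed process'
journalctl -u mariadb --since today | tail -n 50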

I added 2 GB more RAM and the problem stopped. Before that, there were spikes in memory consumption that filled the Linux swap and ended up crashing the server. It's strange that there was no such problem before the PHP update; it seems LibreNMS is somehow consuming more memory than before.

What is your mysql.conf?
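
For reference, these are the InnoDB settings I would look at first, roughly like this in the stock Ubuntu 18.04 MariaDB config (path and values are only a sketch; size the buffer pool to the RAM you actually have free):

# /etc/mysql/mariadb.conf.d/50-server.cnf  (path assumed for the Ubuntu package)
[mysqld]
innodb_buffer_pool_size = 256M       # the crash log above shows the 128M default
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2   # fewer fsyncs, slightly relaxed durability

sudo systemctl restart mariadb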