LibreNMS installer, and what should be default

So this topic is meant for discussion, and the idea behind it was sparked initially by comments in Discord, and then by a look at Longterm Plans · GitHub .

Currently the install docs make provisions for four distros and two different webserver varieties. I am a firm believer in choice, so I like the fact that one can choose.

In Discord some users mentioned that Docker should become the new standard, and from the card above we can see the idea has been considered before. But as can be seen below, @VVelox makes a valid point:


So I have a few questions I would like to ask:

  1. Should Docker be the default installation method, and everything else be secondary?
  2. Should we start including things like RRDCached setup to the default install? Reasoning behind this is that it makes it easier for a user to transition to distributed polling later on if needed, and to my knowledge it does not have any negative impact on “all-in-one” installations. (Other aspects of distributed polling can be added too, this is just an example of one)
  3. If Docker is not the default or preferred installation method, should we start looking at an install script of some kind (e.g. bash) that can automate the installation all the way up to the web installation wizard? This would of course take the OS into consideration to use the correct package manager (yum vs apt, etc.).
  4. Should Apache be dropped altogether in favour of nginx to make things more standardised, without capping capability?
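For question 3, a minimal sketch of the distro-detection step such a script would need. The helper function and the final echo are illustrative, not taken from any existing installer:

```shell
#!/usr/bin/env bash
# Sketch only: pick the right package manager before installing dependencies.
# detect_pkg_manager is a hypothetical helper, not part of LibreNMS.

detect_pkg_manager() {
  if command -v apt-get >/dev/null 2>&1; then
    echo "apt-get"     # Debian/Ubuntu
  elif command -v dnf >/dev/null 2>&1; then
    echo "dnf"         # Fedora / newer RHEL
  elif command -v yum >/dev/null 2>&1; then
    echo "yum"         # older RHEL/CentOS
  else
    echo "unknown"
  fi
}

echo "Would install LibreNMS dependencies with: $(detect_pkg_manager)"
```

The real script would then branch on the result to install PHP, MariaDB, SNMP and the webserver with the right package names for each distro.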

It is a lot to consider, and will (hopefully) provoke some well-thought-out and long answers. But please remember this is a discussion based on the thoughts of only one person at this point.

  • Docker as default
  • Bash or similar script as default
  • None of the above, the current set up is still the best


I think another option could be an Ansible script.

That could also work, like an Ansible Galaxy collection.

  1. No to Docker as default, but a great option to add
  2. Removing friction for a switch to distributed polling sounds great
  3. I would recommend an Ansible role, as it is easier to read than bash and has some other advantages
  4. Only drop Apache if there are technical reasons, as I believe supporting multiple web servers also enforces higher quality - same argument as @VVelox mentioned for Docker in the screenshot above.
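To illustrate the Ansible suggestion, a hedged sketch of what tasks in such a role might look like. The variable name and the package list behind it are invented for the example:

```yaml
# Hypothetical role tasks: ansible.builtin.package picks apt/dnf/yum
# automatically, which removes the yum-vs-apt branching a bash script needs.
- name: Install LibreNMS dependencies
  ansible.builtin.package:
    name: "{{ librenms_packages }}"
    state: present

- name: Ensure the librenms user exists
  ansible.builtin.user:
    name: librenms
    home: /opt/librenms
    system: true
```

The generic `package` module is a big part of why an Ansible role reads more easily than the equivalent bash.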

Containerizing the setup eliminates most user setup issues and allows us to structure the design in such a way that adding third-party tools like Oxidized, syslog-ng and Graylog becomes a simple thing for the user. The negative part is that you are adding work overhead to the project that is strictly not LibreNMS development.

The containerizing project does not need to be a part of LibreNMS deployment; it could be done by a separate team. The new “lnms” config method is a godsend for Docker, because you can configure the whole cluster using jobs. The only thing I can think of that is missing in LibreNMS for it to be Docker “ready” is the ability to create API tokens outside the web GUI.

I’ve moved my setup into Kubernetes (K3s) using a Helm deployment; it takes me about 60 minutes to set up a new LibreNMS environment with all sorts of personal tweaks I prefer to use.

I had some issues installing LibreNMS on Docker with a single Dockerfile and only one .env file (and some .config files) instead of using the whole GitHub project (I’m glad it worked), so I think LibreNMS for Docker should be reduced to a single Dockerfile and some config files that work instantly.

And most importantly, I think a Python or bash script could be created to automate the installation and configuration.
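As a rough illustration of that reduced setup, a compose file along these lines; the image tags and variable names here are assumptions for the sketch, not taken from the official Docker install docs:

```yaml
# Hypothetical minimal stack: LibreNMS plus its two hard dependencies.
services:
  librenms:
    image: librenms/librenms:latest
    ports:
      - "8000:8000"
    env_file: .env        # DB/Redis credentials live here
    depends_on:
      - db
      - redis
  db:
    image: mariadb:10
    environment:
      MARIADB_DATABASE: librenms
      MARIADB_USER: librenms
      MARIADB_PASSWORD: librenms
      MARIADB_ROOT_PASSWORD: changeme
  redis:
    image: redis:7
```

Everything a user has to touch would then live in the single `.env` file.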

It all boils down to one thing: who will maintain it?

My vote goes for keeping the current setup + “promoting” Docker to the same status/level/something. Not sure it’s possible to merge the installation pages though, they are too different and cluttered. Feel free to add a tab for Docker on the first step and simply have a link to the Docker page there.


We need API to work a little differently before we can do this properly.

How come? Don’t understand what the API has to do with it? What am I missing?

Something like this? Improvements/thoughts welcome.

What do you mean exactly? Please give us more detail.

This seems like nice work, well done. Will try and test it out some time.

I will also comment on something you mentioned in the README over there. Not sure I agree with one of your statements, but this is not the place to discuss that.

The API allows LibreNMS to integrate with third-party applications, but right now that configuration needs to be handled manually.

For example, here is my Kubernetes configuration file:

##  Edit this file to configure the monitoring cluster ##
##  whole configuration template can be found under:   ##
##        LibreNMS-Helm/librenms/values.yaml           ##
##  (parent keys below are reconstructed; the file's   ##
##   original nesting was lost when it was pasted)     ##

company: "TEST CO"
TZ: "Atlantic/Reykjavik"
path: "/data/"

librenms:
  FQDN: "nms.test.local"
  volumeSize: "20Gi"
  communities:
    - "community-string-1"
    - "community-string-2"

poller:
  # 1 replica per 100 devices
  replicas: "1"

mariadb:
  volumeSize: "20Gi"
  rootPassword: "fooRootPassword"
  user: "foo"
  password: "bar"

smtp:
  name: "msmtpd"
  FQDN: ""
  port: "587"
  from: "[email protected]"
  auth:
    user: "foo"
    password: "bar"

oxidized:
  FQDN: "ox.test.local"
  token: "API-token-generated-inside-LIBRE"
  # Devices can be grouped by a string in the device NOTE or DESCRIPTION
  default:
    user: "deviceuser"
    password: "devicepassword"
  groups:
    - user: ""
      pass: ""
      string: "/^ox-group-1/"
    - user: ""
      pass: ""
      string: "/^ox-group-2/"
    - user: ""
      pass: ""
      string: "/^ox-group-3/"
    - user: ""
      pass: ""
      string: "/^ox-group-4/"
    - user: ""
      pass: ""
      string: "/^ox-group-5/"
I use this single file to build my whole monitoring cluster (LibreNMS + pollers + syslog-ng + snmptrapd + Oxidized + MariaDB + Redis + rrdcached + msmtpd), but I have to run it twice.

I can’t deploy the Oxidized part because I’m missing the API information, and I can only get that by starting the LibreNMS server first, going into the web GUI to create an API token, then updating the config file (with the API token) and redeploying.

With a proper API we can start doing all sorts of cool integrations with Grafana, Graylog, or an IPAM service.
And that integration would not require the installer to be an expert in multiple fields.
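Once a token does exist, the integration side is just HTTP; a sketch of the call shape, where the host name, token, and the `api_get` helper are all placeholders for the example:

```shell
# Placeholder values; point these at a real LibreNMS instance.
API_HOST="nms.test.local"
API_TOKEN="example-token"

# Small hypothetical helper around the LibreNMS REST API (v0).
# Authentication is a single X-Auth-Token header.
api_get() {
  curl -s -H "X-Auth-Token: $API_TOKEN" "https://$API_HOST/api/v0/$1"
}

# Example: api_get devices   # would list all monitored devices
```

This is exactly why token creation is the bottleneck: every tool in the cluster needs that one header value before it can talk to LibreNMS.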

One thing to consider is that LibreNMS is built on Laravel (well, converted to it), and Laravel has a very well-structured docker-compose that contains most of the stuff LibreNMS needs to run.

@Skylark we had the same problem in our shell script; see our solution below. It is not beautiful, but maybe it helps you.

Create API token

NewToken=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 32 | head -n 1)

Create database

sudo -u librenms bash <<EOF
/opt/librenms/lnms migrate
EOF

Create admin user

su -c 'php /opt/librenms/adduser.php admin Passwort1+ 10' librenms

Create Oxidized user

su -c 'php /opt/librenms/adduser.php oxidized Oxidized1+ 10' librenms

userId=$(mysql -u librenms --password=$dbpass -s -N --database='librenms' -e 'select user_id from users where username="oxidized"')

Add localhost and discover

php /opt/librenms/addhost.php localhost librenms v2c
php /opt/librenms/discovery.php -h localhost

mysql -u librenms --password=$dbpass --database="librenms" <<EOF
INSERT INTO api_tokens (user_id,token_hash,description,disabled) VALUES ($userId,'$NewToken','This token is used by Oxidized',0);
SET TIME_ZONE='+00:00';
ALTER TABLE notifications CHANGE datetime datetime timestamp NOT NULL DEFAULT '1970-01-02 00:00:00';
ALTER TABLE users CHANGE created_at created_at timestamp NOT NULL DEFAULT '1970-01-02 00:00:01';
EOF

After this we create two health checks to test whether the LibreNMS GUI and Oxidized work:

2022-05-19 create check for internal services

curl -X POST -d '{"type":"http","ip":"","desc":"OxidizedService","param":"-E -p 8888 -u \"/nodes.html\""}' -H "X-Auth-Token: $NewToken"
curl -X POST -d '{"type":"http","ip":"","desc":"LibreGui","param":"-E -u \"/login\""}' -H "X-Auth-Token: $NewToken"
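The two curl calls above omit their target URL; for anyone adapting them, building the payload separately keeps the quoting manageable. The endpoint in the commented-out call is an assumption about where such a request would go, not a value from the post:

```shell
# JSON payload for the LibreGui health check; note the escaped inner
# quotes that end up inside the "param" value.
payload='{"type":"http","ip":"","desc":"LibreGui","param":"-E -u \"/login\""}'
echo "$payload"

# Hypothetical full call (base URL is a placeholder):
# curl -s -X POST "https://nms.test.local/api/v0/services/localhost" \
#   -H "X-Auth-Token: $NewToken" -d "$payload"
```

Sending the token in a double-quoted header, as corrected above, matters: inside single quotes `$NewToken` would be passed literally and the request would be rejected.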