Aggregating bandwidth graphs from interfaces on separate devices?

Is it possible to make a graph that aggregates the bandwidth from two interfaces that are on separate devices, e.g. the outbound interfaces on two circuits?

I was trying to make a dashboard and saw the aggregate feature and thought it was this, but I couldn’t figure it out.


Take a look at https://docs.librenms.org/#Extensions/Interface-Description-Parsing/

That may fit what you want

I’m not sure how that gets me a custom graph on a dashboard with two interfaces’ bandwidth combined into one, assuming that’s possible to begin with.

Sounds like exactly what you want.

Change two or more interface descriptions to start with the same prefix, e.g. testing: 123, then go to the URL /iftype/type=testing

(Note: the colon is important in the port description)

This will give you a page with aggregate graph for both interfaces, and each interface individually.

I can get you some examples tomorrow if you want.

Or create a bill with both interfaces added, or check out Single graph, Multiple interfaces.

Oh snap… let me wrap my head around it, see if I can implement it, and I’ll report back.

thank you

Looks legit… I don’t suppose you know how to add a title to this? Something like “Total bandwidth”? Either way, good.

Don’t think so; someone managed to add it to their dashboard via an external image. Not sure if you can put a title on the widget, haven’t tried it. I’m using How to add a custom menu item to categorize aggregates.

Also, you might be interested in showing the Aggregate Totals and Volume Totals, so change your URL like this:

monitoring/graphs/noagg=0/nototal=0/id=89,46/type=multiport_bits_separate

The gift that keeps on giving!!! Thanks.

If only I wasn’t illiterate and could find this out on my own, by reading.

no problem

Just beware if you do it this way: ignore the right-hand column for total volume per interface. Those numbers are incorrect on my instance; I don’t think it was designed to have nototal=0 jammed into the URL bar for combined graphs.

However, doing so will get you a correct aggregate volume at the bottom, and that’s what matters. :upside_down_face:

Ignore this bit

Let me know if it’s the same for you and I’ll try to figure out how to fix it.

Yeah, it’s nice. You were right, the last column is way off.


Cool. Just to double-check: the bottom-right two numbers should be fine, though. It’s just the per-interface figures that are off.

So 1.87 TB and 476.57 GB should be correct. (You can compare them to a single-interface graph, or a bill.)

Wait, I think I can’t do math either. If I get 3.10 Gb per second and that’s showing me 6 hours’ worth:

6 hours × 60 minutes per hour × 60 seconds per minute × 3.10 Gb = 66,960 Gb, which in turn means ~8.37 TB, yet it shows 2.xx TB??

What am I missing?

Check out this converter. Put in your IN Average aggregate of 690.93 Mbps, which equals 0.31 TB/h,

so 0.31 TB × 6 = 1.86 TB (which, including the decimals, rounds up to your 1.87 TB data volume figure). The key is that volume comes from the average rate; your 3.10 Gb/s figure is the peak, which is why your estimate came out so high.

http://www.kylesconverter.com/data-bandwidth/megabits-per-second-to-terabytes-per-hour
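The conversion above can be sketched in a few lines (figures taken from the thread; decimal units assumed, 1 TB = 10^12 bytes):

```python
# Volume = average bit rate x duration, converted bits -> bytes -> TB.
# Uses the thread's figures: 690.93 Mbps average, 3.10 Gbps peak, 6 hours.
SECONDS = 6 * 3600

def volume_tb(rate_bits_per_sec: float, seconds: float) -> float:
    """Data volume in decimal terabytes for a given average bit rate."""
    return rate_bits_per_sec * seconds / 8 / 1e12

avg_volume = volume_tb(690.93e6, SECONDS)   # ~1.87 TB, matches the graph total
peak_volume = volume_tb(3.10e9, SECONDS)    # ~8.37 TB, the mistaken estimate
print(round(avg_volume, 2), round(peak_volume, 2))  # 1.87 8.37
```

Using the peak rate instead of the average overstates the volume by roughly the peak-to-average ratio, which is exactly the 8.37 TB vs 1.87 TB discrepancy above.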

Two other options you may want to consider: you can create “bills” and add whatever interfaces you like to them, or set up ports with specific tags, similar to what Chas has suggested, which allows them to be marked as transit, peering, or core interfaces in your config file.

Sorted; looks like you need to wait for a poll for the tag to be picked up.

This looks ideal. I’ve modified the interface descriptions as suggested (I actually used the example here, testing: 123), then hit the URL http://librenms/iftype/type=testing but see this:

Total Graph for ports of type : Testing
None found.

I’ve obviously missed something; the ‘Description Parsing’ help page doesn’t make much sense to me :frowning:

If you click into the relevant interface in LibreNMS do you see “testing: 123” as the port description?

Double-check that there is no space before the colon and a space after the colon, so testing: 123, not testing : 123.
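As an illustration of that spacing rule (a hypothetical regex sketch, not LibreNMS’s actual parser):

```python
import re

# Hypothetical check mirroring the spacing rule above: the tag is followed
# immediately by a colon, then a space, then the rest of the description.
TAG_RE = re.compile(r'^(?P<type>\S+): (?P<descr>.+)$')

def parse_tag(description):
    """Return (type, remainder) if the description matches, else None."""
    m = TAG_RE.match(description)
    return (m.group('type'), m.group('descr')) if m else None

print(parse_tag("testing: 123"))   # ('testing', '123')
print(parse_tag("testing : 123"))  # None -- space before the colon breaks it
```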

What device are you trying this on?

Otherwise nothing looks wrong with what you’re doing and the URL is correct.

You will need to wait for polling to complete for the database to be updated.


All sorted now; I didn’t know you had to wait for a polling cycle for the tags to be picked up.

Just bringing this up again: I’m trying to fix generic_multi_bits_separated.inc.php, but it would help if someone can confirm they also see the same issues.

I believe this is the bad data in the aggregate graph view with noagg=0 and nototal=0:


Problem 1:
Per-line aggregate Total figures can be corrected if divided by 8 (this looks to be because generic_multi_bits_separated.inc.php is multiplying to get bytes, but for some reason is also double-multiplying the totals).
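A minimal sketch of what that divide-by-8 symptom looks like (made-up numbers, not the actual generic_multi_bits_separated.inc.php logic): RRDs store octets per second, the graph multiplies by 8 to plot bits per second, and the per-line Total appears to sum the already-multiplied bits value, leaving it 8× too large.

```python
# Hypothetical illustration of Problem 1 (not the actual LibreNMS code).
# RRDs store octets/sec; the graph multiplies by 8 to plot bits/sec.
bytes_per_sec = 86.4e6        # made-up average rate from an RRD
seconds = 6 * 3600            # a 6-hour graph window

plot_rate = bytes_per_sec * 8             # correct: bits/sec for plotting
correct_total = bytes_per_sec * seconds   # volume total, in bytes
buggy_total = plot_rate * seconds         # bits mistakenly summed as bytes

print(buggy_total / correct_total)  # 8.0 -- dividing the shown total by 8 fixes it
```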

Problem 2:
MAX peaks are not preserved in the aggregate view; this is more obvious in a Year/Two-year view, compared with a single graph’s Year/Two-year view.

The aggregate Maximum must be using AVERAGE instead of MAX; MAX is never fetched in the DEF. I believe MAX should be fetched to preserve peak traffic, like generic_data.inc.php does.
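The effect described above can be illustrated with a small sketch (invented sample values, not LibreNMS code): when coarse historic consolidation uses AVERAGE, short bursts are flattened, while a MAX consolidation preserves them.

```python
# Illustrative sketch of Problem 2 (invented values, not LibreNMS code):
# when 5-minute samples are consolidated into coarser historic rows,
# an AVERAGE consolidation flattens short bursts, while MAX keeps them.
samples = [100, 120, 3100, 110, 95, 105]  # Mbps in one consolidation window

avg_row = sum(samples) / len(samples)  # what an AVERAGE DEF would fetch
max_row = max(samples)                 # what a MAX DEF would fetch

print(avg_row)  # 605.0 -- the 3100 Mbps burst has vanished
print(max_row)  # 3100 -- peak preserved, as ports_bits does per interface
```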


Any news on Problem 2? We would very much like to have peak traffic shown in aggregated graphs…

Hmm still looking into Problem 2 :stuck_out_tongue:

Historic max peak data is shown correctly on individual interfaces (ports_bits), and LibreNMS preserves max peaks through RRD sampling over time… which is awesome :smiley:

But for historic aggregates (multiport_bits), the max peak is incorrect (it looks to me like it’s actually displaying the average). I think this is just a code issue in multiport_bits that needs changing. The last month is fine, but anything older looks to be incorrect. You can compare this by checking an aggregate graph, and then the individual graph inside that aggregate, 6 months ago.