You should have an installed Linux server running one of the supported operating systems. Make sure you select your server's OS in the tabbed options below. The choice of web server is your preference; NGINX is recommended.
Connect to the server command line and follow the instructions below.
Note
These instructions assume you are the root user. If you are not, prepend sudo to the shell commands (the ones that aren't at mysql> prompts) or temporarily become a user with root privileges with sudo -s or sudo -i.
Please note that the minimum supported PHP version is 8.1.
```bash
apt install software-properties-common
add-apt-repository universe
add-apt-repository ppa:ondrej/php
```
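A quick way to confirm the installed PHP meets the 8.1 minimum is a `sort -V` comparison. This is a minimal sketch (the `version_ok` helper is ours, not part of LibreNMS); substitute the output of `php -r 'echo PHP_MAJOR_VERSION . "." . PHP_MINOR_VERSION;'` for the hard-coded value:

```shell
# version_ok MIN CUR — succeeds when CUR >= MIN, using GNU sort -V
# version ordering. The 8.2 below is a placeholder for your real
# PHP version.
version_ok() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if version_ok "8.1" "8.2"; then
    echo "PHP 8.2 is supported"
else
    echo "PHP 8.2 is too old; 8.1 or newer is required"
fi
```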
# Using the API

## Versioning
Versioning an API is a minefield, and we looked at numerous options for how to do this.
We have currently settled on using versioning within the API endpoint itself: /api/v0. As the API is new and still in active development, we decided that v0 would be the best starting point to indicate it's in development.
To access any of the API endpoints you will be required to authenticate using a token. Tokens can be created directly from within the LibreNMS web interface by going to /api-access/.
Click on 'Create API access token'.
Select the user you would like to generate the token for.
Whilst this documentation will describe and show examples of the endpoints, we've designed the API so you should be able to traverse through it without knowing any of the available API routes.
Input to the API is done in three different ways, sometimes a combination of two or three of these.
Passing parameters via the API route. For example, when obtaining a device's details you will pass the hostname of the device in the route: /api/v0/devices/:hostname.
Passing parameters via the query string. For example, you can list all devices on your install but limit the output to devices that are currently down: /api/v0/devices?type=down
Passing data in via JSON. This will mainly be used when adding or updating information via the API, for instance adding a new device:
```bash
curl -X POST -d '{"hostname":"localhost.localdomain","version":"v1","community":"public"}' \
  -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/devices
```
devices: This is either an array of device IDs or -1 for a global rule.
builder: The rule, which should be in the format entity.condition value (i.e. devices.status != 0 for devices marked as down). It must be JSON encoded in the format rules are currently stored.
severity: The severity level the alert will be raised against: Ok, Warning, Critical.
disabled: Whether the rule will be disabled or not; 0 = enabled, 1 = disabled.
count: The number of polling runs that must occur before an alert will trigger.
delay: When to start alerting. The value is stored in seconds, but you can specify minutes, hours or days by using 5 m, 5 h, 5 d for each one.
interval: How often to re-issue notifications while this alert is active; 0 means notify once. The value is stored in seconds, but you can specify minutes, hours or days by using 5 m, 5 h, 5 d for each one.
mute: If mute is enabled then an alert will never be sent but will show up in the Web UI (true or false).
invert: This would invert the rule's check.
name: This is the name of the rule and is mandatory.
notes: Some informal notes for this rule.
Example:

```bash
curl -X POST -d '{"devices":[1,2,3], "name": "testrule", "builder":{"condition":"AND","rules":[{"id":"devices.hostname","field":"devices.hostname","type":"string","input":"text","operator":"equal","value":"localhost"}],"valid":true},"severity": "critical","count":15,"delay":"5 m","interval":"5 m","mute":false,"notes":"This a note from the API"}' \
  -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/rules
```
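The delay and interval strings above are stored as seconds; a hypothetical converter (the `to_seconds` function is ours for illustration, not a LibreNMS utility) makes the mapping concrete:

```shell
# Sketch of how "5 m" / "5 h" / "5 d" strings map to the seconds
# LibreNMS stores for delay and interval. A bare number is taken as
# seconds already.
to_seconds() {
    num=${1%% *}
    unit=${1##* }
    case $unit in
        m) echo $(( num * 60 )) ;;
        h) echo $(( num * 3600 )) ;;
        d) echo $(( num * 86400 )) ;;
        *) echo "$num" ;;
    esac
}

to_seconds "5 m"   # 300
to_seconds "5 h"   # 18000
to_seconds "5 d"   # 432000
```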
rule_id: You must specify the rule_id to edit an existing rule; if this is absent, a new rule will be created.
devices: This is either an array of device IDs or -1 for a global rule.
builder: The rule, which should be in the format entity.condition value (i.e. devices.status != 0 for devices marked as down). It must be JSON encoded in the format rules are currently stored.
severity: The severity level the alert will be raised against: Ok, Warning, Critical.
disabled: Whether the rule will be disabled or not; 0 = enabled, 1 = disabled.
count: The number of polling runs that must occur before an alert will trigger.
delay: When to start alerting. The value is stored in seconds, but you can specify minutes, hours or days by using 5 m, 5 h, 5 d for each one.
interval: How often to re-issue notifications while this alert is active; 0 means notify once. The value is stored in seconds, but you can specify minutes, hours or days by using 5 m, 5 h, 5 d for each one.
mute: If mute is enabled then an alert will never be sent but will show up in the Web UI (true or false).
invert: This would invert the rule's check.
name: This is the name of the rule and is mandatory.
notes: Some informal notes for this rule.
Example:

```bash
curl -X PUT -d '{"rule_id":1,"device_id":"-1", "name": "testrule", "builder":{"condition":"AND","rules":[{"id":"devices.hostname","field":"devices.hostname","type":"string","input":"text","operator":"equal","value":"localhost"}],"valid":true},"severity": "critical","count":15,"delay":"5 m","interval":"5 m","mute":false,"notes":"This a note from the API"}' \
  -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/rules
```
Retrieve the data used to draw a graph so it can be rendered in an external system.
Route: /api/v0/bills/:id/graphdata/:graph_type
Input:
The reducefactor parameter is used to reduce the number of data points. Billing data has 5 minute granularity, so requesting a graph for a long time period will result in many data points. If not supplied, it will be automatically calculated. A reducefactor of 1 means return all items, 2 means return half of the items, etc.
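As a sketch of the arithmetic (the 500-point target here is an arbitrary choice of ours, not a LibreNMS default): 5-minute granularity means one point per 300 seconds, so a 30-day graph holds 8640 points, and a reducefactor of 18 trims that to roughly 500:

```shell
# Estimate a reducefactor that caps a 30-day bill graph at ~500
# points. The target of 500 is a placeholder, not a LibreNMS value.
period_seconds=$(( 30 * 24 * 3600 ))   # 30-day window
points=$(( period_seconds / 300 ))     # one point per 5 minutes
target=500
reducefactor=$(( (points + target - 1) / target ))   # ceiling division
echo "$reducefactor"
```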
If you send an existing bill_id, the call replaces all values it receives. For example, if you send 2 ports it will delete the existing ports and add the 2 new ports. So to add ports you have to get the current ports first and add them to your update call.
name is the name of the device group, which can be obtained using get_devicegroups. Please ensure that the name is URL encoded if it needs to be (i.e. Linux Servers would need to be URL encoded).
Input (JSON):
name: optional - The name of the device group.
type: optional - Should be static or dynamic. Setting this to static requires that the devices input be provided.
desc: optional - Description of the device group.
rules: required if type == dynamic - A set of rules to determine which devices should be included in this device group.
devices: required if type == static - A static list of devices that should be included in this group.
name is the name of the device group, which can be obtained using get_devicegroups. Please ensure that the name is URL encoded if it needs to be (i.e. Linux Servers would need to be URL encoded).
Input (JSON):
full: set to any value to return all data for the devices in a given group
title: optional - Some title for the Maintenance. Will be replaced with the device group name if omitted.
notes: optional - Some description for the Maintenance.
start: optional - Start time of the Maintenance in full format Y-m-d H:i:00, e.g. 2022-08-01 22:45:00. The current system time now() will be used if omitted.
duration: required - Duration of the Maintenance in format H:i / Hrs:Mins, e.g. 02:00.
Example with start time:
```bash
curl -H 'X-Auth-Token: YOURAPITOKENHERE' \
  -X POST https://librenms.org/api/v0/devicegroups/Cisco%20switches/maintenance/ \
  --data-raw '
{
    "title":"Device group Maintenance",
    "notes":"A 2 hour Maintenance triggered via API with start time",
    "start":"2022-08-01 08:00:00",
    "duration":"2:00"
}
'
```
Output:
```json
{
    "status": "ok",
    "message": "Device group Cisco switches (2) will begin maintenance mode at 2022-08-01 22:45:00 for 2:00h"
}
```
Example with no start time:
```bash
curl -H 'X-Auth-Token: YOURAPITOKENHERE' \
  -X POST https://librenms.org/api/v0/devicegroups/Cisco%20switches/maintenance/ \
  --data-raw '
{
    "title":"Device group Maintenance",
    "notes":"A 2 hour Maintenance triggered via API with no start time",
    "duration":"2:00"
}
'
```
Output:
```json
{
    "status": "ok",
    "message": "Device group Cisco switches (2) moved into maintenance mode for 2:00h"
}
```
## Add devices to group
Add devices to a device group.
Route: /api/v0/devicegroups/:name/devices
name is the name of the device group, which can be obtained using get_devicegroups. Please ensure that the name is URL encoded if it needs to be (i.e. Linux Servers would need to be URL encoded).
Input (JSON):
devices: required - A list of devices to be added to the group.
## Remove devices from group
Removes devices from a device group.
Route: /api/v0/devicegroups/:name/devices
name is the name of the device group, which can be obtained using get_devicegroups. Please ensure that the name is URL encoded if it needs to be (i.e. Linux Servers would need to be URL encoded).
Input (JSON):
devices: required - A list of devices to be removed from the group.
Get a particular health class graph for a device. If you provide a sensor_id as well, a single sensor graph will be provided; if no sensor_id value is provided, you will be sent a stacked sensor graph.
Get a particular wireless class graph for a device. If you provide a sensor_id as well, a single sensor graph will be provided; if no sensor_id value is provided, you will be sent a stacked wireless graph.
Get information about a particular port for a device.
Route: /api/v0/devices/:hostname/ports/:ifname
hostname can be either the device hostname or id
ifname can be any of the interface names for the device, which can be obtained using get_port_graphs. Please ensure that the ifname is URL encoded if it needs to be (i.e. Gi0/1/0 would need to be URL encoded).
Input:
columns: Comma separated list of columns you want returned.
ifname can be any of the interface names for the device, which can be obtained using get_port_graphs. Please ensure that the ifname is URL encoded if it needs to be (i.e. Gi0/1/0 would need to be URL encoded).
type is the port type you want the graph for, you can request a list of ports for a device with get_port_graphs.
Input:
from: This is the date you would like the graph to start - See http://oss.oetiker.ch/rrdtool/doc/rrdgraph.en.html for more information.
to: This is the date you would like the graph to end - See http://oss.oetiker.ch/rrdtool/doc/rrdgraph.en.html for more information.
width: The graph width, defaults to 1075.
height: The graph height, defaults to 300.
ifDescr: If this is set to true then we will use ifDescr to lookup the port instead of ifName. Pass the ifDescr value you want to search as you would ifName.
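Since interface names like Gi0/1/0 contain slashes, they must be URL encoded before being placed in the route. A quick sketch using python3's urllib (the route shown in the comment uses a placeholder hostname):

```shell
# URL-encode an interface name so the / becomes %2F before using it
# in /api/v0/devices/:hostname/ports/:ifname.
ifname='Gi0/1/0'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$ifname")
echo "$encoded"
```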
title: optional - Some title for the Maintenance. Will be replaced with the hostname if omitted.
notes: optional - Some description for the Maintenance. Will also be added to the device notes if the user preference "Add schedule notes to devices notes" is set.
start: optional - Start time of the Maintenance in full format Y-m-d H:i:00, e.g. 2022-08-01 22:45:00. The current system time now() will be used if omitted.
duration: required - Duration of the Maintenance in format H:i / Hrs:Mins, e.g. 02:00.
Example with start time:
```bash
curl -H 'X-Auth-Token: YOURAPITOKENHERE' \
  -X POST https://librenms.org/api/v0/devices/localhost/maintenance/ \
  --data-raw '
{
    "title":"Device Maintenance",
    "notes":"A 2 hour Maintenance triggered via API with start time",
    "start":"2022-08-01 08:00:00",
    "duration":"2:00"
}
'
```
Output:
```json
{
    "status": "ok",
    "message": "Device localhost (1) will begin maintenance mode at 2022-08-01 22:45:00 for 2:00h"
}
```
Example with no start time:
```bash
curl -H 'X-Auth-Token: YOURAPITOKENHERE' \
  -X POST https://librenms.org/api/v0/devices/localhost/maintenance/ \
  --data-raw '
{
    "title":"Device Maintenance",
    "notes":"A 2 hour Maintenance triggered via API with no start time",
    "duration":"2:00"
}
'
```
Output:
```json
{
    "status": "ok",
    "message": "Device localhost (1) moved into maintenance mode for 2:00h"
}
```
Add a new device. Most fields are optional. You may omit the SNMP credentials to attempt each system credential in order. See snmp.version, snmp.community, and snmp.v3.
To guarantee the device is added, use force_add. This will skip checks for duplicate devices and SNMP reachability, but not for a duplicate hostname.
Route: /api/v0/devices
Input (JSON):
Fields:
hostname (required): device hostname or IP
display: A string to display as the name of this device, defaults to hostname (or device_display_default setting). May be a simple template using replacements: {{ $hostname }}, {{ $sysName }}, {{ $sysName_fallback }}, {{ $ip }}
snmpver: SNMP version to use: v1, v2c or v3. During checks the detection order is v2c, v3, v1.
port: SNMP port (defaults to port defined in config).
transport: SNMP protocol (udp,tcp,udp6,tcp6) Defaults to transport defined in config.
port_association_mode: method to identify ports: ifIndex (default), ifName, ifDescr, ifAlias
poller_group: This is the poller_group id used for distributed poller setup. Defaults to 0.
location or location_id: set the location by text or location id
Options:
force_add: Skip all checks and attempts to detect credentials. Add the device as given directly to the database.
ping_fallback: if snmp checks fail, add the device as ping only instead of failing
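Before POSTing, it can be worth sanity-checking the JSON body. A sketch that pipes a minimal add-device payload (hostname and community are placeholders) through python3's json.tool, which exits non-zero on malformed JSON:

```shell
# Validate a minimal add-device payload before sending it to
# /api/v0/devices. All values here are placeholders.
body='{"hostname":"server1.example.com","snmpver":"v2c","community":"public"}'
printf '%s' "$body" | python3 -m json.tool
```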
Update a device port notes field in the devices_attrs database.
Route: /api/v0/devices/:hostname/port/:portid
hostname can be either the device hostname or id
portid needs to be the port unique id (int).
Input (JSON):
notes: The string data to populate on the port notes field.
Examples:
```bash
curl -X PATCH -d '{"notes": "This port is in a scheduled maintenance with the provider."}' \
  -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/devices/localhost/port/5
```
Output:
```json
[
    {
        "status": "ok",
        "message": "Port notes field has been updated"
    }
]
```
```bash
curl -X PATCH -d '{"field": ["notes","purpose"], "data": ["This server should be kept online", "For serving web traffic"]}' \
  -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/devices/localhost
```
Output:
```json
[
    {
        "status": "ok",
        "message": "Device fields have been updated"
    }
]
```
Retrieve the inventory for a device. If you call this without any parameters then you will only get part of the inventory. This is because a lot of devices nest each component, for instance you may initially have the chassis, within this the ports - 1 being an sfp cage, then the sfp itself. The way this API call is designed is to enable a recursive lookup. The first call will retrieve the root entry, included within this response will be entPhysicalIndex, you can then call for entPhysicalContainedIn which will then return the next layer of results. To retrieve all items together, see get_inventory_for_device.
Route: /api/v0/inventory/:hostname
hostname can be either the device hostname or the device id
Input:
entPhysicalClass: This is used to restrict the class of the inventory, for example you can specify chassis to only return items in the inventory that are labelled as chassis.
entPhysicalContainedIn: This is used to retrieve items within the inventory assigned to a previous component, for example specifying the chassis (entPhysicalIndex) will retrieve all items where the chassis is the parent.
Retrieve the flattened inventory for a device. This retrieves all inventory items for a device regardless of their structure, and may be more useful for devices with nested components.
Route: /api/v0/inventory/:hostname/all
hostname can be either the device hostname or the device id
Accepts any JSON messages and passes them on for further syslog processing. Single messages or an array of multiple messages are accepted. See Syslog for more details and Logstash integration.
name is the name of the port group, which can be obtained using get_port_groups. Please ensure that the name is URL encoded if it needs to be (i.e. Linux Servers would need to be URL encoded).
Params:
full: set to any value to return all data for the devices in a given group
To get started, you first need some alert rules which will react to changes with your devices before raising an alert.
Creating alert rules
After that you also need to tell LibreNMS how to notify you when an alert is raised, this is done using Alert Transports.
Configuring alert transports
The next step is not strictly required, but most people find it useful: creating custom alert templates will help you get the most out of the alert system. Whilst we include a default template, it is limited in the data that you will receive in the alerts.
This column provides you visibility on the status of the alert:
This alert is currently active and sending alerts. Click this icon to acknowledge the alert.
This alert is currently acknowledged until the alert clears. Click this icon to un-acknowledge the alert.
This alert is currently acknowledged until the alert worsens or gets better, at which stage it will be automatically unacknowledged and alerts will resume. Click this icon to un-acknowledge the alert.
This column will allow you access to the acknowledge/unacknowledge notes for this alert.
# Creating a new Transport

## File location
All transports are located in LibreNMS\Alert\Transport and the files are named after the Transport name, e.g. Discord.php for Discord.
The following functions are required for a new transport to pass the unit tests:
deliverAlert() - This is the function called within alerts to invoke the transport. Here you should do any post-processing of the transport config to get it ready for use.
contact$Transport() - This is named after the transport, so for Discord it would be contactDiscord(). This is what actually interacts with the 3rd party API, invokes the mail command, or whatever you want your alert to do.
configTemplate() - This is used to define the form that will accept the transport config in the web UI, and then what data should be validated and how. Validation is done using Laravel validation.
The following function is not required for new Transports and is for legacy reasons only: deliverAlertOld().
Please don't forget to update the Transport file to include details of your new transport.
A table should be provided to indicate the form values that we ask for, with examples. E.g.:

| Config | Example |
|---|---|
| Discord URL | https://discordapp.com/api/webhooks/4515489001665127664/82-sf4385ysuhfn34u2fhfsdePGLrg8K7cP9wl553Fg6OlZuuxJGaa1d54fe |
| Options | username=myname |

# Device Dependencies
It is possible to set one or more parents for a device. The aim for that is, if all parent devices are down, alert contacts will not receive redundant alerts for dependent devices. This is very useful when you have an outage, say in a branch office, where normally you'd receive hundreds of alerts, but when this is properly configured, you'd only receive an alert for the parent hosts.
There are three ways to configure this feature. The first one is from the general settings of a device. The other two can be done in the 'Device Dependencies' item under the 'Devices' menu. On this page, you can see all devices with their parents. Clicking on the 'bin' icon will clear the dependency setting. Clicking on the 'pen' icon will let you edit or change the current setting for the chosen device. There's also a 'Manage Device Dependencies' button at the top. This will let you set parents for multiple devices at once.
For an intro on getting started with Device Dependencies, take a look at our YouTube video
Entities, as described earlier, are based on the table and column names within the database. If you are unsure of what entity you want, have a browse around inside MySQL using show tables and desc <tablename>.
Below are some common entities that you can use within the alerting system. This list is not exhaustive and you should look at the MySQL database schema for the full list.
## Devices

| Entity | Description |
|---|---|
| devices.hostname | The device hostname |
| devices.sysName | The device sysName |
| devices.sysDescr | The device sysDescr |
| devices.hardware | The device hardware |
| devices.version | The device os version |
| devices.location | The device location |
| devices.status | The status of the device, 1 |
| devices.status_reason | The reason the device was detected as down (icmp or snmp) |
| devices.ignore | If the device is ignored this will be set to 1 |
| devices.disabled | If the device is disabled this will be set to 1 |
| devices.last_polled | The last polled datetime (yyyy-mm-dd hh:mm:ss) |
| devices.type | The device type such as network, server, firewall, etc. |

## BGP Peers

| Entity | Description |
|---|---|
| bgpPeers.astext | This is the description of the BGP Peer |
| bgpPeers.bgpPeerIdentifier | The IP address of the BGP Peer |
| bgpPeers.bgpPeerRemoteAs | The AS number of the BGP Peer |
| bgpPeers.bgpPeerState | The operational state of the BGP session |
| bgpPeers.bgpPeerAdminStatus | The administrative state of the BGP session |
| bgpPeers.bgpLocalAddr | The local address of the BGP session |

## IPSec Tunnels

| Entity | Description |
|---|---|
| ipsec_tunnels.peer_addr | The remote VPN peer address |
| ipsec_tunnels.local_addr | The local VPN address |
| ipsec_tunnels.tunnel_status | The VPN tunnel's operational status |

## Memory pools

| Entity | Description |
|---|---|
| mempools.mempool_type | The memory pool type such as hrstorage, cmp and cemp |
| mempools.mempool_descr | The description of the pool such as Physical memory, Virtual memory and System memory |
| mempools.mempool_perc | The used percentage of the memory pool |

## Ports

| Entity | Description |
|---|---|
| ports.ifDescr | The interface description |
| ports.ifName | The interface name |
| ports.ifSpeed | The port speed in bps |
| ports.ifHighSpeed | The port speed in mbps |
| ports.ifOperStatus | The operational status of the port (up or down) |
| ports.ifAdminStatus | The administrative status of the port (up or down) |
| ports.ifDuplex | Duplex setting of the port |
| ports.ifMtu | The MTU setting of the port |

## Processors

| Entity | Description |
|---|---|
| processors.processor_usage | The usage of the processor as a percentage |
| processors.processor_descr | The description of the processor |

## Storage

| Entity | Description |
|---|---|
| storage.storage_descr | The description of the storage |
| storage.storage_perc | The usage of the storage as a percentage |

## Health / Sensors

| Entity | Description |
|---|---|
| sensors.sensor_desc | The sensor's description |
| sensors.sensor_current | The current sensor value |
| sensors.sensor_prev | The previous sensor value |
| sensors.lastupdate | The sensor's last updated datetime stamp |

# Macros
Macros are shorthands to either a portion of a rule or pure SQL, enhanced with placeholders.
You can define your own macros in your config.php.
## Ports now down (Boolean)
Entity: ports.ifOperStatus != ports.ifOperStatus_prev AND ports.ifOperStatus_prev = "up" AND ports.ifAdminStatus = "up"
Description: Ports that were previously up and have now gone down.
Example: macros.port_now_down = 1
## Port has xDP neighbour (Boolean)
Entity: %macros.port AND %links.local_port_id = %ports.port_id
Description: Ports that have an xDP (lldp, cdp, etc) neighbour.
Example: macros.port_has_xdp_neighbours = 1
## Port has xDP neighbour already known in LibreNMS (Boolean)
Entity: %macros.port_has_neighbours AND (%links.remote_port_id IS NOT NULL)
Description: Ports that have an xDP (lldp, cdp, etc) neighbour that is already known in LibreNMS.
Rules must consist of at least 3 elements: An Entity, a Condition and a Value. Rules can contain braces and Glues. Entities are provided from Table and Field from the database. For Example: ports.ifOperStatus.
Conditions can be any of:
| Condition | Operator |
|---|---|
| Equals | = |
| Not Equals | != |
| In | IN |
| Not In | NOT IN |
| Begins with | LIKE ('...%') |
| Doesn't begin with | NOT LIKE ('...%') |
| Contains | LIKE ('%...%') |
| Doesn't Contain | NOT LIKE ('%...%') |
| Ends with | LIKE ('%...') |
| Doesn't end with | NOT LIKE ('%...') |
| Between | BETWEEN |
| Not Between | NOT BETWEEN |
| Is Empty | = '' |
| Is Not Empty | != '' |
| Is Null | IS NULL |
| Is Not Null | IS NOT NULL |
| Greater | > |
| Greater or Equal | >= |
| Less | < |
| Less or Equal | <= |
| Regex | REGEXP |
Values can be an entity or any data. If using a macro as the value you must wrap the macro name in backticks, i.e. `macros.past_60m`.
On the Advanced tab, you can specify some additional options for the alert rule:
Override SQL: Enable this if you are using a custom query.
Query: The query to be used for the alert.
An example of this would be an average rule for all CPUs over 10%
```sql
SELECT devices.device_id, devices.status, devices.disabled, devices.ignore,
       AVG(processors.processor_usage) AS cpu_avg
FROM devices
INNER JOIN processors ON devices.device_id = processors.device_id
WHERE devices.device_id = ?
  AND devices.status = 1
  AND devices.disabled = 0
  AND devices.ignore = 0
GROUP BY devices.device_id, devices.status, devices.disabled, devices.ignore
HAVING AVG(processors.processor_usage) > 10
```
The 10 here is the average CPU usage threshold; you can change this value to whatever you like.
You will need to copy and paste this into the Query box under the Advanced tab of the Alert Rule, then switch on Override SQL.
You can associate a rule with a procedure by giving the URL of the procedure when creating the rule. Only links like "http://" are supported; otherwise an error will be returned. Once configured, the procedure can be opened from the Alert widget through the "Open" button, which can be shown/hidden from the widget configuration box.
Root directory gets too full: storage.storage_descr = '/' AND storage.storage_perc >= '75'
Any storage gets fuller than the 'warning' level: storage.storage_perc >= storage.storage_perc_warn
If the device is a server and the used storage is above the warning level, but ignore /boot partitions: storage.storage_perc > storage.storage_perc_warn AND devices.type = "server" AND storage.storage_descr != "/boot"
VMware LAG is not using "Source ip address hash" load balancing: devices.os = "vmware" AND ports.ifType = "ieee8023adLag" AND ports.ifDescr REGEXP "Link Aggregation .*, load balancing algorithm: Source ip address hash"
Syslog, authentication failure during the last 5m: syslog.timestamp >= macros.past_5m AND syslog.msg REGEXP ".*authentication failure.*"
High memory usage: macros.device_up = 1 AND mempools.mempool_perc >= 90 AND mempools.mempool_descr REGEXP "Virtual.*"
High CPU usage (per-core usage, not overall): macros.device_up = 1 AND processors.processor_usage >= 90
High port usage, where the description is not client and ifType is not softwareLoopback: macros.port_usage_perc >= 80 AND ports.port_descr_type != "client" AND ports.ifType != "softwareLoopback"
Alert when a MAC address is located on your network: ipv4_mac.mac_address = "2c233a756912"
You can also select an Alert Rule from the Alert Rules Collection. These alert rules are submitted by users in the community :) If you would like to submit your alert rules to the collection, please submit them here: Alert Rules Collection
This page is for installs running version 1.42 or later. You can find the older docs here
Templates can be assigned to a single rule or a group of rules and can contain any kind of text. There is also a default template which is used for any rule that isn't associated with a template. This template can be found on the Alert Templates page and can be edited. It also has an option to revert it back to its default content.
To attach a template to a rule, just open the Alert Templates settings page, choose the template to assign and click the yellow button in the Actions column. In the popup box that appears, select the rule(s) you want the template to be assigned to and click the Attach button. You can hold down the CTRL key to select multiple rules at once.
The templating engine in use is Laravel Blade. We will cover some of the basics here, however the official Laravel docs will have more information here
Placeholders are special variables that, if used within the template, will be replaced with the relevant data. E.g.:
The device {{ $alert->hostname }} has been up for {{ $alert->uptime }} seconds would result in the following: The device localhost has been up for 30344 seconds.
When using placeholders to echo data, you need to wrap the placeholder in {{ }}. I.e {{ $alert->hostname }}.
Device ID: $alert->device_id
Hostname of the Device: $alert->hostname
sysName of the Device: $alert->sysName
sysDescr of the Device: $alert->sysDescr
display name of the Device: $alert->display
sysContact of the Device: $alert->sysContact
OS of the Device: $alert->os
Type of Device: $alert->type
IP of the Device: $alert->ip
Hardware of the Device: $alert->hardware
Software version of the Device: $alert->version
Features of the Device: $alert->features
Serial number of the Device: $alert->serial
Location of the Device: $alert->location
uptime of the Device (in seconds): $alert->uptime
Short uptime of the Device (28d 22h 30m 7s): $alert->uptime_short
Long uptime of the Device (28 days, 22h 30m 7s): $alert->uptime_long
Description (purpose db field) of the Device: $alert->description
Notes of the Device: $alert->notes
Notes of the alert (ack notes): $alert->alert_notes
Time Elapsed, Only available on recovery ($alert->state == 0): $alert->elapsed
Rule Builder (the actual rule) (use {!! $alert->builder !!}): $alert->builder
Alert-ID: $alert->id
Unique-ID: $alert->uid
Faults, Only available on alert ($alert->state != 0), must be iterated in a foreach (@foreach ($alert->faults as $key => $value) @endforeach). Holds all available information about the Fault, accessible in the format $value['Column'], for example: $value['ifDescr']. Special field $value['string'] has most Identification-information (IDs, Names, Descrs) as single string, this is the equivalent of the default used and must be encased in {{ }}
State: $alert->state
Severity: $alert->severity
Rule: $alert->rule
Rule-Name: $alert->name
Procedure URL: $alert->proc
Timestamp: $alert->timestamp
Transport type: $alert->transport
Transport name: $alert->transport_name
Contacts, must be iterated in a foreach, $key holds email and $value holds name: $alert->contacts
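Putting the placeholders together, a minimal template body might look like the following sketch (plain text; the @if/@foreach structure follows the notes above on $alert->state and $alert->faults, and the chosen fields are illustrative):

```
Alert: {{ $alert->name }} ({{ $alert->severity }})
Device: {{ $alert->hostname }} ({{ $alert->sysName }})
Timestamp: {{ $alert->timestamp }}
@if ($alert->state == 0)
Recovered after: {{ $alert->elapsed }}
@else
@foreach ($alert->faults as $key => $value)
Fault #{{ $key }}: {{ $value['string'] }}
@endforeach
@endif
```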
Placeholders can be used within the subjects for templates as well although $faults is most likely going to be worthless.
The Default Template is a 'one-size-fits-all'. We highly recommend defining your own templates for your rules to include more specific information.
You can use plain text or html as per Alert templates and this will form the basis of your common template, feel free to make as many templates in the directory as needed.
There are two helpers for graphs that will use a signed url to allow secure external access. Anyone using the signed url will be able to view the graph.
Your LibreNMS web must be accessible from the location where the graph is viewed. Some alert transports require publicly accessible urls.
APP_URL must be set in .env to use signed graphs.
Changing APP_KEY will invalidate all previously issued signed urls.
You may specify the graph in one of two ways: a PHP array of parameters, or a direct URL to a graph.
Note that to and from can be specified either as timestamps with time() or as relative time -3d or -36h. When using relative time, the graph will show based on when the user views the graph, not when the event happened. Sharing a graph image with a relative time will always give the recipient access to current data, where a specific timestamp will only allow access to that timeframe.
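As a sketch, the array form can be used via the @signedGraphTag Blade directive; the parameter keys below are assumptions mirroring the query string of a normal LibreNMS graph page, so verify them against your install:

```
{{-- hypothetical parameters: 'type' and 'id' mirror a port graph URL --}}
@signedGraphTag(['type' => 'port_bits', 'id' => $value['port_id'], 'width' => 600, 'from' => '-12h'])
```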
This will insert a specially formatted html img tag linking to the graph. Some transports may search the template for this tag to attach images properly for that transport.
"},{"location":"Alerting/Templates/#using-models-for-optional-data","title":"Using models for optional data","text":"
If some value does not exist within the $faults[] array, you may query fields from the database using Laravel models. You may use models to query additional values and use them in the template by placing the model and the value to search for within the braces. For example, ISIS alerts have a port_id value associated with the alert, but ifName is not directly accessible from the $faults[] array. If the name of the port is needed, its value can be queried using a template such as:
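A sketch of such a lookup (the \App\Models\Port model exists in LibreNMS, but treat the exact namespace and column names as assumptions to verify against your install):

```
@foreach ($alert->faults as $key => $value)
{{-- assumed lookup: resolve the port's ifName from its port_id --}}
Port name: {{ \App\Models\Port::find($value['port_id'])->ifName }}
@endforeach
```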
We include a few templates for you to use, these are specific to the type of alert rules you are creating. For example if you create a rule that would alert on BGP sessions then you can assign the BGP template to this rule to provide more information.
The included templates apart from the default template are:
BGP Sessions
Ports
Temperature
"},{"location":"Alerting/Templates/#other-examples","title":"Other Examples","text":""},{"location":"Alerting/Templates/#microsoft-teams-markdown","title":"Microsoft Teams - Markdown","text":"
The simplest way of testing if an alert rule will match a device is by going to the device, clicking edit (the cog), select Capture. From this new screen choose Alerts and click run.
The output will cycle through all alerts applicable to this device and show you the Rule name, rule, MySQL query and if the rule matches.
It's possible to test your new template before assigning it to a rule. To do so you can run ./scripts/test-template.php. The script will provide help info when run without any parameters.
As an example, if you wanted to test template ID 10 against localhost running rule ID 2 then you would run:
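A sketch of that invocation is below; the flag names are assumptions, so run the script with no parameters to see the real usage:

```
./scripts/test-template.php -t 10 -d localhost -r 2
```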
If the rule is currently alerting for localhost then you will get the full template as you would expect to see in an email; if it's not, you will just see the template without any fault information.
Transports are located within LibreNMS/Alert/Transport/ and can be configured within the WebUI under Alerts -> Alert Transports.
Contacts will be gathered automatically and passed to the configured transports. By default the Contacts will be only gathered when the alert triggers and will ignore future changes in contacts for the incident. If you want contacts to be re-gathered before each dispatch, please set 'Updates to contact email addresses not honored' to Off in the WebUI.
The contacts will always include the SysContact defined in the Device's SNMP configuration and also every LibreNMS user that has at least read-permissions on the entity that is to be alerted.
At the moment LibreNMS only supports Port or Device permissions.
You can exclude the SysContact by toggling 'Issue alerts to sysContact'.
To include users that have Global-Read, Administrator or Normal-User permissions it is required to toggle the options:
Issue alerts to admins.
Issue alerts to read only users
Issue alerts to normal users.
"},{"location":"Alerting/Transports/#using-a-proxy","title":"Using a Proxy","text":"
Proxy Configuration
"},{"location":"Alerting/Transports/#using-a-amqp-based-transport","title":"Using a AMQP based Transport","text":"
You need to install an additional PHP module: bcmath
The alerta monitoring system is a tool used to consolidate and de-duplicate alerts from multiple sources for quick 'at-a-glance' visualisation. With just one system you can monitor alerts from many other monitoring tools on a single screen.
Example:
Config Example API Endpoint http://alerta.example.com/api/alert Environment Production API key api key with write permission Alert state critical Recover state cleared"},{"location":"Alerting/Transports/#alertops","title":"AlertOps","text":"
Using AlertOps integration with LibreNMS, you can seamlessly forward alerts to AlertOps with detailed information. AlertOps acts as a dispatcher for LibreNMS alerts, allowing you to determine the right individuals or teams to notify based on on-call schedules. Notifications can be sent via various channels including email, text messages (SMS), phone calls, and mobile push notifications for iOS & Android devices. Additionally, AlertOps provides escalation policies to ensure alerts are appropriately managed until they are assigned or closed. You can also filter out/aggregate alerts based on different values.
To set up the integration:
Create a LibreNMS Integration: Sign up for an AlertOps account and create a LibreNMS integration from the integrations page. This will generate an Inbound Integration Endpoint URL that you'll need to copy to LibreNMS.
Configure LibreNMS Integration: In LibreNMS, navigate to the integration settings and paste the inbound integration URL obtained from AlertOps.
Example:
Config Example WebHook URL https://url/path/to/webhook"},{"location":"Alerting/Transports/#alertmanager","title":"Alertmanager","text":"
Alertmanager is an alert handling software, initially developed for alert processing sent by Prometheus.
It has built-in functionality for deduplicating, grouping and routing alerts based on configurable criteria.
LibreNMS uses alert grouping by alert rule, which can produce an array of alerts of similar content for an array of hosts, whereas Alertmanager can group them by alert meta, ideally producing one single notice in case an issue occurs.
It is possible to configure as many label values as required in Alertmanager Options section. Every label and its value should be entered as a new line.
Labels can be a fixed string or a dynamic variable from the alert. To set a dynamic variable, your label must start with extra_ followed by the name of your label (only letters, digits and underscores are allowed here). The value must be the name of the variable you want to read (you can see all the variables in Alerts->Notifications by clicking on the Details icon of your alert when it is pending). If the variable's name does not match an existing value, the label's value will be the string you provided, just as if it were a fixed string.
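For example, the Alertmanager Options field might contain the following lines; instance is a hypothetical fixed-string label, while extra_severity resolves to the alert's severity variable:

```
instance=librenms-prod
extra_severity=severity
```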
Multiple Alertmanager URLs (comma separated) are supported. Each URL will be tried and the search will stop at the first success.
Basic HTTP authentication with a username and a password is supported. If you leave those values blank, no authentication will be used.
The API transport allows you to reach any service provider using POST, PUT or GET URLs (such as SMS providers). It can be used in multiple ways:
The same text built from the Alert template is available in the variable $msg, which can then be sent as an option to the API. Be careful: HTTP GET requests are usually limited in length.
The API-Option fields can be directly built from the variables defined in Template-Syntax but without the 'alert->' prefix. For instance, $alert->uptime is available as $uptime in the API transport
The API-Headers field allows you to add the headers that the API endpoint requires.
The API-Body field allows sending data in the format required by the API endpoint.
A few commonly used variables:
Variable Description {{ $hostname }} Hostname {{ $sysName }} SysName {{ $sysDescr }} SysDescr {{ $os }} OS of device (librenms defined) {{ $type }} Type of device (librenms defined) {{ $ip }} IP Address {{ $hardware }} Hardware {{ $version }} Version {{ $uptime }} Uptime in seconds {{ $uptime_short }} Uptime in human-readable format {{ $timestamp }} Timestamp of alert {{ $description }} Description of device {{ $title }} Title (as built from the Alert Template) {{ $msg }} Body text (as built from the Alert Template)
Example:
The example below will use the API named sms-api of my.example.com and send the title of the alert to the provided number using the provided service key. Refer to your service documentation to configure it properly.
Config Example API Method GET API URL http://my.example.com/sms-api API Options rcpt=0123456789 key=0987654321abcdef msg=(LNMS) {{ $title }} API Username myUsername API Password myPassword
The example below will use the API named wall-display of my.example.com and send the title and text of the alert to a screen in the Network Operation Center.
Config Example API Method POST API URL http://my.example.com/wall-display API Options title={{ $title }} msg={{ $msg }}
The example below will use the API named component of my.example.com with id 1, a JSON status value as the body, and headers sending the required token authentication and content type.
Config Example API Method PUT API URL http://my.example.com/component/1 API Headers X-Token=HASH Content-Type=application/json API Body { \"status\": 2 }"},{"location":"Alerting/Transports/#aspsms","title":"aspSMS","text":"
aspSMS is an SMS provider that can be configured by using the generic API Transport. You need a token, which you can find in your personal space.
aspSMS docs
Example:
Config Example Transport type Api API Method POST API URL https://soap.aspsms.com/aspsmsx.asmx/SimpleTextSMS Options UserKey=USERKEY Password=APIPASSWORD Recipient=RECIPIENT Originator=ORIGINATOR MessageText={{ $msg }}"},{"location":"Alerting/Transports/#browser-push","title":"Browser Push","text":"
Browser push notifications can send a notification to the user's device even when the browser is not open. This requires HTTPS, the PHP GMP extension, Push API support, and permissions on each device to send alerts.
Simply configure an alert transport and allow notification permission on the device(s) you wish to receive alerts on. You may disable alerts on a browser on the user preferences page.
Canopsis is a hypervision tool. LibreNMS can send alerts to Canopsis, which are then converted to Canopsis events.
Canopsis Docs
Example:
Config Example Hostname www.xxx.yyy.zzz Port Number 5672 User admin Password my_password Vhost canopsis"},{"location":"Alerting/Transports/#cisco-spark-aka-webex-teams","title":"Cisco Spark (aka Webex Teams)","text":"
Cisco Spark (now known as Webex Teams). LibreNMS can send alerts to a Cisco Spark room. To make this possible you need to have a RoomID and a token. You can also choose to send alerts using Markdown syntax. Enabling this option provides for more richly formatted alerts, but be sure to adjust your alert template to account for the Markdown syntax.
For more information about Cisco Spark RoomID and token, take a look here :
Getting started
Rooms
Example:
Config Example API Token ASd23r23edewda RoomID 34243243251 Use Markdown? x"},{"location":"Alerting/Transports/#clickatell","title":"Clickatell","text":"
Clickatell provides a REST-API requiring an Authorization-Token and at least one Cellphone number.
Clickatell Docs
Here is an example using 3 numbers; any amount of numbers is supported:
Example:
Config Example Token dsaWd3rewdwea Mobile Numbers +1234567890,+1234567891,+1234567892"},{"location":"Alerting/Transports/#discord","title":"Discord","text":"
The Discord transport will POST the alert message to your Discord Incoming WebHook. Simple html tags are stripped from the message.
The only required value is url; without it, no call to Discord will be made. The Options field supports the JSON/Form Params listed in the Discord Docs below.
Discord Docs
Example:
Config Example Discord URL https://discordapp.com/api/webhooks/4515489001665127664/82-sf4385ysuhfn34u2fhfsdePGLrg8K7cP9wl553Fg6OlZuuxJGaa1d54fe Options username=myname"},{"location":"Alerting/Transports/#elasticsearch","title":"Elasticsearch","text":"
You can have LibreNMS send alerts to an elasticsearch database. Each fault will be sent as a separate document.
Example:
Config Example Host 127.0.0.1 Port 9200 Index Pattern \\l\\i\\b\\r\\e\\n\\m\\s-Y.m.d"},{"location":"Alerting/Transports/#gitlab","title":"GitLab","text":"
LibreNMS will create issues for warning and critical level alerts; however, only the title and description are set. It uses Personal access tokens to authenticate with GitLab and will store the token in cleartext.
Example:
Config Example Host http://gitlab.host.tld Project ID 1 Personal Access Token AbCdEf12345"},{"location":"Alerting/Transports/#grafana-oncall","title":"Grafana Oncall","text":"
Send alerts to Grafana Oncall using a Formatted Webhook
Example:
Config Example Webhook URL https://a-prod-us-central-0.grafana.net/integrations/v1/formatted_webhook/m12xmIjOcgwH74UF8CN4dk0Dh/"},{"location":"Alerting/Transports/#hipchat","title":"HipChat","text":"
See the HipChat API Documentation for rooms/message for details on acceptable values.
You may notice that the link points at the \"deprecated\" v1 API. This is because the v2 API is still in beta.
Example:
Config Example API URL https://api.hipchat.com/v1/rooms/message?auth_token=109jawregoaihj Room ID 7654321 From Name LibreNMS Options color=red
At present the following options are supported: color.
Note: The default message format for HipChat messages is HTML. It is recommended that you specify the text message format to prevent unexpected results, such as HipChat attempting to interpret angled brackets (< and >).
The IRC transport only works together with the LibreNMS IRC-Bot. Configuration of the LibreNMS IRC-Bot is described here.
Example:
Config Example IRC enabled"},{"location":"Alerting/Transports/#jira","title":"JIRA","text":"
You can have LibreNMS create issues on a Jira instance for critical and warning alerts using either the Jira REST API or webhooks. Custom fields allow you to add any required fields beyond the summary and description fields, in case mandatory fields are required by your Jira project/issue type configuration. Custom fields are defined in JSON format. Currently HTTP authentication is used to access Jira, and the Jira username and password will be stored as cleartext in the LibreNMS database.
The config fields that need to be set for webhooks are: Jira Open URL, Jira Close URL, Jira username, Jira password and webhook ID.
Note: Webhooks allow more control over how alerts are handled in Jira. With webhooks, recovery messages can be sent to a different URL than alerts. Additionally, custom conditional logic can be built using the webhook payload and ID to automatically close an open ticket if predefined conditions are met.
Jira Issue Types Jira Webhooks
Example:
Config Example Project Key JIRAPROJECTKEY Issue Type Myissuetype Open URL https://myjira.mysite.com / https://webhook-open-url Close URL https://webhook-close-url Jira Username myjirauser Jira Password myjirapass Enable webhook ON/OFF Webhook ID alert_id Custom Fields {\"components\":[{\"id\":\"00001\"}], \"source\": \"LibreNMS\"}"},{"location":"Alerting/Transports/#jira-service-management","title":"Jira Service Management","text":"
Using the Jira Service Management LibreNMS integration, LibreNMS forwards alerts to Jira Service Management with detailed information. Jira Service Management acts as a dispatcher for LibreNMS alerts, determines the right people to notify based on on-call schedules and notifies them via email, text messages (SMS), phone calls and iOS & Android push notifications. It then escalates alerts until the alert is acknowledged or closed.
:warning: If the feature isn't available on your site, keep checking Jira Service Management for updates.
Example:
Config Example WebHook URL https://url/path/to/webhook"},{"location":"Alerting/Transports/#line-messaging-api","title":"LINE Messaging API","text":"
LINE Messaging API Docs
Here are the steps to set up a LINE bot and use it in LibreNMS.
Use your real LINE account to register in the developer portal.
Add a new channel, choose Messaging API and continue filling out the forms; note that the Channel name cannot be edited later.
Go to the "Messaging API" tab of your channel, which lists some important values.
Bot basic ID and QR code is your LINE bot's ID and QR code.
Channel access token (long-lived): you will use it in LibreNMS, so keep it safe.
Use your real LINE account to add your LINE bot as a friend.
Recipient ID can be a groupID, userID or roomID; it will be used in LibreNMS to send messages to a group or a user. Use the following NodeJS program and ngrok to set up a temporary HTTPS webhook to listen for it.
LINE-bot-RecipientFetcher
Run the program and using ngrok expose port to public
$ node index.js\n$ ngrok http 3000\n
Go to \"Messaging API\" tab of your channel, fill up Webhook URL to https://<your ngrok domain>/webhook
If you want the LINE bot to send messages to yourself, use your real account to send a message to your LINE bot. The program will print the userID to the console.
Config Example Access token fhJ9vH2fsxxxxxxxxxxxxxxxxxxxxlFU= Recipient (groupID, userID or roomID) Ce51xxxxxxxxxxxxxxxxxxxxxxxxxx6ef"},{"location":"Alerting/Transports/#line-notify","title":"LINE Notify","text":"
LINE Notify
LINE Notify API Document
Example:
Config Example Token AbCdEf12345"},{"location":"Alerting/Transports/#mail","title":"Mail","text":"
The E-Mail transport uses the same email configuration as the rest of LibreNMS. As a small reminder, here are its configuration directives, including defaults:
Emails will attach all graphs included with the @signedGraphTag directive. If the email format is set to html, they will be embedded. To disable attaching images, set email_attach_graphs to false.
Config Example Email me@example.com"},{"location":"Alerting/Transports/#matrix","title":"Matrix","text":"
To use the Matrix transport, you have to create a room on the Matrix server. The provided Auth_token belongs to a user who is a member of this room. The message sent to the Matrix room can be built from the variables defined in Template-Syntax, but without the 'alert->' prefix; see API-Transport. The variable $msg contains the result of the Alert template. The Matrix-Server URL is cut off before the beginning of the _matrix/client/r0/... API part.
LibreNMS can send text messages through Messagebird Rest API transport.
Config Example Api Key Api rest key given in the messagebird dashboard Originator E.164 formatted originator Recipient E.164 formatted recipient, for multiple recipients comma separated Character limit Range 1..480 (max 3 split messages)"},{"location":"Alerting/Transports/#messagebird-voice","title":"Messagebird Voice","text":"
LibreNMS can send messages through Messagebird voice Rest API transport (text to speech).
Config Example Api Key Api rest key given in the messagebird dashboard Originator E.164 formatted originator Recipient E.164 formatted recipient, for multiple recipients comma separated Language Select box for options Spoken voice Female or Male Repeat X times the message is repeated"},{"location":"Alerting/Transports/#microsoft-teams","title":"Microsoft Teams","text":"
LibreNMS can send alerts to Microsoft Teams Incoming Webhooks which are then posted to a specific channel. Microsoft recommends using markdown formatting for connector cards. Administrators can opt to compose the MessageCard themselves using JSON to get the full functionality.
Example:
Config Example WebHook URL https://outlook.office365.com/webhook/123456789 Use JSON? x"},{"location":"Alerting/Transports/#nagios-compatible","title":"Nagios Compatible","text":"
The nagios transport will feed a FIFO at the defined location with the same format that nagios would. This allows you to use other alerting systems with LibreNMS, for example Flapjack.
Example:
Config Example Nagios FIFO /path/to/my.fifo"},{"location":"Alerting/Transports/#opsgenie","title":"OpsGenie","text":"
Using the OpsGenie LibreNMS integration, LibreNMS forwards alerts to OpsGenie with detailed information. OpsGenie acts as a dispatcher for LibreNMS alerts, determines the right people to notify based on on-call schedules and notifies them via email, text messages (SMS), phone calls and iOS & Android push notifications. It then escalates alerts until the alert is acknowledged or closed.
Create a LibreNMS Integration from the integrations page once you sign up. Then copy the API key from OpsGenie to LibreNMS.
If you want to automatically ack and close alerts, leverage Marid integration. More detail with screenshots is available in OpsGenie LibreNMS Integration page.
Example:
Config Example WebHook URL https://url/path/to/webhook"},{"location":"Alerting/Transports/#osticket","title":"osTicket","text":"
LibreNMS can send alerts to osTicket API which are then converted to osTicket tickets.
Example:
Config Example API URL http://osticket.example.com/api/http.php/tickets.json API Token 123456789"},{"location":"Alerting/Transports/#pagerduty","title":"PagerDuty","text":"
LibreNMS can make use of PagerDuty; this is done by utilizing an API key and Integration Key.
API Keys can be found under 'API Access' in the PagerDuty portal.
Integration Keys can be found under 'Integration' for the particular Service you have created in the PagerDuty portal.
Example:
Config Example API Key randomsample Integration Key somerandomstring"},{"location":"Alerting/Transports/#philips-hue","title":"Philips Hue","text":"
Want to spice up your NOC life? LibreNMS will flash all lights connected to your Philips Hue Bridge whenever an alert is triggered.
To set up, go to http://your-bridge-ip/debug/clip.html
Update the \"URL:\" field to /api
Paste this in the \"Message Body\" {\"devicetype\":\"librenms\"}
Press the round button on your Philips Hue Bridge
Click on POST
In the Command Response you should see output with your username. Copy this without the quotes.
More Info: Philips Hue Documentation
Example:
Config Example Host http://your-bridge-ip Hue User username Duration 1 Second"},{"location":"Alerting/Transports/#playsms","title":"PlaySMS","text":"
PlaySMS is an open source SMS-Gateway that can be used via their HTTP API using a Username and WebService Token. Please consult PlaySMS's documentation regarding number formatting.
PlaySMS Docs
Here is an example using 3 numbers; any amount of numbers is supported:
Example:
Config Example PlaySMS https://localhost/index.php User user1 Token MYFANCYACCESSTOKEN From My Name Mobiles +1234567892,+1234567890,+1234567891"},{"location":"Alerting/Transports/#pushbullet","title":"Pushbullet","text":"
Get your Access Token from your Pushbullet's settings page and set it in your transport:
Example:
Config Example Access Token MYFANCYACCESSTOKEN"},{"location":"Alerting/Transports/#pushover","title":"Pushover","text":"
If you want to change the default notification sound for all notifications then you can add the following in Pushover Options:
sound=falling
You can also change the sound per severity:
sound_critical=falling
sound_warning=siren
sound_ok=magic
Enabling Pushover support is fairly easy; there are only two required parameters.
Firstly you need to create a new Application (called LibreNMS, for example) in your account on the Pushover website (https://pushover.net/apps).
Now copy your API Key and obtain your User Key from the newly created Application, and set up the transport.
Pushover Docs
Example:
Config Example Api Key APPLICATIONAPIKEYGOESHERE User Key USERKEYGOESHERE Pushover Options sound_critical=falling sound_warning=siren sound_ok=magic"},{"location":"Alerting/Transports/#rocketchat","title":"Rocket.chat","text":"
The Rocket.chat transport will POST the alert message to your Rocket.chat Incoming WebHook using the attachments option. Simple html tags are stripped from the message. All options are optional; the only required value is url. Without it, no call to Rocket.chat will be made.
The Sensu transport will POST an Event to the Agent API upon an alert being generated.
It will be categorised (ok, warning or critical), and if you configure the alert to send recovery notifications, Sensu will also clear the alert automatically. No configuration is required - as long as you are running the Sensu Agent on your poller with the HTTP socket enabled on tcp/3031, LibreNMS will start generating Sensu events as soon as you create the transport.
Acknowledging alerts within LibreNMS is not directly supported, but an annotation (acknowledged) is set, so a mutator or silence, or even the handler could be written to look for it directly in the handler. There is also an annotation (generated-by) set, to allow you to treat LibreNMS events differently from agent events.
The 'shortname' option is a simple way to reduce the length of device names in configs. It replaces the last 3 domain components with single letters (e.g. websrv08.dc4.eu.corp.example.net gets shortened to websrv08.dc4.eu.cen).
Sensu will reject rules with special characters - the Transport will attempt to fix up rule names, but it's best to stick to letters, numbers and spaces
The transport only deals in absolutes - it ignores the got worse/got better states
The agent will buffer alerts, but LibreNMS will not - if your agent is offline, alerts will be dropped
There is no backchannel between Sensu and LibreNMS - if you make changes in Sensu to LibreNMS alerts, they'll be lost on the next event (silences will work)
Example:
Config Example Sensu Endpoint http://localhost:3031 Sensu Namespace eu-west Check Prefix lnms Source Key hostname"},{"location":"Alerting/Transports/#signl4","title":"SIGNL4","text":"
SIGNL4 offers critical alerting, incident response and service dispatching for operating critical infrastructure. It alerts you persistently via app push, SMS text, voice calls, and email including tracking, escalation, on-call duty scheduling and collaboration.
Integrate SIGNL4 with LibreNMS to forward critical alerts with detailed information to responsible people or on-call teams. The integration supports triggering as well as closing alerts.
In the configuration for your SIGNL4 alert transport you just need to enter your SIGNL4 webhook URL including team or integration secret.
Example:
Config Example Webhook URL https://connect.signl4.com/webhook/{team-secret}
You can find more information about the integration here.
The Slack transport will POST the alert message to your Slack Incoming WebHook using the attachments option; you are able to specify multiple webhooks along with the relevant options to go with them. Simple html tags are stripped from the message. All options are optional; the only required value is url. Without it, no call to Slack will be made.
We currently support the following attachment options:
author_name
We currently support the following global message options:
channel_name : Slack channel name (without the leading '#') to which the alert will go
icon_emoji : Emoji name in colon format to use as the author icon
Slack docs
The alert template can make use of Slack markdown. In the Slack markdown dialect, custom links are denoted with HTML angled brackets, but LibreNMS strips these out. To support embedding custom links in alerts, use the bracket/parentheses markdown syntax for links. For example if you would typically use this for a Slack link:
<https://www.example.com|My Link>
Use this in your alert template:
[My Link](https://www.example.com)
Example:
Config Example Webhook URL https://slack.com/url/somehook Channel network-alerts Author Name LibreNMS Bot Icon :scream:"},{"location":"Alerting/Transports/#smseagle","title":"SMSEagle","text":"
SMSEagle is a hardware SMS Gateway that can be used via their HTTP API using a Username and password.
Destination numbers are one per line, with no spaces. They can be in either local or international dialling format.
SMSEagle Docs
Example:
Config Example SMSEagle Host ip.add.re.ss User smseagle_user Password smseagle_user_password Mobiles +3534567890 0834567891"},{"location":"Alerting/Transports/#smsmode","title":"SMSmode","text":"
SMSmode is an SMS provider that can be configured by using the generic API Transport. You need a token, which you can find in your personal space.
SMSmode docs
Example:
Config Example Transport type Api API Method POST API URL http://api.smsmode.com/http/1.6/sendSMS.do Options accessToken=PUT_HERE_YOUR_TOKEN numero=PUT_HERE_DESTS_NUMBER_COMMA_SEPARATED message={{ $msg }}"},{"location":"Alerting/Transports/#splunk","title":"Splunk","text":"
LibreNMS can send alerts to a Splunk instance and provide all device and alert details.
Config Example Host 127.0.0.1 UDP Port 514"},{"location":"Alerting/Transports/#syslog","title":"Syslog","text":"
You can have LibreNMS emit alerts as syslogs complying with RFC 3164.
More information on RFC 3164 can be found here: https://tools.ietf.org/html/rfc3164
Example output: <26> Mar 22 00:59:03 librenms.host.net librenms[233]: [Critical] network.device.net: Port Down - port_id => 98939; ifDescr => xe-1/1/0;
Each fault will be sent as a separate syslog.
Example:
Config Example Host 127.0.0.1 Port 514 Facility 3"},{"location":"Alerting/Transports/#telegram","title":"Telegram","text":"
Thank you to snis for these instructions.
First you must create a Telegram account and add BotFather to your contact list. To do this, click on the following url: https://telegram.me/botfather
Generate a new bot with the command "/newbot". BotFather will then ask for a username and a display name. After that your bot is created and you get an HTTP token. (For more options for your bot, type "/help".)
Add your bot to telegram with the following url: http://telegram.me/<botname> to use app or https://web.telegram.org/<botname> to use in web, and send some text to the bot.
The BotFather should have responded with a token. Copy your token code and go to the following page in your browser: https://api.telegram.org/bot<tokencode>/getUpdates (this could take a while, so continue to refresh until you see something similar to the output below).
You will see JSON containing the message you sent to the bot. Copy the chat id; in this example that is "-9787468" within: \"message\":{\"message_id\":7,\"from\":{\"id\":656556,\"first_name\":\"Joo\",\"last_name\":\"Doo\",\"username\":\"JohnDoo\"},\"chat\":{\"id\":-9787468,\"title\":\"Telegram Group\"},\"date\":1435216924,\"text\":\"Hi\"}}]}.
Now create a new \"Telegram transport\" in LibreNMS (Global Settings -> Alerting Settings -> Telegram transport). Click on 'Add Telegram config' and put your chat id and token into the relevant box.
If you want to use a group to receive alerts, you need to use the Chat ID of the group chat, not that of the bot itself.
Telegram Docs
Example:
Config Example Chat ID 34243432 Token 3ed32wwf235234 Format HTML or MARKDOWN"},{"location":"Alerting/Transports/#twilio-sms","title":"Twilio SMS","text":"
Twilio will send your alert via SMS. From your Twilio account you will need your account SID, account token and your Twilio SMS phone number that you would like to send the alerts from. Twilio's APIs are located at: https://www.twilio.com/docs/api?filter-product=sms
Example:
Config Example SID ACxxxxxxxxxxxxxxxxxxxxxxxxxxxx Token 7xxxx573acxxxbc2xxx308d6xxx652d32 Twilio SMS Number 8888778660"},{"location":"Alerting/Transports/#ukfast-pss","title":"UKFast PSS","text":"
UKFast PSS tickets can be raised from alerts using the UKFastPSS transport. This requires an API key with PSS write permissions.
Example:
Config Example API Key ABCDefgfg12 Author 5423 Priority Critical Secure true"},{"location":"Alerting/Transports/#victorops","title":"VictorOps","text":"
VictorOps provides a webhook URL to make integration extremely simple. To get the required URL, log in to your VictorOps account and go to:
The URL provided will have $routing_key at the end; you need to change this to something that is unique to the system sending the alerts, such as librenms. For example:
Config Example Post URL https://alert.victorops.com/integrations/generic/20132414/alert/2f974ce1-08fc-4dg8-a4f4-9aee6cf35c98/librenms"},{"location":"Alerting/Transports/#kayako-classic","title":"Kayako Classic","text":"
LibreNMS can send alerts to the Kayako Classic API, where they are converted to tickets. To use this module, you need the REST API feature enabled in Kayako Classic and a configured email account in LibreNMS. To enable the API:
AdminCP -> REST API -> Settings -> Enable API (Yes)
You also need to know the department ID, so that tickets go to the appropriate department, and a user email to use as the ticket author. To get the department ID: navigate to the appropriate department name on the departments list page in AdminCP and note the number at the end of the URL. Example: http://servicedesk.example.com/admin/Base/Department/Edit/17. The department ID is 17.
As a requirement, you have to know the API URL, API Key and API Secret to connect to the service desk.
Kayako REST API Docs
Example:
Config Example Kayako URL http://servicedesk.example.com/api/ Kayako API Key 8cc02f38-7465-4a0c-8730-bb3af122167b Kayako API Secret Y2NhZDIxNDMtNjVkMi0wYzE0LWExYTUtZGUwMjJiZDI0ZWEzMmRhOGNiYWMtNTU2YS0yODk0LTA1MTEtN2VhN2YzYzgzZjk5 Kayako Department 1"},{"location":"Alerting/Transports/#signal-cli","title":"Signal CLI","text":"
Use Signal Messenger for alerts. Run the Signal CLI with the D-Bus option.
GitHub Project
Example:
Config Example Path /opt/signal-cli/bin/signal-cli Recipient type Group Recipient dfgjsdkgljior4345=="},{"location":"Alerting/Transports/#smsfeedback","title":"SMSFeedback","text":"
SMSFeedback is a SaaS service which can be used to deliver alerts via its API, using an API URL, username and password.
Mobile numbers must be in international dialling format.
SMSFeedback Api Docs
Example:
Config Example User smsfeedback_user Password smsfeedback_password Mobiles 71234567890 Sender name CIA"},{"location":"Alerting/Transports/#zenduty","title":"Zenduty","text":"
Leveraging LibreNMS<>Zenduty Integration, users can send new LibreNMS alerts to the right team and notify them based on on-call schedules via email, SMS, Phone Calls, Slack, Microsoft Teams and mobile push notifications. Zenduty provides engineers with detailed context around the LibreNMS alert along with playbooks and a complete incident command framework to triage, remediate and resolve incidents with speed.
Create a LibreNMS Integration from inside Zenduty, then copy the Webhook URL from Zenduty to LibreNMS.
For a detailed guide with screenshots, refer to the LibreNMS documentation at Zenduty.
Example:
Config Example WebHook URL https://www.zenduty.com/api/integration/librenms/integration-key/"},{"location":"Developing/Application-Notes/","title":"Notes On Application Development","text":""},{"location":"Developing/Application-Notes/#librenms-json-snmp-extends","title":"LibreNMS JSON SNMP Extends","text":"
The polling function json_app_get makes it easy to poll complex data using SNMP extends and JSON.
It also defines several exceptions, described further below.
It takes three parameters, in the order listed below.
Integer :: Device ID to fetch it for.
String :: The extend name. For example, if 'zfs' is passed it will be converted to 'nsExtendOutputFull.3.122.102.115'.
Integer :: Minimum expected version of the JSON return.
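The name-to-OID conversion works by prefixing the string's length and then appending the ASCII value of each character. A small Python sketch of that encoding (the `extend_oid` helper is illustrative, not LibreNMS code):

```python
# Sketch of the extend-name conversion: the string index is encoded as its
# length followed by the ASCII value of each character
def extend_oid(name: str) -> str:
    parts = [str(len(name))] + [str(ord(c)) for c in name]
    return 'nsExtendOutputFull.' + '.'.join(parts)

print(extend_oid('zfs'))  # → nsExtendOutputFull.3.122.102.115
```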
The required keys for the returned JSON are as below.
version :: The version of the snmp extend script. Should be numeric and at least 1.
error :: Error code from the snmp extend script. Should be > 0 (0 will be ignored and negatives are reserved)
errorString :: Text to describe the error.
data :: A key containing an array of the data to be used.
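For illustration, a minimal valid return containing all four required keys might look like this (built in Python here; the 'pools' payload is hypothetical):

```python
import json

# Minimal sketch of a valid return from a hypothetical extend script
output = {
    "version": 1,           # script version, at least 1
    "error": 0,             # 0 when nothing went wrong
    "errorString": "",      # text describing the error, if any
    "data": {"pools": 2},   # the payload LibreNMS consumes
}
print(json.dumps(output))
```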
The supported exceptions are as below.
JsonAppPollingFailedException :: Empty return from SNMP.
JsonAppParsingFailedException :: Could not parse the JSON
JsonAppWrongVersionException :: Older version than supported.
JsonAppExtendErroredException :: Polling and parsing was good, but the returned data has an error set. This may be checked via $e->getParsedJson() and then checking the keys error and errorString.
The error value can be accessed via $e->getCode(). The raw output can be accessed via $e->getOutput(), which is only available for JsonAppParsingFailedException. The parsed JSON can be accessed via $e->getParsedJson().
An example below from includes/polling/applications/zfs.inc.php...
try {\n $zfs = json_app_get($device, $name, 1)['data'];\n} catch (JsonAppMissingKeysException $e) {\n //old version with out the data key\n $zfs = $e->getParsedJson();\n} catch (JsonAppException $e) {\n echo PHP_EOL . $name . ':' . $e->getCode() . ':' . $e->getMessage() . PHP_EOL;\n update_application($app, $e->getCode() . ':' . $e->getMessage(), []);\n\n return;\n}\n
Also worth noting that json_app_get supports compressed data via base64-encoded gzip. If base64 encoding is detected in the SNMP return, it will be gunzipped and then parsed.
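The compression scheme can be sketched as a round trip in Python (the payload is hypothetical; the real encoding is done by the extend script or librenms_return_optimizer):

```python
import base64
import gzip
import json

# Round-trip sketch of the optional compressed return: the extend script
# gzips its JSON output and base64-encodes it; the poller reverses both steps
payload = {"version": 1, "error": 0, "errorString": "", "data": {"pools": 2}}
encoded = base64.b64encode(gzip.compress(json.dumps(payload).encode()))

decoded = json.loads(gzip.decompress(base64.b64decode(encoded)))
print(decoded == payload)  # → True
```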
https://github.com/librenms/librenms-agent/blob/master/utils/librenms_return_optimizer may be used to optimize JSON returns.
"},{"location":"Developing/Application-Notes/#application-data-storage","title":"Application Data Storage","text":"
The $app model is supplied for each application poller and graph. You may access and update the $app->data field to store arrays of data in the Application model.
When you call update_application() the $app model will be saved along with any changes to the data field.
// store application data in $app->data\n$app->data = [\n 'item_A' => 123,\n 'item_B' => 4.5,\n 'type' => 'foo',\n 'other_items' => [ 'a', 'b', 'c' ],\n];\n\n// save the change\n$app->save();\n\n// var_dump the stored data\nvar_dump($app->data);\n
This document will try and provide a good overview of how the code is structured within LibreNMS. We will go through the main directories and provide information on how and when they are used. LibreNMS now uses Laravel for much of its frontend (webui) and database code. Much of the Laravel documentation applies: https://laravel.com/docs/structure
Directories from the (filtered) structure tree below are some of the directories that will be most interesting during development:
Classes that don't belong to the Laravel application belong in this directory, with a directory structure that matches the namespace. One class per file. See PSR-0 for details.
This is the main file which all links within LibreNMS are parsed through. It loads the majority of the relevant includes needed for the control panel to function. CSS and JS files are also loaded here.
This directory is quite big and contains all the files to make the cli and polling / discovery to work. This code is not currently accessible from Laravel code (intentionally).
All the discovery and polling code. The format is usually quite similar between discovery and polling. Both are made up of modules and the files within the relevant directories will match that module. So for instance if you want to update the os detection for a device, you would look in includes/discovery/os/ for a file named after the operating system, such as linux: includes/discovery/os/linux.inc.php. Within here you would update or add support for newer OSes. This is the same for polling as well.
This is where the majority of the website core files are located. These tend to be files that contain functions or often used code segments that can be included where needed rather than duplicating code.
In here is a list of files that generate PDF reports available to the user. These are dynamically called in from html/pdf.php based on the report the user requests.
This directory contains all of the ajax calls when generating the table of data. Most have been converted over so if you are planning to add a new table of data then you will do so here for all of the back end data calls.
This directory contains the URL structure when browsing the Web UI. So for example /devices/ is actually a call to includes/html/pages/devices.inc.php, /device/tab=ports/ is includes/html/pages/device/ports.inc.php.
Here is where all of the mibs are located. Generally standard mibs should be in the root directory and specific vendor mibs should be in their own subdirectory.
One of the goals of the LibreNMS project is to enable users to get all of the help they need from our documentation.
The documentation uses the markdown markup language and is generated with mkdocs. To edit or create markdown you only need a text editor, but it is recommended to build your docs before submitting, in order to check them visually. The section on this page has instructions for this step.
When you are adding a new feature or extension, we need to have full documentation to go along with it. It's quite simple to do this:
Find the relevant directory to store your new document in, General, Support and Extensions are the most likely choices.
Think of a descriptive name that's not too long, it should match what they may be looking for or describes the feature.
Add the new document into the nav section of mkdocs.yml if it needs to appear in the table of contents
Ensure the first line contains: source: path/to/file.md - don't include the initial doc/.
In the body of the document, be descriptive but keep things simple. Some tips:
If the document could cover different distros like CentOS and Ubuntu please try and include the information for them all. If that's not possible then at least put a placeholder in asking for contributions.
Ensure you use the correct formatting for commands and code blocks by wrapping one liners in backticks or blocks in ```.
Put content into sub-headings where possible to organise the content.
If you rename a file, please add a redirect for the old file in mkdocs.yml like so:
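Assuming the mkdocs-redirects plugin is in use, a redirect entry in mkdocs.yml looks roughly like this (the paths are hypothetical):

```yaml
plugins:
  - redirects:
      redirect_maps:
        'Extensions/Old-Name.md': 'Extensions/New-Name.md'
```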
Please ensure you add the document to the relevant section within pages of mkdocs.yml so that it's in the correct menu and is built. Forgetting this step will result in your document never seeing the light of day :)
Our docs are based on Markdown using mkdocs, which adheres to markdown specs and nothing more; because of that we also import a couple of extra libraries:
pymdownx.tasklist
pymdownx.tilde
This means you can use:
~~strikethrough~~ to perform strikethrough
- [X] List items
Url's can be made [like this](https://www.librenms.org) like this
Code can be placed in `` for single line or ``` for multiline.
# Can be used for main headings which translates to a <h1> tag, increasing the #'s will increase the hX tags.
### Can be used for sub-headings which will appear in the TOC to the left.
Settings should be prefixed with !!! setting \"<webui setting path>\"
If you encounter permissions issues, these might be resolved by using the user option, with whatever user you are building as, e.g. -u librenms
A configuration file for building LibreNMS docs is already included in the distribution: /opt/librenms/mkdocs.yml. The various configuration directives are documented here.
Build from the librenms base directory: cd /opt/librenms.
Building is simple:
mkdocs build\n
This will output all the documentation in html format to /opt/librenms/out (this folder will be ignored from any commits).
mkdocs includes its own lightweight webserver for this purpose.
Viewing is as simple as running the following command:
$ mkdocs serve\nINFO - Building documentation...\n<..>\nINFO - Documentation built in 12.54 seconds\n<..>\nINFO - Serving on http://127.0.0.1:8000\n<..>\nINFO - Start watching changes\n
Now you will find the complete set of LibreNMS documentation by opening your browser to localhost:8000.
Note it is not necessary to build before viewing, as the serve command will do this for you. The server will also update the documents it is serving whenever changes to the markdown are made, for example when you edit them in another terminal.
"},{"location":"Developing/Creating-Documentation/#viewing-docs-from-another-machine","title":"Viewing docs from another machine","text":"
By default the server will only listen for connections from the local machine. If you are building on a different machine you can use the following directive to listen on all interfaces:
mkdocs serve --dev-addr=0.0.0.0:8000\n
WARNING: this is not a secure webserver, do this at your own risk, with appropriate host security and do not leave the server running.
"},{"location":"Developing/Creating-Release/","title":"Creating a release","text":""},{"location":"Developing/Creating-Release/#github","title":"GitHub","text":"
You can create a new release on GitHub.
Enter the tag version for that month, e.g. for September 2016 you would enter 201609.
Enter a title; we usually use September 2016 Release
Enter a placeholder for the body, we will edit this later.
For this, we assume you are using the master branch to create the release against.
We now generate the changelog using the GitHub API itself so it shouldn't matter what state your local branch is in so long as it has the code to generate the changelog itself.
Using the GitHub API means we can use the labels associated with merged pull requests to categorise the changelog. We also then record who made the pull request to thank them in the changelog itself.
You will be asked for a GitHub personal access token. You can generate this here. No permissions should be needed so just give it a name and click Generate Token. You can then export the token as an environment variable GH_TOKEN or place it in your .env file.
The basic command to run is artisan. Here you pass the new tag (1.41) and the previous tag (1.40). For further help run php artisan release:tag --help. This will generate a changelog up to the latest master branch; if you want it to be done against something else then pass the latest pull request number with --pr $PR_NUMBER.
php artisan release:tag 1.41 1.40\n
Now commit and push the change that has been made to doc/General/Changelog.md.
Once the pull request has been merged in for the Changelog, you can create a new release on GitHub.
Create two threads on the community site:
A changelog thread example
An info thread example
Tweet it
Facebook it
Google Plus it
LinkedIn it
"},{"location":"Developing/Dynamic-Config/","title":"Adding new config settings","text":"
Adding support for users to update a new config option via the WebUI is now a lot easier for general options. This document shows you how to add a new config option and even section to the WebUI.
Config settings are defined in misc/config_definitions.json
You should give a little thought to the name of your config setting. For example, a good setting name for snmp community would be snmp.community. The dot notation is a path; when the config is hydrated, it is converted to a nested array. If the user is overriding the option in config.php it would use the format $config['snmp']['community']
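The hydration from dot notation to a nested array can be sketched in Python (hydrate is an illustrative helper, not LibreNMS code):

```python
# Sketch of how a dot-notation path like snmp.community hydrates into the
# nested structure a config.php override would use ($config['snmp']['community'])
def hydrate(flat: dict) -> dict:
    config: dict = {}
    for path, value in flat.items():
        node = config
        keys = path.split('.')
        for key in keys[:-1]:
            node = node.setdefault(key, {})
        node[keys[-1]] = value
    return config

print(hydrate({'snmp.community': 'public'}))  # → {'snmp': {'community': 'public'}}
```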
The config definition system inherently supports translation. You must add the English names in the resources/lang/en/settings.php file (and other languages if you can).
You may set the type field to a custom type and define a Vue.js component to display it to the user.
The Vue.js component should be named as \"SettingType\" where type is the custom type entered with the first letter capitalized. Vue.js components exist in the resources/js/components directory.
Here is an empty component named SettingType (make sure to rename it). It pulls in BaseSetting mixin for basic setting code to reuse. You should review the BaseSetting component.
Using Vue.js is beyond the scope of this document. Documentation can be found at vuejs.org.
"},{"location":"Developing/Getting-Started/","title":"Get ready to contribute to LibreNMS","text":"
This document is intended to help you get your local environment set up to contribute code to the LibreNMS project.
"},{"location":"Developing/Getting-Started/#setting-up-a-development-environment","title":"Setting up a development environment","text":"
When starting to develop, it may be tempting to just make changes on your production server, but that will make things harder for you. Taking a little time to set up somewhere to work on code changes can really help.
Possible options:
A Linux computer, VM, or container
Another directory on your LibreNMS server
Windows Subsystem for Linux
"},{"location":"Developing/Getting-Started/#set-up-your-development-git-clone","title":"Set up your development git clone","text":"
Follow the documentation on using git
Install development dependencies ./scripts/composer_wrapper.php install
Set variables in .env, including database settings. This could be a local or remote MySQL server, including your production DB.
LibreNMS uses continuous integration to test code changes to help reduce bugs. This also helps guarantee the changes you contribute won't be broken in the future. You can find out more in our Validating Code Documentation
The default database connection for automated testing is testing.
To override the database parameters for unit tests, configure your .env file accordingly. The defaults (from config/database.php) are:
Sometimes you want to find out what a variable contains (such as the data return from an snmpwalk). You can dump one or more variables and halt execution with the dd() function.
dd($variable1, $variable2);\n
"},{"location":"Developing/Getting-Started/#inspecting-web-pages","title":"Inspecting web pages","text":"
Installing the development dependencies and setting APP_DEBUG enables the Laravel Debugbar. This will allow you to inspect page generation and errors right in your web browser.
"},{"location":"Developing/Getting-Started/#better-code-completion-in-ides-and-editors","title":"Better code completion in IDEs and editors","text":"
You can generate some files to improve code completion. (These files are not updated automatically, so you may need to re-run these commands periodically)
You can capture and emulate devices using Snmpsim. LibreNMS has a set of scripts to make it easier to work with snmprec files. LibreNMS Snmpsim helpers
You must have a working snmptrapd. See SNMP TRAP HANDLER
Make sure the MIB is loaded from the trap you are adding. Edit /etc/systemd/system/snmptrapd.service.d/mibs.conf to add it then restart snmptrapd.
MIBDIRS option is not recursive, so you need to specify each directory individually.
Create a new class in LibreNMS\\Snmptrap\\Handlers that implements the LibreNMS\\Interfaces\\SnmptrapHandler interface. For example:
<?php\n/**\n * ColdBoot.php\n *\n * Handles the SNMPv2-MIB::coldStart trap\n *\n * This program is free software: you can redistribute it and/or modify\n * it under the terms of the GNU General Public License as published by\n * the Free Software Foundation, either version 3 of the License, or\n * (at your option) any later version.\n *\n * This program is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.See the\n * GNU General Public License for more details.\n *\n * You should have received a copy of the GNU General Public License\n * along with this program. If not, see <https://www.gnu.org/licenses/>.\n *\n * @package LibreNMS\n * @link https://www.librenms.org\n */\n\nnamespace LibreNMS\\Snmptrap\\Handlers;\n\nuse App\\Models\\Device;\nuse LibreNMS\\Enum\\Severity;\nuse LibreNMS\\Interfaces\\SnmptrapHandler;\nuse LibreNMS\\Snmptrap\\Trap;\n\nclass ColdBoot implements SnmptrapHandler\n{\n /**\n * Handle snmptrap.\n * Data is pre-parsed and delivered as a Trap.\n *\n * @param Device $device\n * @param Trap $trap\n * @return void\n */\n public function handle(Device $device, Trap $trap)\n {\n $trap->log('SNMP Trap: Device ' . $device->displayName() . ' cold booted', $device->device_id, 'reboot', Severity::Warning);\n }\n}\n
The severity passed to $trap->log() determines the color of the event log entry.
The handle function inside your new class will receive a LibreNMS/Snmptrap/Trap object containing the parsed trap. It is common to update the database and create event log entries within the handle function.
"},{"location":"Developing/SNMP-Traps/#getting-information-from-the-trap","title":"Getting information from the Trap","text":""},{"location":"Developing/SNMP-Traps/#source-information","title":"Source information","text":"
$trap->getDevice(); // gets Device model for the device associated with this trap\n$trap->ip; // gets source IP of this trap\n$trap->getTrapOid(); // returns the string you registered your class with\n
"},{"location":"Developing/SNMP-Traps/#retrieving-data-from-the-trap","title":"Retrieving data from the Trap","text":"
$trap->getOidData('IF-MIB::ifDescr.114');\n
getOidData() requires the full name including any additional index. You can use these functions to search the OID keys.
$trap->findOid('ifDescr'); // returns the first oid key that contains the string\n$trap->findOids('ifDescr'); // returns all oid keys containing the string\n
Submitting new traps requires them to be fully tested. You can find many examples in the tests/Feature/SnmpTraps/ directory.
Here is a basic example of a test for a trap handler that only creates a log message. If your trap modifies the database, you should also test that it does so.
<?php\n\nnamespace LibreNMS\\Tests\\Feature\\SnmpTraps;\n\nclass ColdStratTest extends SnmpTrapTestCase\n{\n public function testColdStart(): void\n {\n $this->assertTrapLogsMessage(rawTrap: <<<'TRAP'\n{{ hostname }}\nUDP: [{{ ip }}]:44298->[192.168.5.5]:162\nDISMAN-EVENT-MIB::sysUpTimeInstance 0:0:1:12.7\nSNMPv2-MIB::snmpTrapOID.0 SNMPv2-MIB::coldStart\nTRAP,\n log: 'SNMP Trap: Device {{ hostname }} cold booted', // The log message sent\n failureMessage: 'Failed to handle SNMPv2-MIB::coldStart', // an informative message to let user know what failed\n args: [4, 'reboot'], // the additional arguments to the log method\n );\n }\n}\n
"},{"location":"Developing/Sensor-State-Support/","title":"Sensor State Support","text":""},{"location":"Developing/Sensor-State-Support/#introduction","title":"Introduction","text":"
In this section we are briefly going to walk through, what it takes to write sensor state support. We will also briefly get around the concepts of the current sensor state monitoring.
Each time a sensor needs to be polled, the system needs to know which sensor it needs to poll, at what OID this sensor is located, what class the sensor is, etc. This information is fetched from the sensors table.
This is where we map the possible returned state sensor values to a generic LibreNMS value, in order to make displaying and alerting more generic. We also map these values to the actual state sensor (state_index) where these values are actually returned from.
The LibreNMS generic states are derived from Nagios:
This example will be based on a Cisco power supply sensor and is all it takes to have sensor state support for Cisco power supplies in Cisco switches. The file should be located in /includes/discovery/sensors/state/cisco.inc.php.
This document is broken down into the relevant sections depending on what support you are adding. During all of these examples we will be using the OS of pulse as the example OS we will add.
Adding the initial detection.
Adding Memory and CPU information.
Adding Health / Sensor information.
Adding Wireless Sensor information.
Adding custom graphs.
Adding Unit tests (required).
Optional Settings
We currently have a script in pre-beta stages that can help speed up the process of deploying a new OS. It has support for adding sensors in a basic form (except state sensors).
In this example, we will add a new OS called test-os using the device ID 101 that has already been added. It will be of the type network and belong to the vendor Cisco:
The script will then step you through some more questions. Please be warned, this is currently pre-beta and may cause some issues. Please let us know of any on Discord.
"},{"location":"Developing/Using-Git/#clone-the-repo","title":"Clone the repo","text":"
Ok, so now that you have forked the repo, you need to clone it to your local install where you can then make the changes you need and submit them back.
cd /opt/\ngit clone git@github.com:yourusername/librenms.git\n
As you become more familiar you may find a better workflow that fits your needs, until then this should be a safe workflow for you to follow.
Before you start work on a new branch / feature, make sure you are up to date.
cd /opt/librenms\ngit checkout master\ngit pull upstream master\ngit push origin master\n
At this stage it's worth pointing out that we have some standard checks that are performed when you submit a pull request, you can run these checks yourself to be sure no issues are present in your pull request.
Now, create a new branch to do your work on. It's important that you do this as you are then able to work on more than one feature at a time and submit them as pull requests individually. If you did all your work in the master branch then it would get a bit messy!
You need to give your branch a name. If an issue is open (or closed on GitHub) then you can use that, in this example if the issue number is 123 then we will use issue-123. If a post exists on the community forum then you can use the post id like community-123. You're also welcome to use any arbitrary name for your branch but try and make it relevant to what the branch is.
git checkout -b issue-123\n
Now, code away. Make the changes you need, test, change and test again :) When you are ready to submit the updates as a pull request then commit away.
git add path/to/new/files/or/folders\ngit commit -a -m 'Added feature to do X, Y and Z'\ngit push origin issue-123\n
If you need to rebase against master then you can do this with:
If after doing this you get some merge conflicts, then you need to resolve these before carrying on.
Please try to squash all commits into one; this isn't essential as we can do this when we merge, but it would be helpful to do this before you submit your pull request.
Now you will be ready to submit a pull request from within GitHub. To do this, go to your GitHub page for the LibreNMS repo. Now select the branch you have just been working on (issue-123) from the drop down to the left and then click 'Pull Request'. Fill in the details to describe the work you have done and click 'Create pull request'.
Thanks for your first pull request :)
Ok, that should get you started on the contributing path. If you have any other questions then stop by our Discord Server
"},{"location":"Developing/Using-Git/#hints-and-tips","title":"Hints and tips","text":"
As part of the pull request process with GitHub we run some automated build tests to ensure that the code is error free, standards compliant and our test suite builds successfully.
Rather than submit a pull request and wait for the results, you can run these checks yourself to ensure a more seamless merge.
All of these commands should be run from within the librenms directory and can be run as the librenms user unless otherwise noted.
Install composer (you can skip this if composer is already installed).
curl -sS https://getcomposer.org/installer | php
Composer will now be installed into /opt/librenms/composer.phar.
Now install the dependencies we require:
./composer.phar install
Once composer is installed you can now run the code validation script:
./lnms dev:check
If you see Tests ok, submit away :) then all is well. If you see other output then it should contain what you need to resolve the issues and re-test.
Git has a hook system which you can use to trigger checks at various stages. Utilising ./lnms dev:check, you can make this part of your commit process.
Add ./lnms dev:check to your .git/hooks/pre-commit:
First we define our graphs in includes/definitions.inc.php to share our work and contribute to the development of LibreNMS. :-) (or place in config.php if you don't plan to contribute)
OS polling is not necessarily where custom polling should be done, please speak to one of the core devs in Discord for guidance.
Let's update our example file to add additional polling:
includes/polling/os/pulse.inc.php\n
We declare two specific graphs for user and session counts. These two graphs will be displayed in the firewall section of the graphs tab, as written in the definition include file.
This document will guide you through adding health / sensor information for your new device.
Currently, we have support for the following health metrics along with the values we expect to see the data in:
| Class | Measurement |
| --- | --- |
| airflow | cfm |
| ber | ratio |
| charge | % |
| chromatic_dispersion | ps/nm |
| cooling | W |
| count | # |
| current | A |
| dbm | dBm |
| delay | s |
| eer | eer |
| fanspeed | rpm |
| frequency | Hz |
| humidity | % |
| load | % |
| loss | % |
| power | W |
| power_consumed | kWh |
| power_factor | ratio |
| pressure | kPa |
| quality_factor | dB |
| runtime | Min |
| signal | dBm |
| snr | SNR |
| state | # |
| temperature | C |
| tv_signal | dBmV |
| bitrate | bps |
| voltage | V |
| waterflow | l/m |
| percent | % |
"},{"location":"Developing/os/Health-Information/#simple-health-discovery","title":"Simple health discovery","text":"
We have support for defining health / sensor discovery using YAML files so that you don't need to know how to write PHP.
Please note that DISPLAY-HINTS are disabled so ensure you use the correct divisor / multiplier if applicable.
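For example, if a temperature sensor returns tenths of a degree as a raw integer, the definition needs divisor: 10 (the values here are hypothetical):

```python
# Sketch: with DISPLAY-HINTs disabled, a sensor reporting tenths of a degree
# returns the raw integer, so the definition needs divisor: 10
raw_value = 215          # hypothetical raw SNMP return, meaning 21.5 degrees
divisor = 10
multiplier = 1
print(raw_value * multiplier / divisor)  # → 21.5
```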
All yaml files are located in includes/definitions/discovery/$os.yaml. Defining the information here is not always possible and is heavily reliant on vendors being sensible with the MIBs they generate. Only snmp walks are supported, and you must provide a sane table that can be traversed and contains all the data you need. We will use netbotz as an example here.
At the top you can define one or more mibs to be used in the lookup of data:
mib: NETBOTZV2-MIB. For use of multiple MIB files, separate them with a colon: mib: NETBOTZV2-MIB:SECOND-MIB
For data: you have the following options:
The only sensor we have defined here is airflow. The available options are as follows:
oid (required): This is the name of the table you want to snmp walk for data.
value (optional): This is the key within the table that contains the value. If not provided, the oid will be used.
num_oid (required for pull requests): If not provided, this parameter is computed automatically by the discovery process, but it is still required to submit a pull request. This is the numerical OID that contains value. This should usually include {{ $index }}. In case the index is a string, {{ $str_index_as_numeric }} can be used instead and will convert the string to the equivalent OID representation.
divisor (optional): This is the divisor to use against the returned value.
multiplier (optional): This is the multiplier to use against the returned value.
low_limit (optional): This is the critical low threshold that value should be (used in alerting). If an OID is specified then divisor / multiplier are used.
low_warn_limit (optional): This is the warning low threshold that value should be (used in alerting). If an OID is specified then divisor / multiplier are used.
warn_limit (optional): This is the warning high threshold that value should be (used in alerting). If an OID is specified then divisor / multiplier are used.
high_limit (optional): This is the critical high threshold that value should be (used in alerting). If an OID is specified then divisor / multiplier are used.
descr (required): The visible label for this sensor. It can be a key within the table or a static string, optionally using {{ index }}.
group (optional): Groups sensors together in the webui, displaying this text. Not specifying this will put the sensors in the default group.
index (optional): This is the index value we use to uniquely identify this sensor. {{ $index }} will be replaced by the index from the snmp walk.
skip_values (optional): This is an array of values we should skip over (see note below).
skip_value_lt (optional): If sensor value is less than this, skip the discovery.
skip_value_gt (optional): If sensor value is greater than this, skip the discovery.
entPhysicalIndex and entPhysicalIndex_measured (optional): If the sensor belongs to a physical entity then you can link them here. The currently supported variants are:
entPhysicalIndex contains the entPhysicalIndex from entPhysical table, and entPhysicalIndex_measured is NULL
entPhysicalIndex contains \"ifIndex\" value of the linked port and entPhysicalIndex_measured contains \"ports\"
user_func (optional): You can provide a function name for the sensor's value to be processed through (e.g. to convert Fahrenheit to Celsius, use fahrenheit_to_celsius).
snmp_flags (optional): This sets the flags to be sent to snmpwalk; it overrides flags set on the sensor type and os. The default is '-OQUb'. A common issue is dealing with string indexes; setting '-OQUsbe' will change them to numeric oids. Setting ['-OQUsbe', '-Pu'] will also allow _ in oid names. You can find more in the Man Page
rrd_type (optional): You can change the type of the RRD file that will be created to store the data. By default, type GAUGE is used. More details can be found here: https://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html
For options: you have the following available:
divisor: This is the divisor to use against the returned value.
multiplier: This is the multiplier to use against the returned value.
skip_values: This is an array of values we should skip over (see note below).
skip_value_lt: If sensor value is less than this, skip the discovery.
skip_value_gt: If sensor value is greater than this, skip the discovery.
Multiple variables can be used in the sensor's definition. The syntax is {{ $variable }}. Any oid in the current table can be used, as well as pre_cached data. The index ($index) and the sub_indexes (in case the oid is indexed multiple times) are also available: if $index=\"1.20\", then $subindex0=\"1\" and $subindex1=\"20\".
When referencing an oid in another table the full index will be used to match the other table. If this is undesirable, you may use a single sub index by appending the sub index after a colon to the variable name. Example {{ $ifName:2 }}
skip_values can also compare items within the OID table against values. The index of the sensor is used to retrieve the value from the OID, unless a target index is appended to the OID. Additionally, you may check fields from the device. Comparisons behave on a logical OR basis when chained, so only one of them needs to be matched for that particular sensor to be skipped during discovery. An example of this is below:
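A hedged sketch of such a chained skip_values definition (the OID names and values are illustrative):

```yaml
skip_values:
    -
        oid: sensorAdminState
        op: '!='
        value: 1
    -
        oid: sensorType
        op: '='
        value: 4
```

If either comparison matches for a given index, that sensor is skipped during discovery.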
If you aren't able to use yaml to perform the sensor discovery, you will most likely need to use Advanced health discovery.
"},{"location":"Developing/os/Health-Information/#advanced-health-discovery","title":"Advanced health discovery","text":"
If you can't use the yaml files as above, then you will need to create the discovery code in PHP. If it is possible to discover the sensor via yaml, PHP discovery will likely be rejected due to the much higher chance of later problems, so using yaml is highly recommended.
The directory structure for sensor information is includes/discovery/sensors/$class/$os.inc.php. The format of all the sensors follows the same code format which is to collect sensor information via SNMP and then call the discover_sensor() function; except state sensors which requires additional code. Sensor information is commonly found in an ENTITY mib supplied by device's vendor in the form of a table. Other mib tables may be used as well. Sensor information is first collected by includes/discovery/sensors/pre_cache/$os.inc.php. This program will pull in data from mib tables into a $pre_cache array that can then be used in includes/discovery/sensors/$class/$os.inc.php to extract specific values which are then passed to discover_sensor().
discover_sensor() Accepts the following arguments:
&$valid = This is always null. This is unused.
$class = Required. This is the sensor class from the table above (i.e humidity).
$device = Required. This is the $device array.
$oid = Required. This must be the numerical OID for where the data can be found, i.e .1.2.3.4.5.6.7.0
$index = Required. This must be unique for this sensor class, device and type. Typically it's the index from the table being walked, or it could be the name of the OID if it's a single value.
$type = Required. This should be the OS name, i.e. pulse.
$descr = Required. This is a descriptive value for the sensor. Some devices will provide names to use.
$divisor = Defaults to 1. This is used to divide the returned value.
$multiplier = Defaults to 1. This is used to multiply the returned value.
$low_limit = Defaults to null. Sets the low threshold limit for the sensor, used in alerting to report out of range sensors.
$low_warn_limit = Defaults to null. Sets the low warning limit for the sensor, used in alerting to report near out of range sensors.
$warn_limit = Defaults to null. Sets the high warning limit for the sensor, used in alerting to report near out of range sensors.
$high_limit = Defaults to null. Sets the high limit for the sensor, used in alerting to report out of range sensors.
$current = Defaults to null. Can be used to set the current value on discovery. Poller will update this on the next poll cycle anyway.
$poller_type = Defaults to snmp. Things like the unix-agent can set different values but for the most part this should be left as snmp.
$entPhysicalIndex = Defaults to null. Sets the entPhysicalIndex to be used to look up further hardware if available.
$entPhysicalIndex_measured = Defaults to null. Sets the type of entPhysicalIndex used, i.e ports.
$user_func = Defaults to null. You can provide a function name for the sensors value to be processed through (i.e. Convert fahrenheit to celsius use fahrenheit_to_celsius)
$group = Defaults to null. Groups sensors together in the webui, displaying this text.
$rrd_type = Defaults to 'GAUGE'. Allows changing the type of the RRD file created for this sensor. More details can be found in the RRD documentation: https://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html
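Putting the arguments together, a hedged sketch of a discover_sensor() call for a hypothetical temperature sensor (the OID, divisor, and limit are illustrative values, not taken from a real MIB):

```php
// Illustrative values for a hypothetical OS named 'exampleos'
$valid = null; // the first argument is unused
discover_sensor(
    $valid,                               // &$valid (unused)
    'temperature',                        // $class
    $device,                              // $device array
    '.1.3.6.1.4.1.99999.1.2.' . $index,   // $oid (numeric, illustrative)
    $index,                               // $index (unique per class/type)
    'exampleos',                          // $type (the OS name)
    $descr,                               // $descr
    10,                                   // $divisor
    1,                                    // $multiplier
    null,                                 // $low_limit
    null,                                 // $low_warn_limit
    null,                                 // $warn_limit
    60                                    // $high_limit (illustrative)
);
```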
For the majority of devices, this is all that's required to add support for a sensor. Polling is done based on the data gathered using discover_sensor(). If custom polling is needed then the file format is similar to discovery: includes/polling/sensors/$class/$os.inc.php. Whilst it's possible to perform additional snmp queries within polling this should be avoided where possible. The value for the OID is already available as $sensor_value.
Graphing is performed automatically for sensors, no custom graphing is required or supported.
"},{"location":"Developing/os/Health-Information/#adding-a-new-sensor-class","title":"Adding a new sensor class","text":"
You will need to add code for your new sensor class in the following existing files:
app/Models/Sensor.php: add a free icon from Font Awesome in the $icons array.
doc/Developing/os/Health-Information.md: documentation for every sensor class is mandatory.
includes/discovery/sensors.inc.php: add the sensor class to the $run_sensors array.
includes/discovery/functions.inc.php: optional - if sensible low_limit and high_limit values are guessable when a SNMP-retrievable threshold is not available, add a case for the sensor class to the sensor_limit() and/or sensor_low_limit() functions.
LibreNMS/Util/ObjectCache.php: optional - choose menu grouping for the sensor class.
includes/html/pages/device/health.inc.php: add a dbFetchCell(), $datas[], and $type_text[] entry for the sensor class.
includes/html/pages/device/overview.inc.php: add require 'overview/sensors/$class.inc.php' in the desired order for the device overview page.
includes/html/pages/health.inc.php: add a $type_text[] entry for the sensor class.
lang/en/sensors.php: add human-readable names and units for the sensor class in English, feel free to do so for other languages as well.
Create and populate new files for the sensor class in the following places:
includes/discovery/sensors/$class/: create the folder where advanced php-based discovery files are stored. Not used for yaml discovery.
includes/html/graphs/device/$class.inc.php: define unit names used in RRDtool graphs.
includes/html/graphs/sensor/$class.inc.php: define various parameters for RRDtool graphs.
"},{"location":"Developing/os/Health-Information/#advanced-health-sensor-example","title":"Advanced health sensor example","text":"
This example shows how to build sensors using the advanced method. In this example we will be collecting optical power level (dBm) from Adva FSP150CC family MetroE devices. This example will assume an understanding of SNMP and MIBs.
First we set up includes/discovery/sensors/pre_cache/adva_fsp150.inc.php as shown below. The first line walks the cmEntityObject table to get information about the chassis and line cards. From this information we extract the model type, which identifies which tables in the CM-Facility-Mib the ports are populated in. The program then reads the appropriate table into the $pre_cache array adva_fsp150_ports. This array will have OID indices for each port, which we will use later to identify our sensor OIDs.
Next we are going to build our sensor discovery code. These are optical readings, so the file will be created as the dBm sensor type in includes/discovery/sensors/dbm/adva_fsp150.inc.php. Below is a snippet of the code:
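A hedged reconstruction of the logic described below; the exact MIB object names and OIDs are illustrative and should be checked against the vendor's CM-FACILITY-MIB:

```php
<?php
// Illustrative sketch of the dBm discovery logic described in this section
foreach ($pre_cache['adva_fsp150_ports'] as $index => $entry) {
    // Only fiber media can report optical (DOM) readings
    if ($entry['cmEthernetTrafficPortMediaType'] !== 'fiber') {
        continue;
    }

    // Build the receive/transmit OIDs for this port (OIDs illustrative)
    $rx_oid = '.1.3.6.1.4.1.2544.1.12.4.1.3.1.26.' . $index;
    $tx_oid = '.1.3.6.1.4.1.2544.1.12.4.1.3.1.27.' . $index;

    $currentRx = snmp_get($device, $rx_oid, '-Oqv');
    $currentTx = snmp_get($device, $tx_oid, '-Oqv');

    // SFPs without DOM report 0 for both values on Adva devices
    if ($currentRx == 0 && $currentTx == 0) {
        continue;
    }

    $entPhysicalIndex = $entry['cmEthernetTrafficPortIfIndex'];

    // Use the port description as the sensor label
    $descr = dbFetchCell(
        'SELECT ifDescr FROM ports WHERE device_id = ? AND ifIndex = ?',
        [$device['device_id'], $entPhysicalIndex]
    );

    discover_sensor($valid['sensor'], 'dbm', $device, $rx_oid, 'rx-' . $index,
        'adva_fsp150', $descr . ' RX', 10, 1, null, null, null, null,
        $currentRx, 'snmp', $entPhysicalIndex, 'ports');
}
```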
First the program will loop through each port's index value. In the case of Advas, the ports are named Ethernet 1-1-1-1, 1-1-1-2, etc, and they are indexed as oid.1.1.1.1, oid.1.1.1.2, etc in the mib.
Next the program checks which table the port exists in and that the connector type is 'fiber'. There are other port tables in the full code that were omitted from the example for brevity. Copper media won't have optical readings, so if the media type isn't fiber we skip discovery for that port.
The next two lines build the OIDs for getting the optical receive and transmit values using the $index for the port. Using the OIDs, the program gets the current receive and transmit values ($currentRx and $currentTx respectively) to verify the values are not 0. Not all SFPs collect digital optical monitoring (DOM) data; in the case of Adva, the value of both transmit and receive will be 0 if DOM is not available. While 0 is a valid value for optical power, it's extremely unlikely that both will be 0 if DOM is present. If DOM is not available, then the program stops discovery for that port. Note that while this is the case with Adva, other vendors may differ in how they handle optics that do not supply DOM. Please check your vendor's mibs.
Next the program assigns the values of $entPhysicalIndex and $entPhysicalIndex_measured. In this case $entPhysicalIndex is set to the value of the cmEthernetTrafficPortIfIndex so that it is associated with port. This will also allow the sensor graphs to show up on the associated port's page in the GUI in addition to the Health page.
Following that the program uses a database call to get the description of the port which will be used as the title for the graph in the GUI.
Lastly the program calls discover_sensor() and passes the information collected in the previous steps. The null values are for low, low warning, high, and high warning values, which are not collected in the Adva's MIB.
You can manually run discovery to verify the code works by running ./discovery.php -h $device_id -m sensors. You can use -v to see what calls are being used during discovery and -d to see debug output. In the output under #### Load disco module sensors #### you can see a list of sensor types. If there is a +, a sensor was added; if there is a -, one was deleted; and a . means no change. If there is nothing next to the sensor type then the sensor was not discovered. There is also information about changes to the database and RRD files at the bottom.
OS discovery is how LibreNMS detects which OS should be used for a device. Generally detection should use sysObjectID or sysDescr, but you can also snmpget an oid and check for a value. snmpget is discouraged because it slows down all os detections, not just the added os.
To begin, create the new OS file which should be called includes/definitions/pulse.yaml. Here is a working example:
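A hedged sketch of what such a definition file can look like (the sysObjectID prefix shown is illustrative; use the vendor's actual enterprise OID):

```yaml
os: pulse
text: 'Pulse Secure'
type: firewall
icon: pulse
discovery:
    - sysObjectID:
        - .1.3.6.1.4.1.12532.
```

sysObjectID matching against a vendor OID prefix is preferred, since it avoids extra snmpget calls during detection.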
mib_dir: You can use this to specify an additional directory to look in for MIBs. An array is not accepted, only one directory may be specified.
mib_dir: juniper\n
poller_modules: This is a list of poller modules to either enable (1) or disable (0). Check misc/config_definitions.json to see which modules are enabled/disabled by default.
discovery_modules: This is the list of discovery modules to either enable (1) or disable (0). Check misc/config_definitions.json to see which modules are enabled/disabled by default.
OS discovery collects additional standardized data about the OS. These are specified in the discovery yaml includes/definitions/discovery/<os>.yaml or LibreNMS/OS/<os>.php if more complex collection is required.
version The version of the OS running on the device.
hardware The hardware version for the device. For example: 'WS-C3560X-24T-S'
features Features for the device, for example a list of enabled software features.
serial The main serial number of the device.
"},{"location":"Developing/os/Initial-Detection/#yaml-based-os-discovery","title":"Yaml based OS discovery","text":"
sysDescr_regex apply a regex or list of regexes to the sysDescr to extract named groups, this data has the lowest precedence
<field> specify an oid or list of oids to attempt to pull the data from, the first non-empty response will be used
<field>_regex parse the value out of the returned oid data, must use a named group
<field>_template combine multiple oid results together to create a final string value. The result is trimmed.
<field>_replace An array of replacements ['search regex', 'replace'] or regex to remove
hardware_mib MIB used to translate sysObjectID to get hardware. hardware_regex can process the result.
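A hedged sketch of how these fields fit together in includes/definitions/discovery/&lt;os&gt;.yaml (the MIB and regex shown are hypothetical):

```yaml
modules:
    os:
        # lowest precedence: extract named groups from sysDescr
        sysDescr_regex: '/^ExampleOS (?<version>\S+) \((?<hardware>\S+)\)/'
        # direct oid lookups; first non-empty response wins
        version: EXAMPLE-MIB::exampleVersion.0
        serial: EXAMPLE-MIB::exampleSerialNumber.0
        version_regex: '/Version (?<version>\S+)/'
```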
If the device has MIBs available and you use it in the detection then you can add these in. It is highly recommended that you add mibs to a vendor specific directory. For instance HP mibs are in mibs/hp. Please ensure that these directories are specified in the yaml detection file, see mib_dir above.
"},{"location":"Developing/os/Initial-Detection/#icon-and-logo","title":"Icon and Logo","text":"
It is highly recommended to use SVG images where possible, these scale and provide a nice visual image for users with HiDPI screens. If you can't find SVG images then please use png.
Create an SVG image of the icon and logo. Legacy PNG bitmaps are also supported but look bad on HiDPI.
A vector image should not contain padding.
The file should not be larger than 20 Kb. Simplify paths to reduce large files.
Use plain SVG without gzip compression.
The SVG root element must not contain length and width attributes, only viewBox.
Use Path -> Simplify to simplify paths of large files.
Use File -> Document Properties\u2026 -> Resize page to content\u2026 to remove padding.
Use File -> Clean up document to remove unused gradients, patterns, or markers.
Use File -> Save As -> Plain SVG to save the final image.
By optimizing the SVG you can shrink the file size, in some cases to less than 20% of the original. SVG Optimizer does a great job. There is also an online version.
"},{"location":"Developing/os/Initial-Detection/#the-final-check","title":"The final check","text":"
Discovery
./discovery.php -d -h HOSTNAME\n
Polling
lnms device:poll HOSTNAME\n
At this step we should see all the values retrieved in LibreNMS.
Note: If you have made a number of changes to the OS's discovery files, it's possible earlier edits have been cached. As such, if you do not get the expected behaviour when completing the final check above, try removing the cache file first:
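On a typical install the compiled OS definitions are cached under the LibreNMS cache directory; the exact filename below is an assumption and may differ between versions:

```
rm cache/os_defs.cache
```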
LibreNMS will attempt to detect memory statistics using the standard HOST-RESOURCES-MIB and UCD-SNMP-MIB MIBs. To detect non-standard MIBs, they can be defined via Yaml.
In order to successfully detect memory amount and usage, two of the four keys below (total, used, free, percent_used) are required. Some OS only provide a usage percentage, which will work, but a total RAM amount will not be displayed.
The code can also interpret table based OIDs and supports many of the same features as Health Sensors including {{ }} parsing, skip_values, and precache.
Valid data entry keys:
oid oid to walk to collect memory data
total oid or integer total memory size in bytes (or precision)
used oid memory used in bytes (or precision)
free oid memory free in bytes (or precision)
percent_used oid of percentage of used memory
descr A visible description of the memory measurement defaults to \"Memory\"
warn_percent Usage percentage to use for alert purposes
precision precision for all byte values, typically a power of 2 (1024 for example)
class used to generate rrd filename, defaults to system. If system, buffers, and cached exist they will be combined to calculate available memory.
type used to generate rrd filename, defaults to the os name
index used to generate rrd filename, defaults to the oid index
skip_values skip values see Health Sensors for specification
snmp_flags additional net-snmp flags
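Using the keys above, a hedged sketch of a mempools definition (the MIB objects are hypothetical):

```yaml
modules:
    mempools:
        data:
            -
                total: EXAMPLE-MIB::memTotal.0
                used: EXAMPLE-MIB::memUsed.0
                precision: 1024
                descr: Memory
                warn_percent: 90
```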
"},{"location":"Developing/os/Mem-CPU-Information/#custom-processor-discovery-and-polling","title":"Custom Processor Discovery and Polling","text":"
If you need to implement custom discovery or polling you can implement the MempoolsDiscovery interface and the MempoolsPolling interface in the OS class. MempoolsPolling is optional; if it is not implemented, standard polling will be used based on OIDs stored in the database.
OS Class files reside under LibreNMS\\OS
<?php\n\nnamespace LibreNMS\\OS;\n\nuse LibreNMS\\Interfaces\\Discovery\\MempoolsDiscovery;\nuse LibreNMS\\Interfaces\\Polling\\MempoolsPolling;\n\nclass Example extends \\LibreNMS\\OS implements MempoolsDiscovery, MempoolsPolling\n{\n /**\n * Discover a Collection of Mempool models.\n * Will be keyed by mempool_type and mempool_index\n *\n * @return \\Illuminate\\Support\\Collection \\App\\Models\\Mempool\n */\n public function discoverMempools()\n {\n // TODO: Implement discoverMempools() method.\n }\n\n /**\n * @param \\Illuminate\\Support\\Collection $mempools \\App\\Models\\Mempool\n * @return \\Illuminate\\Support\\Collection \\App\\Models\\Mempool\n */\n public function pollMempools($mempools)\n {\n // TODO: Implement pollMempools() method.\n }\n}\n
| Key | Default | Description |
|-----|---------|-------------|
| oid | required | The string based oid to fetch data, could be a table or a single value |
| num_oid | optional | The numerical oid to fetch data from when polling, usually should be appended by {{ $index }}. Computed by discovery process if not provided. |
| value | optional | Oid to retrieve data from, primarily used for tables |
| precision | 1 | The multiplier to multiply the data by. If this is negative, the data will be multiplied then subtracted from 100. |
| descr | Processor | Description of this processor, may be an oid or plain string. Helpful values: {{ $index }} and {{ $count }} |
| type | | Name of this sensor. This is used with the index to generate a unique id for this sensor. |
| index | {{ $index }} | The index of this sensor, defaults to the index of the oid. |
| skip_values | optional | Do not detect this sensor if the value matches |
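Using those keys, a hedged sketch of a table-based processors definition (the table and OID shown are hypothetical):

```yaml
modules:
    processors:
        data:
            -
                oid: exampleCpuTable
                value: exampleCpuUsage
                num_oid: '.1.3.6.1.4.1.99999.2.1.1.5.{{ $index }}'
                descr: 'Processor {{ $index }}'
```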
Accessing values within yaml:
{{ $index }}: The index after the given oid
{{ $count }}: The count of entries (starting with 1)
{{ $oid }}: Any oid in the table or pre-fetched
"},{"location":"Developing/os/Mem-CPU-Information/#custom-processor-discovery-and-polling_1","title":"Custom Processor Discovery and Polling","text":"
If you need to implement custom discovery or polling you can implement the ProcessorDiscovery interface and the ProcessorPolling interface in the OS class.
OS Class files reside under LibreNMS\\OS
<?php\nnamespace LibreNMS\\OS;\n\nuse LibreNMS\\Device\\Processor;\nuse LibreNMS\\Interfaces\\Discovery\\ProcessorDiscovery;\nuse LibreNMS\\Interfaces\\Polling\\ProcessorPolling;\nuse LibreNMS\\OS;\n\nclass ExampleOS extends OS implements ProcessorDiscovery, ProcessorPolling\n{\n /**\n * Discover processors.\n * Returns an array of LibreNMS\\Device\\Processor objects that have been discovered\n *\n * @return array Processors\n */\n public function discoverProcessors()\n {\n // discovery code here\n }\n\n /**\n * Poll processor data. This can be implemented if custom polling is needed.\n *\n * @param array $processors Array of processor entries from the database that need to be polled\n * @return array of polled data\n */\n public function pollProcessors(array $processors)\n {\n // polling code here\n }\n}\n
"},{"location":"Developing/os/Settings/","title":"Optional OS Settings","text":"
This page documents settings that can be set in the os yaml files or in config.php. All settings listed here are optional. If they are not set, the global default will be used.
"},{"location":"Developing/os/Settings/#user-override-in-configphp","title":"User override in config.php","text":"
Users can override these settings in their config.php.
By default we use ifDescr to label ports/interfaces. Setting either ifname or ifalias will override that. Only set one of these. ifAlias is user supplied. ifindex will append the ifindex to the port label.
ifname: true\nifalias: true\n\nifindex: true\n
"},{"location":"Developing/os/Settings/#poller-and-discovery-modules","title":"Poller and Discovery Modules","text":"
The various discovery and poller modules can be enabled or disabled per OS. The defaults are usually reasonable, so likely you won't want to change more than a few. These modules can be enabled or disabled per-device in the webui and per os or globally in config.php. Usually, a poller module will not work if its corresponding discovery module is not enabled.
You should avoid setting these to false in the OS definitions unless it has a significant negative impact on polling. Setting modules in the definition reduces user control of modules.
Some devices have buggy snmp implementations and don't respond well to the more efficient snmpbulkwalk. To disable snmpbulkwalk and only use snmpwalk for an OS set the following.
snmp_bulk: false\n
If only some specific OIDs fail with snmpbulkwalk, you can disable just those OIDs. This needs to match exactly the OID being walked by LibreNMS. MIB::oid is preferred to prevent name collisions.
oids:\n no_bulk:\n - UCD-SNMP-MIB::laLoadInt\n
"},{"location":"Developing/os/Settings/#limit-the-oids-per-snmpget","title":"Limit the oids per snmpget","text":"
Tests ensure LibreNMS works as expected, now and in the future. New OS should provide as much test data as needed and added test data for existing OS is welcome.
Saved snmp data can be found in tests/snmpsim/*.snmprec and saved database data can be found in tests/data/*.json. Please review this for any sensitive data before submitting. When replacing data, make sure it is modified in a consistent manner.
We utilise snmpsim to do unit testing. For OS discovery, we can mock snmpsim, but for other tests you will need it installed and functioning. We run snmpsim during our integration tests, but not by default when running lnms dev:check. You can install snmpsim with the command pip3 install snmpsim.
"},{"location":"Developing/os/Test-Units/#capturing-test-data","title":"Capturing test data","text":"If test data already exists
If test data already exists but is for a different device/configuration with the same OS, you should use the --variant (-v) option to specify a different variant of the OS; this will be tested completely separately from other variants. If there is only one variant, please do not specify one.
./scripts/collect-snmp-data.php is provided to make it easy to collect data for tests. Running collect-snmp-data.php with the --hostname (-h) allows you to capture all data used to discover and poll a device already added to LibreNMS. Make sure to re-run the script if you add additional support. Check the command-line help for more options.
"},{"location":"Developing/os/Test-Units/#2-save-test-data","title":"2. Save test data","text":"
After you have collected snmp data, run ./scripts/save-test-data.php with the --os (-o) option to dump the post discovery and post poll database entries to json files. This step requires snmpsim, if you are having issues, the maintainers may help you generate it from the snmprec you created in the previous step.
Generally, you will only need to collect data once. After you have the data you need in the snmprec file, you can just use save-test-data.php to update the database dump (json) after that.
Note: To run tests, ensure you have executed ./scripts/composer_wrapper.php install from your LibreNMS root directory. This will read composer.json and install any dependencies required.
After you have saved your test data, you should run lnms dev:check to verify the tests pass.
To run the full suite of tests enable database and snmpsim reliant tests: lnms dev:check unit --db --snmpsim
Snmprec files are simple files that store the snmp data. The data format is simple with three columns: numeric oid, type code, and data. Here is an example snippet.
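A minimal hedged example (the sysDescr string and sysObjectID value are invented for illustration):

```
1.3.6.1.2.1.1.1.0|4|ExampleOS Router, Version 1.2.3
1.3.6.1.2.1.1.2.0|6|1.3.6.1.4.1.99999.1.1
```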
During testing LibreNMS will use any info in the snmprec file for snmp calls. This one provides sysDescr (.1.3.6.1.2.1.1.1.0, 4 = Octet String) and sysObjectID (.1.3.6.1.2.1.1.2.0, 6 = Object Identifier), which is the minimum that should be provided for new snmprec files.
To look up the numeric OID and type of a string OID with snmptranslate:
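For example, to get the numeric OID (snmptranslate is part of net-snmp; add -Td to also print the full definition including the SYNTAX/type):

```
snmptranslate -On SNMPv2-MIB::sysDescr.0
.1.3.6.1.2.1.1.1.0
```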
If the base os (.snmprec) already contains test data for the module you are testing or that data conflicts with your new data, you must use a variant to store your test data (-v)."},{"location":"Developing/os/Test-Units/#add-initial-detection","title":"Add initial detection","text":"
Add device to LibreNMS. It is generic and device_id = 42
Run ./scripts/collect-snmp-data.php -h 42, initial snmprec will be created
Add initial detection for example-os
Run discovery to make sure it detects properly ./discovery.php -h 42
Add any additional os items like version, hardware, features, or serial.
If there is additional snmp data required, run ./scripts/collect-snmp-data.php -h 42
Run ./scripts/save-test-data.php -o example-os to update the dumped database data.
Review data. If you modified the snmprec or code (don't modify json manually) run ./scripts/save-test-data.php -o example-os -m os
Run lnms dev:check unit --db --snmpsim
If the tests succeed submit a pull request
"},{"location":"Developing/os/Test-Units/#additional-module-support-or-test-data","title":"Additional module support or test data","text":"
Add code to support module or support already exists.
./scripts/collect-snmp-data.php -h 42 -m <module>, this will add more data to the snmprec file
Review data. If you modified the snmprec (don't modify json manually) run ./scripts/save-test-data.php -o example-os -m <module>
Run lnms dev:check unit --db --snmpsim
If the tests succeed submit a pull request
"},{"location":"Developing/os/Test-Units/#json-application-test-writing-using-scriptsjson-app-toolphp","title":"JSON Application Test Writing Using ./scripts/json-app-tool.php","text":"
First you will need a good example JSON output produced via SNMP extend in question.
Read the help via ./scripts/json-app-tool.php -h.
Generate the SNMPrec data via ./scripts/json-app-tool.php -a appName -s > ./tests/snmpsim/linux_appName-v1.snmprec. If the SNMP extend name OID is different from the application name, then you will need to pass the -S flag to override it.
Generate the test JSON data via ./scripts/json-app-tool.php -a appName -t > ./tests/data/linux_appName-v1.json.
Update the generated './tests/data/linux_appName-v1.json' making sure that all the expected metrics are present. This assumes that everything under .data in the JSON will be collapsed and used.
During test runs, if the app does not appear to be detected and it has an app name that differs from the SNMP extend name OID, make sure that -S is set properly and that includes/discovery/applications.inc.php has been updated.
This document will guide you through adding wireless sensors for your new wireless device.
Currently we have support for the following wireless metrics along with the values we expect to see the data in:
| Type | Measurement | Interface | Description |
|------|-------------|-----------|-------------|
| ap-count | % | WirelessApCountDiscovery | The number of APs attached to this controller |
| capacity | % | WirelessCapacityDiscovery | The % of operating rate vs theoretical max |
| ccq | % | WirelessCcqDiscovery | The Client Connection Quality |
| channel | count | WirelessChannelDiscovery | The channel, use of frequency is preferred |
| cell | count | WirelessCellDiscovery | The cell in a multicell technology |
| clients | count | WirelessClientsDiscovery | The number of clients connected to/managed by this device |
| distance | km | WirelessDistanceDiscovery | The distance of a radio link in Kilometers |
| error-rate | bps | WirelessErrorRateDiscovery | The rate of errored packets or bits, etc |
| error-ratio | % | WirelessErrorRatioDiscovery | The percent of errored packets or bits, etc |
| errors | count | WirelessErrorsDiscovery | The total bits of errored packets or bits, etc |
| frequency | MHz | WirelessFrequencyDiscovery | The frequency of the radio in MHz, channels can be converted |
| mse | dB | WirelessMseDiscovery | The Mean Square Error |
| noise-floor | dBm | WirelessNoiseFloorDiscovery | The amount of noise received by the radio |
| power | dBm | WirelessPowerDiscovery | The power of transmit or receive, including signal level |
| quality | % | WirelessQualityDiscovery | The % of quality of the link, 100% = perfect link |
| rate | bps | WirelessRateDiscovery | The negotiated rate of the connection (not data transfer) |
| rssi | dBm | WirelessRssiDiscovery | The Received Signal Strength Indicator |
| snr | dB | WirelessSnrDiscovery | The Signal to Noise ratio, which is signal - noise floor |
| sinr | dB | WirelessSinrDiscovery | The Signal-to-Interference-plus-Noise Ratio |
| rsrq | dB | WirelessRsrqDiscovery | The Reference Signal Received Quality |
| rsrp | dBm | WirelessRsrpDiscovery | The Reference Signals Received Power |
| xpi | dBm | WirelessXpiDiscovery | The Cross Polar Interference values |
| ssr | dB | WirelessSsrDiscovery | The Signal strength ratio, the ratio (or difference) of Vertical rx power to Horizontal rx power |
| utilization | % | WirelessUtilizationDiscovery | The % of utilization compared to the current rate |
You will need to create a new OS class for your OS if one doesn't exist under LibreNMS/OS. The name of this file should be the OS name in camel case, for example airos -> Airos, ios-wlc -> IosWlc.
Your new OS class should extend LibreNMS\OS and implement the interfaces for the sensors your OS supports.
```
namespace LibreNMS\OS;

use LibreNMS\Device\WirelessSensor;
use LibreNMS\Interfaces\Discovery\Sensors\WirelessClientsDiscovery;
use LibreNMS\OS;

class Airos extends OS implements WirelessClientsDiscovery
{
    public function discoverWirelessClients()
    {
        $oid = '.1.3.6.1.4.1.41112.1.4.5.1.15.1'; //UBNT-AirMAX-MIB::ubntWlStatStaCount.1
        return array(
            new WirelessSensor('clients', $this->getDeviceId(), $oid, 'airos', 1, 'Clients')
        );
    }
}
```
All discovery interfaces will require you to return an array of WirelessSensor objects.
new WirelessSensor() Accepts the following arguments:
$type = Required. This is the sensor class from the table above (i.e humidity).
$device_id = Required. You can get this value with $this->getDeviceId()
$oids = Required. This must be the numerical OID for where the data can be found, i.e .1.2.3.4.5.6.7.0. If this is an array of oids, you should probably specify an $aggregator.
$subtype = Required. This should be the OS name, i.e airos.
$index = Required. This must be unique for this sensor type, device and subtype. Typically it's the index from the table being walked or it could be the name of the OID if it's a single value.
$description = Required. This is a descriptive value for the sensor, shown to the user. If this is a per-SSID statistic, using SSID: $ssid here is appropriate.
$current = Defaults to null. Can be used to set the current value on discovery. If this is null the values will be polled right away and if they do not return valid value(s), the sensor will not be discovered. Supplying a value here implies you have already verified this sensor is valid.
$multiplier = Defaults to 1. This is used to multiply the returned value.
$divisor = Defaults to 1. This is used to divide the returned value.
$aggregator = Defaults to sum. Valid values: sum, avg. This will combine multiple values from multiple oids into one.
$access_point_id = Defaults to null. If this is a wireless controller, you can link sensors to entries in the access_points table.
$high_limit = Defaults to null. Sets the high limit for the sensor, used in alerting to report out range sensors.
$low_limit = Defaults to null. Sets the low threshold limit for the sensor, used in alerting to report out range sensors.
$high_warn = Defaults to null. Sets the high warning limit for the sensor, used in alerting to report near out of range sensors.
$low_warn = Defaults to null. Sets the low warning limit for the sensor, used in alerting to report near out of range sensors.
$entPhysicalIndex = Defaults to null. Sets the entPhysicalIndex to be used to look up further hardware if available.
$entPhysicalIndexMeasured = Defaults to null. Sets the type of entPhysicalIndex used, i.e ports.
Polling is done automatically based on the discovered data. If for some reason you need to override polling, you can implement the required polling interface in LibreNMS/Interfaces/Polling/Sensors. Using the polling interfaces should be avoided if possible.
Graphing is performed automatically for wireless sensors, no custom graphing is required or supported.
The agent can be used to gather data from remote systems. For this, LibreNMS is used in combination with check_mk (found here). The agent can be extended to include data about applications on the remote system.
5: Copy each of the scripts from agent-local/ that you require to be graphed into /usr/lib/check_mk_agent/local. You can find detailed setup instructions for specific applications above.
6: Make each one that you want to use executable with chmod +x /usr/lib/check_mk_agent/local/$script
8: Login to the LibreNMS web interface and edit the device you want to monitor. Under the modules section, ensure that unix-agent is enabled.
9: Then under Applications, enable the apps that you plan to monitor.
10: Wait for around 10 minutes and you should start seeing data in your graphs under Apps for the device.
## Restrict the devices on which the agent listens: Linux systemd
If you want to restrict which network adapter the agent listens on, do the following:
1: Edit /etc/systemd/system/check_mk.socket
2: Under the [Socket] section, add a new line BindToDevice= and the name of your network adapter.
3: If the script has already been enabled in systemd, you may need to issue a systemctl daemon-reload and then systemctl restart check_mk.socket
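Putting steps 1-3 together, a sketch of the resulting socket unit; eth0 is only an example adapter name, and port 6556 is the agent's default:

```
[Socket]
ListenStream=6556
BindToDevice=eth0
```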
Grab version 1.2.6b5 of the check_mk agent from the check_mk github repo (exe/msi or compile it yourself depending on your usage): https://github.com/tribe29/checkmk/tree/v1.2.6b5/agents/windows
Run the msi / exe
Make sure your LibreNMS instance can reach TCP port 6556 on your target.
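One quick way to check this from the LibreNMS server, assuming netcat is available (target-host is a placeholder for your monitored device):

```
nc -zv target-host 6556
```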
When using the snmp extend method, the application discovery module will pick up which applications you have set up for monitoring automatically, even if the device is already in LibreNMS. The application discovery module is enabled by default for most *nix operating systems, but in some cases you will need to manually enable the application discovery module.
One major thing to keep in mind when using SNMP extend is that the extend scripts run as the snmpd user, which may be an unprivileged user. In these situations you need to use sudo.
To test if you need sudo, first check which user snmpd is running as. Then test if you can run the extend script as that user without issue. For example, if snmpd is running as 'Debian-snmp' and we want to run the extend for proxmox, we check that the following runs without error:
```
sudo -u Debian-snmp /usr/local/bin/proxmox
```
If it doesn't work, then you will need to use sudo with the extend command. For the example above, that would mean adding the line below to the sudoers file:
```
Debian-snmp ALL = NOPASSWD: /usr/local/bin/proxmox
```
Finally, we would need to add sudo to the extend command, which for proxmox would look like this:
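As a sketch, the resulting snmpd.conf line for the example above would be:

```
extend proxmox /usr/bin/sudo /usr/local/bin/proxmox
```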
## JSON Return Optimization Using librenms_return_optimizer
While json_app_get does allow more complex and larger data to be easily returned by an extend and then worked with, this can also sometimes result in large returns that occasionally don't play nice with SNMP on some networks.
librenms_return_optimizer fixes this by taking the extend output piped to it, gzipping it, and then converting it to base64. The latter is needed as net-snmp does not play that nicely with binary data, converting most of the non-printable characters to .. This does add a bit of additional overhead to the gzipped data, but the result still tends to be around a third of the size for JSON returns.
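Conceptually, the transform is just gzip piped into base64. A minimal shell sketch of the round trip, using a made-up JSON return:

```shell
# Hypothetical extend output
json='{"data":{"clients":42}}'

# What the optimizer conceptually does: gzip, then base64-encode so the
# payload only contains printable characters that net-snmp passes through.
encoded=$(printf '%s' "$json" | gzip -c | base64 -w0)

# LibreNMS reverses the process when polling: base64-decode, then gunzip.
printf '%s' "$encoded" | base64 -d | gzip -dc
```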
The change required is fairly simple. So for the portactivity example below...
The following apps have extends that natively support this, if configured to do so.
suricata
## Enable the application discovery module
Edit the device for which you want to add this support
Click on the Modules tab and enable the applications module.
This will be automatically saved, and you should get a green confirmation pop-up message.
After you have enabled the application module, it would be wise to also select which applications you want to monitor, in the rare case where LibreNMS does not automatically detect them.
Note: Only do this if an application was not auto-discovered by LibreNMS during discovery and polling.
## Enable the application(s) to be discovered
Go to the device you have just enabled the application module for.
Click on the Applications tab and select the applications you want to monitor.
This will also be automatically saved, and you should get a green confirmation pop-up message.
The unix-agent does not have a discovery module, only a poller module. That poller module is disabled by default and needs to be manually enabled if using the agent. Some applications will be automatically enabled by the unix-agent poller module; it is better to ensure that your application is enabled for monitoring. You can check by following the steps under the SNMP Extend heading.
Create the cache directory, '/var/cache/librenms/' and make sure that it is owned by the user running the SNMP daemon.
```
mkdir -p /var/cache/librenms/
```
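To satisfy the ownership requirement, chown the directory to the snmpd user; the user name here is an assumption (Debian-snmp is typical on Debian/Ubuntu; check with ps -o user= -C snmpd):

```
chown Debian-snmp /var/cache/librenms/
```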
Verify it is working by running /etc/snmp/apache-stats.py. The urllib3 package for python3 needs to be installed; on Debian-based systems, for example, you can achieve this by issuing:

```
apt-get install python3-urllib3
```
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend apache /etc/snmp/apache-stats.py
```
Restart snmpd on your host
Test by running
```
snmpwalk <various options depending on your setup> localhost NET-SNMP-EXTEND-MIB::nsExtendOutput2Table
```
Install the agent on this device if it isn't already and copy the apache script to /usr/lib/check_mk_agent/local/
Verify it is working by running /usr/lib/check_mk_agent/local/apache (if you get an error like "Can't locate LWP/Simple.pm", libwww-perl needs to be installed: apt-get install libwww-perl)
Create the cache directory, '/var/cache/librenms/' and make sure that it is owned by the user running the SNMP daemon.
```
mkdir -p /var/cache/librenms/
```

On the device page in LibreNMS, edit your host and check Apache under the Applications tab.
Verify it is working by running /etc/snmp/asterisk
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend asterisk /etc/snmp/asterisk
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Restart your bind9/named after changing the configuration.
Verify that everything works by executing rndc stats && cat /var/cache/bind/stats. In case you get a Permission Denied error, make sure you changed the ownership correctly.
Also be aware that this file is appended to each time rndc stats is called. Given this, it is suggested you set up file rotation for it. Alternatively, you can also set zero_stats to 1 in the config.
The script for this also requires the Perl module File::ReadBackwards.
If it is not available, it can be installed by cpan -i File::ReadBackwards.
You may possibly need to configure the agent/extend script as well.
The config file's path defaults to the same path as the script, but with .config appended. So if the script is located at /etc/snmp/bind, the config file will be /etc/snmp/bind.config. Alternatively you can also specify a config via -c $file.
Anything starting with a # is a comment. The format for variables is $variable=$value. Empty lines are ignored. Spaces and tabs at either the start or end of a line are ignored.
Content of an example /etc/snmp/bind.config . Please edit with your own settings.
```
rndc = The path to rndc. Default: /usr/bin/env rndc
call_rndc = A 0/1 boolean on whether or not to call rndc stats.
    Suggest to set to 0 if using netdata. Default: 1
stats_file = The path to the named stats file. Default: /var/cache/bind/stats
agent = A 0/1 boolean for if this is being used as a LibreNMS
    agent or not. Default: 0
zero_stats = A 0/1 boolean for if the stats file should be zeroed
    first. Default: 0 (1 if guessed)
```
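Putting the defaults together, a minimal /etc/snmp/bind.config sketch; the rndc path is an assumption (the script's default is /usr/bin/env rndc), so adjust all values to your system:

```
rndc=/usr/sbin/rndc
call_rndc=1
stats_file=/var/cache/bind/stats
agent=0
zero_stats=0
```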
If you want to guess at the configuration, call the script with -g and it will print out what it thinks it should be.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Install the agent on this device if it isn't already and copy the script to /usr/lib/check_mk_agent/local/bind via wget https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/bind -O /usr/lib/check_mk_agent/local/bind
Due to the lack of SNMP support in the BIRD daemon, this application extracts all configured BGP protocols and parses them into LibreNMS. This application supports both IPv4 and IPv6 peer processing.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend bird2 '/usr/bin/sudo /usr/sbin/birdc -r show protocols all'
```
Edit your sudo users (usually visudo) and add at the bottom:
```
Debian-snmp ALL=(ALL) NOPASSWD: /usr/sbin/birdc
```
If your snmp daemon is running as a user that isn't Debian-snmp, make sure that user has the correct permissions to execute birdc.
Verify that the time format for bird2 is defined. Otherwise iso short ms (hh:mm:ss) is the default value, which is not compatible with the datetime parsing logic used to parse the output of the bird show command. timeformat protocol is the important one to define for the bird2 app parsing logic to work.
Example starting point using Bird2 shorthand iso long (YYYY-MM-DD hh:mm:ss):
```
timeformat base iso long;
timeformat log iso long;
timeformat protocol iso long;
timeformat route iso long;
```
Timezone can be manually specified, for example "%F %T %z" (YYYY-MM-DD hh:mm:ss +11:45). See the Bird 2 docs for more information.
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
a. (Required): Key 'domains' contains a list of domains to check.
b. (Optional): You can define a port. By default it checks on port 443.
c. (Optional): You may define a certificate location for self-signed certificates.

## SNMP Extend
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend certificate /etc/snmp/certificate.py
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The config file is an ini file handled by Config::Tiny.

```
- mode :: single or multi, for if this is a single repo or for
      multiple repos.
  - Default :: single

- repo :: Directory for the borg backup repo.
  - Default :: undef

- passphrase :: Passphrase for the borg backup repo.
  - Default :: undef

- passcommand :: Passcommand for the borg backup repo.
  - Default :: undef
```

For single repos, all those variables are in the root section of the config. So let's say the repo is at '/backup/borg' with a passphrase of '1234abc':

```
repo=/backup/borg
passphrase=1234abc
```
For multi, each section outside of the root represents a repo. So if there is '/backup/borg1' with a passphrase of 'foobar' and '/backup/derp' with a passcommand of 'pass show backup' it would be like below.
```
mode=multi

[borg1]
repo=/backup/borg1
passphrase=foobar

[derp]
repo=/backup/derp
passcommand=pass show backup
```
If 'passphrase' and 'passcommand' are both specified, then passcommand is used.
The metrics are all from .data.totals in the extend return.
| Value | Type | Description |
|-------|------|-------------|
| errored | repos | Total number of repos that info could not be fetched for. |
| locked | repos | Total number of locked repos |
| locked_for | seconds | Longest time any repo has been locked. |
| time_since_last_modified | seconds | Largest time - mtime for the repo nonce |
| total_chunks | chunks | Total number of chunks |
| total_csize | bytes | Total compressed size of all archives in all repos. |
| total_size | bytes | Total uncompressed size of all archives in all repos. |
| total_unique_chunks | chunks | Total number of unique chunks in all repos. |
| unique_csize | bytes | Total deduplicated size of all archives in all repos. |
| unique_size | chunks | Total number of chunks in all repos. |

## CAPEv2
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend power-stat /etc/snmp/power-stat.sh
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Copy the shell script to the desired host. By default, it will only show the status for containers that are running. To include all containers, modify the constant at the top of the script, changing it to ONLY_RUNNING_CONTAINERS = False
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend entropy /etc/snmp/entropy.sh
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
If not specified, \"/usr/bin/env fail2ban-client\" is used.
Restart snmpd on your host
If you wish to use caching, add the following to /etc/crontab.

```
*/3 * * * * root /etc/snmp/fail2ban -u
```
Restart or reload cron on your system.
If you have more than a few jails configured, you may need to use caching, as each jail needs to be polled and fail2ban-client can't do so in a timely manner for more than a few. This can result in other SNMP information failing to be polled.
For additional details on the switches, please see the POD at the top of the script itself.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The FreeRADIUS application extension requires that status_server be enabled in your FreeRADIUS config. For more information see: https://wiki.freeradius.org/config/Status
You should note that status requests increment the FreeRADIUS request stats. So LibreNMS polls will ultimately be reflected in your stats/charts.
Go to your FreeRADIUS configuration directory (usually /etc/raddb or /etc/freeradius).
cd sites-enabled
ln -s ../sites-available/status status
Restart FreeRADIUS.
You should be able to test with the radclient as follows...
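A sketch of such a test, assuming the status server defaults from the FreeRADIUS status page (port 18121, secret adminsecret); adjust both to your configuration:

```
echo "Message-Authenticator = 0x00" | radclient 127.0.0.1:18121 status adminsecret
```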
If you've made any changes to the FreeRADIUS status_server config (secret key, port, etc.) edit freeradius.sh and adjust the config variable accordingly.
Edit your snmpd.conf file and add:
```
extend freeradius /etc/snmp/freeradius.sh
```
Restart snmpd on the host in question.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
If you've made any changes to the FreeRADIUS status_server config (secret key, port, etc.) edit freeradius.sh and adjust the config variable accordingly.
Edit the freeradius.sh script and set the variable 'AGENT' to '1' in the config.
Configure FSCLI in the script. You may also have to create an /etc/fs_cli.conf file if your fs_cli command requires authentication.
Verify it is working by running /etc/snmp/freeswitch
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend freeswitch /etc/snmp/freeswitch
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend gpsd /etc/snmp/gpsd
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading at the top of the page.
Set it up to be run by root via cron. Yes, you can call this script directly from snmpd, but be aware, especially with Libvirt, that there is a very real possibility of the snmpget timing out, especially if a VM is spinning up/down, as virsh domstats can block for a few seconds in that case.
A small python3 script that reports current DHCP leases stats and pool usage of ISC DHCP Server.
Also, you have to install dhcpd-pools and the required Perl modules. Under Ubuntu/Debian, just run apt install cpanminus ; cpanm Net::ISC::DHCPd::Leases Mime::Base64 File::Slurp or under FreeBSD pkg install p5-JSON p5-MIME-Base64 p5-App-cpanminus p5-File-Slurp ; cpanm Net::ISC::DHCPd::Leases.

| Option | Description |
|--------|-------------|
| -c $file | Path to dhcpd.conf. |
| -l $file | Path to lease file. |
| -Z | Enable GZip+Base64 compression. |
| -d | Do not de-dup. |
| -w $file | File to write it out to. |
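With the script in place, the snmpd extend line follows the same pattern as the other applications on this page; the script path here is an assumption, so adjust it to wherever you installed the script:

```
extend dhcp /etc/snmp/dhcp
```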
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Configure the config at /usr/local/etc/logsize.conf. You can find the documentation for the config file in the extend. Below is a small example.
```
# monitor log sizes of logs directly under /var/log
[sets.var_log]
dir="/var/log/"

# monitor remote logs from network devices
[sets.remote_network]
dir="/var/log/remote/network/"

# monitor remote logs from windows sources
[sets.remote_windows]
dir="/var/log/remote/windows/"

# monitor suricata flows logs sizes
[sets.suricata_flows]
dir="/var/log/suricata/flows/current"
```

If the directories are all readable via snmpd, this script can be run via snmpd. Otherwise it needs to be set up in cron. Similarly, if processing a large number of files, it may also need to be set up in cron if the script takes a while to run.
linux_config_files is an application intended to monitor a Linux distribution's configuration files via that distribution's configuration management tool/system. At this time, ONLY RPM-based (Fedora/RHEL) SYSTEMS ARE SUPPORTED, utilizing the rpmconf tool. The linux_config_files application collects the total count of configuration files that are out of sync and graphs that number.
Fedora/RHEL: Rpmconf is a utility that analyzes rpm configuration files using the RPM Package Manager. Rpmconf reports when a new configuration file standard has been issued for an upgraded/downgraded piece of software. Typically, rpmconf is used to provide a diff of the current configuration file versus the new, standard configuration file. The administrator can then choose to install the new configuration file or keep the old one.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend mailscanner /etc/snmp/mailscanner.php
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend mdadm /etc/snmp/mdadm
```
Verify it is working by running

```
sudo /etc/snmp/mdadm
```

Restart snmpd on your host

```
sudo service snmpd restart
```
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend memcached /etc/snmp/memcached
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Install your munin scripts into the above directory.
To create your own custom munin scripts, please see this example:
```
#!/bin/bash
if [ "$1" = "config" ]; then
    echo 'graph_title Some title'
    echo 'graph_args --base 1000 -l 0' #not required
    echo 'graph_vlabel Some label'
    echo 'graph_scale no' #not required, can be yes/no
    echo 'graph_category system' #Choose something meaningful, can be anything
    echo 'graph_info This graph shows something awesome.' #Short desc
    echo 'foobar.label Label for your unit' # Repeat these two lines as much as you like
    echo 'foobar.info Desc for your unit.'
    exit 0
fi
echo -n "foobar.value " $(date +%s) #Populate a value, here unix-timestamp
```
Create the cache directory, '/var/cache/librenms/' and make sure that it is owned by the user running the SNMP daemon.
```
mkdir -p /var/cache/librenms/
```
The MySQL script requires PHP-CLI and the PHP MySQL extension, so please verify those are installed.
CentOS (May vary based on PHP version)
```
yum install php-cli php-mysql
```
Debian (May vary based on PHP version)
```
apt-get install php-cli php-mysql
```
Unlike most other scripts, the MySQL script requires a configuration file mysql.cnf in the same directory as the extend or agent script, with the following content:
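As a sketch, mysql.cnf is a PHP file (consistent with the $chk_options note below) defining the connection details; the variable names and values here are assumptions, so verify them against your copy of the script:

```
<?php
$mysql_user = 'root';
$mysql_pass = 'changeme';
$mysql_host = 'localhost';
$mysql_port = 3306;
```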
Note that depending on your MySQL installation (a chrooted install, for example), you may have to specify 127.0.0.1 instead of localhost. Localhost makes a MySQL connection via the mysql socket, while 127.0.0.1 makes a standard IP connection to mysql.
Note also, if you get a mysql error Uncaught TypeError: mysqli_num_rows(): Argument #1, this is because you are using a newer MySQL version which doesn't support UNBLOCKING for slave statuses, so you need to also include the line $chk_options['slave'] = false; in mysql.cnf to skip checking slave statuses.
Edit /etc/snmp/mysql to set your MySQL connection constants or declare them in /etc/snmp/mysql.cnf (new file)
Edit your snmpd.conf file and add:
```
extend mysql /etc/snmp/mysql
```
Restart snmpd.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend nginx /etc/snmp/nginx
```
(Optional) If you have SELinux in Enforcing mode, you must add a module so the script can request /nginx-status:
```
cat << EOF > snmpd_nginx.te
module snmpd_nginx 1.0;

require {
    type httpd_t;
    type http_port_t;
    type snmpd_t;
    class tcp_socket name_connect;
}

#============= snmpd_t ==============

allow snmpd_t http_port_t:tcp_socket name_connect;
EOF
checkmodule -M -m -o snmpd_nginx.mod snmpd_nginx.te
semodule_package -o snmpd_nginx.pp -m snmpd_nginx.mod
semodule -i snmpd_nginx.pp
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend ntp-client /etc/snmp/ntp-client
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
## NTP Server aka NTPD
A shell script that gets stats from an NTP server (ntpd).
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend ntp-server /etc/snmp/ntp-server.sh
```
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit the snmpd.conf file to include the extend by adding the following line to the end of the config file:
```
extend chronyd /etc/snmp/chrony
```
Note: Some distributions need sudo-permissions for the script to work with SNMP Extend. See the instructions on the section SUDO for more information.
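A sketch of that sudo setup, following the same pattern as the other sudo-based extends on this page (assuming snmpd runs as Debian-snmp):

```
Debian-snmp ALL = NOPASSWD: /etc/snmp/chrony
```

The extend line then calls the script through sudo:

```
extend chronyd /usr/bin/sudo /etc/snmp/chrony
```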
Restart snmpd service on the host
The application should be auto-discovered and its stats presented on the Apps page of the host. Note: the Applications module needs to be enabled on the host or globally for the statistics to work as intended.
Update the root crontab. This is required as the extend will likely time out otherwise. Use */1 if you want to have the most recent stats when polled, or */5 if you just want exactly a 5 minute interval.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend ogs /etc/snmp/rocks.sh
```
Restart snmpd.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
A small shell script that checks your system package manager for any available updates. Supports apt-get/pacman/yum/zypper package managers.
For pacman users automatically refreshing the database, it is recommended you use an alternative database location --dbpath=/var/lib/pacman/checkupdate
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend osupdate /etc/snmp/osupdate
```
Restart snmpd on your host
Note: apt-get depends on an updated package index. There are several ways to have your system run apt-get update automatically. The easiest is to create /etc/apt/apt.conf.d/10periodic and paste the following into it: APT::Periodic::Update-Package-Lists "1";. If you have apticron, cron-apt or apt-listchanges installed and configured, chances are that packages are already updated periodically.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
```
extend phpfpmsp /etc/snmp/php-fpm
```
Create the config file /usr/local/etc/php-fpm_extend.json. Alternate locations may be specified using the -f switch. For more information, see /etc/snmp/php-fpm --help.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
To get all data, you must get your API auth token from the Pi-hole server and change the API_AUTH_KEY entry inside the snmp script.
Restart snmpd.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Make sure the cache file in /etc/snmp/postfixdetailed is somewhere snmpd can write to. This file is used for tracking changes of various values between each time it is called by snmpd. Also make sure the path for pflogsumm is correct.
Run /etc/snmp/postfixdetailed once to create the initial cache file so you don't end up with a crazy initial starting value. Please note that each time /etc/snmp/postfixdetailed is run, the cache file is updated, so if this happens in between LibreNMS polls, the values for that polling period will be thrown off.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
NOTE: If using RHEL for your postfix server, qshape must be installed manually as it is not officially supported. CentOS 6 rpms seem to work without issues.
Install the Nagios check check_postgres.pl on your system: https://github.com/bucardo/check_postgres
Verify the path to check_postgres.pl in /etc/snmp/postgres is correct.
(Optional) If you wish to change the DB username (default: pgsql), enable the postgres DB in totalling (e.g. set ignorePG to 0, default: 1), or set a hostname for check_postgres.pl to connect to (default: the Unix Socket postgresql is running on), then create the file /etc/snmp/postgres.config with the following contents (note that not all of them need be defined, just whichever you'd like to change):
```
DBuser=monitoring
ignorePG=0
DBhost=localhost
```

Note that if you are using netdata or the like, you may wish to set ignorePG to 1, as otherwise the total will be very skewed on systems with light or moderate usage.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
The LibreNMS polling host must be able to connect to port 8082 on the monitored device. The web-server must be enabled, see the Recursor docs: https://doc.powerdns.com/md/recursor/settings/#webserver
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
PowerMon tracks the power usage on your host and can report on both consumption and cost, using a python script installed on the host.
PowerMon consumption graph
Currently the script uses one of two methods to determine current power usage:
ACPI via libsensors
HP-Health (HP Proliant servers only)
The ACPI method is quite unreliable as it is usually only implemented by battery-powered devices, e.g. laptops. YMMV. However, it's possible to support any method as long as it can return a power value, usually in Watts.
TIP: You can achieve this by adding a method and a function for that method to the script. It should be called by getData() and return a dictionary.
Because the methods are unreliable across hardware, you need to tell the script which method to use. There are several options to assist with testing, see --help.
For this to work, the following log items need to be enabled for Privoxy.
debug 2 # show each connection status\ndebug 512 # Common Log Format\ndebug 1024 # Log the destination for requests Privoxy didn't let through, and the reason why.\ndebug 4096 # Startup banner and warnings\ndebug 8192 # Non-fatal errors\n
If your logfile is not at /var/log/privoxy/logfile, that may be changed via the -f option.
If privoxy-log-parser.pl is not found in your standard $PATH setting, you may need to call the extend via /usr/bin/env with a $PATH set to something that includes it.
Once that is done, just wait for the server to be rediscovered or just enable it manually.
Pwrstatd (commonly known as powerpanel) is an application/service available from CyberPower to monitor their UPSs over USB. It is currently capable of reading the status of only one UPS connected via USB at a time. The powerpanel software is available here: https://www.cyberpowersystems.com/products/software/power-panel-personal/
Note: If you are using Raspbian, the default user is Debian-snmp. Change snmp above to Debian-snmp. You can verify the user snmpd is using with ps aux | grep snmpd
Restart snmpd on the Pi host
"},{"location":"Extensions/Applications/#raspberry-pi-gpio-monitor","title":"Raspberry Pi GPIO Monitor","text":"
SNMP extend script to monitor your IO pins or sensor modules connected to your GPIO header.
1: Make sure you have wiringpi installed on your Raspberry Pi. In Debian-based systems for example you can achieve this by issuing:
apt-get install wiringpi\n
2: Download the script to your Raspberry Pi. wget https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/rpigpiomonitor.php -O /etc/snmp/rpigpiomonitor.php
3: (optional) Download the example configuration to your Raspberry Pi. wget https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/rpigpiomonitor.ini -O /etc/snmp/rpigpiomonitor.ini
4: Make the script executable: chmod +x /etc/snmp/rpigpiomonitor.php
5: Create or edit your rpigpiomonitor.ini file according to your needs.
6: Check your configuration with rpigpiomonitor.php -validate
7: Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
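The extend line itself is not shown in this chunk; following the pattern used by the other applications on this page, it would most likely be (name and path assumed, verify against the script's own docs):

```
extend rpigpiomonitor /etc/snmp/rpigpiomonitor.php
```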
Install/Setup: For Install/Setup Local Librenms RRDCached: Please see RRDCached
Will collect stats by: 1. Connecting directly to the associated device on port 42217 2. Monitoring through SNMP with SNMP extend, as outlined below 3. Connecting to the rrdcached server specified by the rrdcached setting
SNMP extend script to monitor your (remote) RRDCached via snmp
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend sdfsinfo /etc/snmp/sdfsinfo\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
url = Url how to get access to Seafile Server\nusername = Login to Seafile Server.\n It is important that used Login has admin privileges.\n Otherwise most API calls will be denied.\npassword = Password to the configured login.\naccount_identifier = Defines how user accounts are listed in RRD Graph.\n Options are: name, email\nhide_monitoring_account = With this Boolean you can hide the Account which you\n use to access Seafile API\n
Note: It is recommended to use a dedicated Administrator account for monitoring.
Set up a cronjob to run it. This ensures slow-to-poll disks won't result in errors.
*/5 * * * * /etc/snmp/smart -u -Z\n
Edit your snmpd.conf file and add:
extend smart /bin/cat /var/cache/smart\n
You will also need to create the config file, which defaults to the same path as the script, but with .config appended. So if the script is located at /etc/snmp/smart, the config file will be /etc/snmp/smart.config. Alternatively you can also specify a config via -c.
Anything starting with a # is a comment. The format for variables is $variable=$value. Empty lines are ignored. Spaces and tabs at either the start or end of a line are ignored. Any line without a matched variable or # is treated as a disk.
#This is a comment\ncache=/var/cache/smart\nsmartctl=/usr/bin/env smartctl\nuseSN=1\nada0\nada1\nda5 /dev/da5 -d sat\ntwl0,0 /dev/twl0 -d 3ware,0\ntwl0,1 /dev/twl0 -d 3ware,1\ntwl0,2 /dev/twl0 -d 3ware,2\n
The variables are as below.
cache = The path to the cache file to use. Default: /var/cache/smart\nsmartctl = The path to use for smartctl. Default: /usr/bin/env smartctl\nuseSN = If set to 1, it will use the disks SN for reporting instead of the device name.\n 1 is the default. 0 will use the device name.\n
A disk line can be as simple as just a disk name under /dev/. In the config above, the line \"ada0\" would resolve to \"/dev/ada0\" and would be called with no special argument. If a line has a space in it, everything before the space is treated as the disk name (and is what is used for reporting) and everything after it is used as the argument to be passed to smartctl.
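That name/argument split can be sketched with plain shell parameter expansion (the disk line is the hypothetical one from the example config above):

```shell
# Hypothetical disk line from smart.config: reporting name, then smartctl arguments
line="da5 /dev/da5 -d sat"
name="${line%% *}"   # everything before the first space: used for reporting
args="${line#* }"    # everything after it: passed to smartctl
echo "$name"
echo "$args"
```

So this line reports under the name da5 while smartctl is invoked against /dev/da5 with -d sat.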
If you want to guess at the configuration, call it with -g and it will print out what it thinks it should be.
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
Optionally set up nightly self tests for the disks. The extend will run the specified test on all configured disks if called with the -t flag and the name of the SMART test to run.
This is for replacing Nagios/Icinga or the LibreNMS service integration in regards to NRPE. This allows LibreNMS to query what checks were ran on the server and keep track of totals of OK, WARNING, CRITICAL, and UNKNOWN statuses.
The big advantages of this compared to NRPE are as below.
It does not need to know what checks are configured on it.
It also does not need to wait for the tests to run, as sneck is meant to be run via cron and then return the cache when queried via SNMP, meaning a much faster response time, especially if slow checks are being performed.
Works over proxied SNMP connections.
Alert examples are included. For setting up custom ones, the metrics below are provided.
Metric Description ok Total OK checks warning Total WARNING checks critical Total CRITICAL checks unknown Total UNKNOWN checks errored Total checks that errored time_to_polling Difference in seconds between when polling data was generated and when polled time_to_polling_abs The absolute value of time_to_polling. check_$CHECK Exit status of a specific check $CHECK is equal to the name of the check in question. So foo would be check_foo
The standard Nagios/Icinga style exit codes are used and those are as below.
Exit Meaning 0 okay 1 warning 2 critical 3+ unknown
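For illustration, a minimal hypothetical check script following that exit-code convention could look like this (thresholds, wording, and the choice of checking root filesystem usage are all placeholders, not part of sneck):

```shell
#!/bin/sh
# Hypothetical Nagios-style check: root filesystem usage with example thresholds
usage=$(df -P / | awk 'NR==2 { gsub("%",""); print $5 }')
if [ -z "$usage" ]; then
    echo "UNKNOWN - could not read disk usage"; exit 3
elif [ "$usage" -ge 95 ]; then
    echo "CRITICAL - ${usage}% used"; exit 2
elif [ "$usage" -ge 85 ]; then
    echo "WARNING - ${usage}% used"; exit 1
else
    echo "OK - ${usage}% used"; exit 0
fi
```

Any check that prints a one-line status and exits 0/1/2/3 in this manner will be counted into the ok/warning/critical/unknown totals above.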
To use time_to_polling, it will need to be enabled via setting the config item below. The default is false. Unless set to true, this value will default to 0. If enabling this, one will want to make sure that NTP is in use everywhere or it will alert if it goes over a difference of 540s.
Configure any of the checks you want to run in /usr/local/etc/sneck.conf. You can find it documented here.
Set it up in cron. This means you don't need to wait for all the checks to complete when polled via SNMP, which for long-running checks like SMART would mean timing out. It also means it does not need to be called via sudo.
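A crontab entry along these lines would do it (the install path and the flag used to update the cache are assumptions; check the sneck documentation for the exact invocation):

```
*/5 * * * * /usr/local/bin/sneck -u > /dev/null 2>&1
```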
For metrics the stats are migrated as below from the stats JSON.
f_drop_percent and drop_percent are computed based on the found data.
Instance Key Stats JSON Key uptime .stats.uptime total .stats.captured.total drop .stats.captured.drop ignore .stats.captured.ignore threshold .stats.captured.theshold after .stats.captured.after match .stats.captured.match bytes .stats.captured.bytes_total bytes_ignored .stats.captured.bytes_ignored max_bytes_log_line .stats.captured.max_bytes_log_line eps .stats.captured.eps f_total .stats.flow.total f_dropped .stats.flow.dropped
Those keys are appended with the name of the instance running with _ between the instance name and instance metric key. So uptime for ids would be ids_uptime.
The default is named 'ids' unless otherwise specified via the extend.
There is a special instance name of .total which is the total of all the instances. So if you want the total eps, the metric would be .total_eps. Also worth noting that the alert value is the highest one found among all the instances.
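The naming scheme is simple string concatenation, which can be sketched as (instance and key names taken from the examples above):

```shell
instance="ids"
key="eps"
echo "${instance}_${key}"   # per-instance metric: ids_eps
echo ".total_${key}"        # total across all instances: .total_eps
```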
Any configuration of sagan_stat_check should be done in the cron setup. If the default does not work, check the docs for it at MetaCPAN for sagan_stat_check
The Socket Statistics application polls ss and scrapes socket statuses. Individual sockets and address-families may be filtered out within the script's optional configuration JSON file.
The following socket types are polled directly. Filtering a socket type will disable direct polling as well as indirect polling within any address-families that list the socket type as their child:
dccp (also exists within address-families \"inet\" and \"inet6\")\nmptcp (also exists within address-families \"inet\" and \"inet6\")\nraw (also exists within address-families \"inet\" and \"inet6\")\nsctp (also exists within address-families \"inet\" and \"inet6\")\ntcp (also exists within address-families \"inet\" and \"inet6\")\nudp (also exists within address-families \"inet\" and \"inet6\")\nxdp\n
The following socket types are polled within an address-family only:
The following address-families are polled directly and have their child socket types tab-indented below them. Filtering a socket type (see \"1\" above) will filter it from the address-family. Filtering an address-family will filter out all of its child socket types. However, if those socket types are not DIRECTLY filtered out (see \"1\" above), then they will continue to be monitored either directly or within other address-families in which they exist:
(Optional) Create a /etc/snmp/ss.json file and specify:
\"ss_cmd\" - String path to the ss binary: [\"/sbin/ss\"]
\"socket_types\" - A comma-delimited list of socket types to include. The following socket types are valid: dccp, icmp6, mptcp, p_dgr, p_raw, raw, sctp, tcp, ti_dg, ti_rd, ti_sq, ti_st, u_dgr, u_seq, u_str, udp, unknown, v_dgr, v_str, xdp. Please note that the \"unknown\" socket type is represented in /sbin/ss output with the netid \"???\". Please also note that the p_dgr and p_raw socket types are specific to the \"link\" address family; the ti_dg, ti_rd, ti_sq, and ti_st socket types are specific to the \"tipc\" address family; the u_dgr, u_seq, and u_str socket types are specific to the \"unix\" address family; and the v_dgr and v_str socket types are specific to the \"vsock\" address family. Filtering out the parent address families for the aforementioned will also filter out their specific socket types. Specifying \"all\" includes all of the socket types. For example: to include only tcp, udp, icmp6 sockets, you would specify \"tcp,udp,icmp6\": [\"all\"]
\"addr_families\" - A comma-delimited list of address families to include. The following families are valid: inet, inet6, link, netlink, tipc, unix, vsock. As mentioned above under (b), filtering out the link, tipc, unix, or vsock address families will also filter out their respective socket types. Specifying \"all\" includes all of the families. For example: to include only inet and inet6 families, you would specify \"inet,inet6\": [\"all\"]
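Putting those keys together, an illustrative /etc/snmp/ss.json restricting polling to TCP, UDP, and ICMPv6 over the inet/inet6 families might look like (a sketch based on the key descriptions above):

```
{
    "ss_cmd": "/sbin/ss",
    "socket_types": "tcp,udp,icmp6",
    "addr_families": "inet,inet6"
}
```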
You will want to make sure Suricata is set to output the stats to the eve file once a minute. This will help make sure that the data won't be too far back in the file and will make sure it is recent when the cronjob runs.
Any configuration of suricata_stat_check should be done in the cron setup. If the default does not work, check the docs for it at MetaCPAN for suricata_stat_check
Install the agent on this device if it isn't already and copy the tinydns script to /usr/lib/check_mk_agent/local/
Note: We assume that you use DJB's Daemontools to start/stop tinydns. And that your tinydns instance is located in /service/dns, adjust this path if necessary.
Replace your log's run file, typically located in /service/dns/log/run with:
#!/bin/sh\nexec setuidgid dnslog tinystats ./main/tinystats/ multilog t n3 s250000 ./main/\n
Restart TinyDNS and Daemontools: /etc/init.d/svscan restart Note: Some say svc -t /service/dns is enough; on my install (Gentoo) it doesn't rehook the logging and I'm forced to restart it entirely.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend ups-nut /etc/snmp/ups-nut.sh\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
Optionally if you have multiple UPS or your UPS is not named APCUPS you can specify its name as an argument into /etc/snmp/ups-nut.sh
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading top of page.
Create the optional config file, /usr/local/etc/wireguard_extend.json.
key default description include_pubkey 0 Include the pubkey with the return. use_short_hostname 1 If the hostname should be shortened to just the first part. public_key_to_arbitrary_name {} A hash of pubkeys to name mappings. pubkey_resolvers Resolvers to use for the pubkeys.
The default for pubkey_resolvers is config,endpoint_if_first_allowed_is_subnet_use_hosts,endpoint_if_first_allowed_is_subnet_use_ip,first_allowed_use_hosts,first_allowed_use_ip.
resolver description config Use the mappings from .public_key_to_arbitrary_name . endpoint_if_first_allowed_is_subnet_use_hosts If the first allowed IP is a subnet, see if a matching IP can be found in hosts for the endpoint. endpoint_if_first_allowed_is_subnet_use_getent If the first allowed IP is a subnet, see if a hit can be found for the endpoint IP via getent hosts. endpoint_if_first_allowed_is_subnet_use_ip If the first allowed IP is a subnet, use the endpoint IP for the name. first_allowed_use_hosts See if a match can be found in hosts for the first allowed IP. first_allowed_use_getent Use getent hosts to see try to fetch a match for the first allowed IP. first_allowed_use_ip Use the first allowed IP as the name.
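An illustrative /usr/local/etc/wireguard_extend.json built from the keys in the table above (the pubkey and peer name are placeholders):

```
{
    "include_pubkey": 0,
    "use_short_hostname": 1,
    "public_key_to_arbitrary_name": {
        "examplePubKeyBase64=": "office-router"
    },
    "pubkey_resolvers": "config,first_allowed_use_hosts,first_allowed_use_ip"
}
```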
LibreNMS supports multiple authentication modules along with Two Factor Auth. Here we will provide configuration details for these modules. Alternatively, you can use Socialite Providers which supports a wide variety of social/OAuth/SAML authentication methods.
To enable a particular authentication module you need to set this up in config.php. Please note that only ONE module can be enabled. LibreNMS doesn't support multiple authentication mechanisms at the same time.
auth/general
lnms config:set auth_mechanism mysql\n
"},{"location":"Extensions/Authentication/#user-levels-and-user-account-type","title":"User levels and User account type","text":"
1: Normal User: You will need to assign device / port permissions for users at this level.
5: Global Read: Read only Administrator.
10: Administrator: This is a global read/write admin account.
11: Demo Account: Provides full read/write with certain restrictions (i.e can't delete devices).
Note Oxidized configs can often contain sensitive data. Because of that only Administrator account type can see configs.
"},{"location":"Extensions/Authentication/#note-for-selinux-users","title":"Note for SELinux users","text":"
When using SELinux on the LibreNMS server, you need to allow Apache (httpd) to connect to the LDAP/Active Directory server; this is disabled by default. You can use SELinux Booleans to allow network access to LDAP resources with this command:
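The command itself is not shown in this chunk; the boolean most likely intended is httpd_can_connect_ldap (an assumption; verify with getsebool -a | grep ldap on your system):

```
setsebool -P httpd_can_connect_ldap=1
```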
Install php-ldap or php8.1-ldap, making sure to install the same version as PHP.
If you have issues with secure LDAP try setting
auth/ad
lnms config:set auth_ad_check_certificates 0\n
this will ignore certificate errors.
"},{"location":"Extensions/Authentication/#require-actual-membership-of-the-configured-groups","title":"Require actual membership of the configured groups","text":"
If you set auth_ad_require_groupmembership to 1, the authenticated user has to be a member of the specific group. Otherwise all users can authenticate, and will be either level 0 or you may set auth_ad_global_read to 1 and all users will have read only access unless otherwise specified.
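Using the same lnms config:set style shown earlier, that would look like this (setting names taken from this paragraph):

```
lnms config:set auth_ad_require_groupmembership 1
lnms config:set auth_ad_global_read 1
```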
Cleanup of old accounts is done by checking the authlog. You will need to set the number of days when old accounts will be purged AUTOMATICALLY by daily.sh.
Please ensure that you set the authlog_purge value to be greater than active_directory.users_purge otherwise old users won't be removed.
Replace ad-admingroup with your Active Directory admin-user group and ad-usergroup with your standard user group. It is highly suggested to create a bind user, otherwise \"remember me\", alerting users, and the API will not work.
This yields (&(objectclass=user)(sAMAccountName=$username)) for the user filter and (&(objectclass=group)(sAMAccountName=$group)) for the group filter.
Install php_ldap or php7.0-ldap, making sure to install the same version as PHP.
For the below, keep in mind the auth DN is composed using a string join of auth_ldap_prefix, the username, and auth_ldap_suffix. This means it needs to include = in the prefix and , in the suffix. So lets say we have a prefix of uid=, the user derp, and the suffix of ,ou=users,dc=foo,dc=bar, then the result is uid=derp,ou=users,dc=foo,dc=bar.
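That string join can be sketched in shell (values taken from the example in the paragraph):

```shell
prefix="uid="
user="derp"
suffix=",ou=users,dc=foo,dc=bar"
dn="${prefix}${user}${suffix}"
echo "$dn"   # uid=derp,ou=users,dc=foo,dc=bar
```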
"},{"location":"Extensions/Authentication/#ldap-bind-user-optional","title":"LDAP bind user (optional)","text":"
If your ldap server does not allow anonymous bind, it is highly suggested to create a bind user, otherwise \"remember me\", alerting users, and the API will not work.
Please note that a mysql user is created for each user that logs in successfully. Users are assigned the user role by default, unless radius sends a reply attribute with a role.
The attribute Filter-ID is a standard Radius-Reply-Attribute (string) that can be assigned a specially formatted string to assign a single role to the user.
The string to send in Filter-ID reply attribute must start with librenms_role_ followed by the role name. For example to set the admin role send librenms_role_admin.
The following strings correspond to the built-in roles, but any defined role can be used: - librenms_role_normal - Sets the normal user level. - librenms_role_admin - Sets the administrator level. - librenms_role_global-read - Sets the global read level
LibreNMS will ignore any other strings sent in Filter-ID and revert to the default role set in your config.
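As an illustration, a FreeRADIUS users-file entry sending that reply attribute might look like this (username and password are placeholders; the attribute name on the wire is Filter-Id):

```
alice  Cleartext-Password := "s3cret"
       Filter-Id = "librenms_role_admin"
```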
$config['radius']['hostname'] = 'localhost';\n$config['radius']['port'] = '1812';\n$config['radius']['secret'] = 'testing123';\n$config['radius']['timeout'] = 3;\n$config['radius']['users_purge'] = 14; // Purge users who haven't logged in for 14 days.\n$config['radius']['default_level'] = 1; // Set the default user level when automatically creating a user.\n
Freeradius has a function called Radius Huntgroup which allows sending different attributes based on the NAS. This may be utilized if you already use Filter-ID in your environment and also want to use radius with LibreNMS.
Cleanup of old accounts is done by checking the authlog. You will need to set the number of days when old accounts will be purged AUTOMATICALLY by daily.sh.
Please ensure that you set the $config['authlog_purge'] value to be greater than $config['radius']['users_purge'] otherwise old users won't be removed.
LibreNMS will expect the user to have authenticated via your webservice already. At this stage it will need to assign a userlevel for that user which is done in one of two ways:
A user exists in MySQL still where the usernames match up.
A global guest user (which still needs to be added into MySQL):
$config['http_auth_guest'] = \"guest\";\n
This will then assign the userlevel for guest to all authenticated users.
"},{"location":"Extensions/Authentication/#http-authentication-ad-authorization","title":"HTTP Authentication / AD Authorization","text":"
Config option: ad-authorization
This module is a combination of http-auth and active_directory
LibreNMS will expect the user to have authenticated via your webservice already (e.g. using Kerberos Authentication in Apache) but will use Active Directory lookups to determine and assign the userlevel of a user. The userlevel will be calculated by using AD group membership information as the active_directory module does.
The configuration is the same as for the active_directory module with two extra, optional options: auth_ad_binduser and auth_ad_bindpassword. These should be set to a AD user with read capabilities in your AD Domain in order to be able to perform searches. If these options are omitted, the module will attempt an anonymous bind (which then of course must be allowed by your Active Directory server(s)).
There is also one extra option for controlling user information caching: auth_ldap_cache_ttl. This option controls how long user information (user_exists, userid, userlevel) is cached within the PHP Session. The default value is 300 seconds. To disable this caching (highly discouraged) set this option to 0.
This module is a combination of http-auth and ldap
LibreNMS will expect the user to have authenticated via your webservice already (e.g. using Kerberos Authentication in Apache) but will use LDAP to determine and assign the userlevel of a user. The userlevel will be calculated by using LDAP group membership information as the ldap module does.
The configuration is similar to the ldap module with one extra option: auth_ldap_cache_ttl. This option controls how long user information (user_exists, userid, userlevel) is cached within the PHP Session. The default value is 300 seconds. To disable this caching (highly discouraged) set this option to 0.
$config['auth_mechanism'] = 'ldap-authorization';\n$config['auth_ldap_server'] = 'ldap.example.com'; // Set server(s), space separated. Prefix with ldaps:// for ssl\n$config['auth_ldap_suffix'] = ',ou=People,dc=example,dc=com'; // appended to usernames\n$config['auth_ldap_groupbase'] = 'ou=groups,dc=example,dc=com'; // all groups must be inside this\n$config['auth_ldap_groups']['admin']['roles'] = ['admin']; // set admin group to admin role\n$config['auth_ldap_groups']['pfy']['roles'] = ['global-read']; // set pfy group to global read only role\n$config['auth_ldap_groups']['support']['roles'] = ['user']; // set support group as a normal user\n
"},{"location":"Extensions/Authentication/#additional-options-usually-not-needed_1","title":"Additional options (usually not needed)","text":"
$config['auth_ldap_version'] = 3; # v2 or v3\n$config['auth_ldap_port'] = 389; // 389 or 636 for ssl\n$config['auth_ldap_starttls'] = True; // Enable TLS on port 389\n$config['auth_ldap_prefix'] = 'uid='; // prepended to usernames\n$config['auth_ldap_group'] = 'cn=groupname,ou=groups,dc=example,dc=com'; // generic group with level 0\n$config['auth_ldap_groupmemberattr'] = 'memberUid'; // attribute to use to see if a user is a member of a group\n$config['auth_ldap_groupmembertype'] = 'username'; // username type to find group members by, either username (default), fulldn or puredn\n$config['auth_ldap_emailattr'] = 'mail'; // attribute for email address\n$config['auth_ldap_attr.uid'] = 'uid'; // attribute to check username against\n$config['auth_ldap_userlist_filter'] = 'service=informatique'; // Replace 'service=informatique' by your ldap filter to limit the number of responses if you have an ldap directory with thousand of users\n$config['auth_ldap_cache_ttl'] = 300;\n
"},{"location":"Extensions/Authentication/#ldap-bind-user-optional_1","title":"LDAP bind user (optional)","text":"
If your ldap server does not allow anonymous bind, it is highly suggested to create a bind user, otherwise \"remember me\", alerting users, and the API will not work.
$config['auth_ldap_binduser'] = 'ldapbind'; // will use auth_ldap_prefix and auth_ldap_suffix\n#$config['auth_ldap_binddn'] = 'CN=John.Smith,CN=Users,DC=MyDomain,DC=com'; // overrides binduser\n$config['auth_ldap_bindpassword'] = 'password';\n
"},{"location":"Extensions/Authentication/#viewembedded-graphs-without-being-logged-into-librenms","title":"View/embedded graphs without being logged into LibreNMS","text":"
The single sign-on mechanism is used to integrate with third party authentication providers that are managed outside of LibreNMS - such as ADFS, Shibboleth, EZProxy, BeyondCorp, and others. A large number of these methods use SAML; the module has been written assuming the use of SAML, and therefore these instructions contain some SAML terminology, but it should be possible to use any software that works in a similar way.
In order to make use of the single sign-on module, you need to have an Identity Provider up and running, and know how to configure your Relying Party to pass attributes to LibreNMS via header injection or environment variables. Setting these up is outside of the scope of this documentation.
As this module deals with authentication, it is extremely careful about validating the configuration - if it finds that certain values in the configuration are not set, it will reject access rather than try and guess.
This, along with the defaults, sets up a basic Single Sign-on setup that:
Reads values from environment variables
Automatically creates users when they're first seen
Automatically updates users with new values
Gives everyone privilege level 10
This happens to mimic the behaviour of http-auth, so if this is the kind of setup you want, you're probably better off just going and using that mechanism.
If there is a proxy involved (e.g. EZProxy, Azure AD Application Proxy, NGINX, mod_proxy) it's essential that you have some means in place to prevent headers being injected between the proxy and the end user, and also prevent end users from contacting LibreNMS directly.
This should also apply to user connections to the proxy itself - the proxy must not be allowed to blindly pass through HTTP headers. ModSecurity should be considered a minimum, with a full WAF being strongly recommended. This advice applies to the IDP too.
The mechanism includes very basic protection, in the form of an IP whitelist which should contain the source addresses of your proxies:
This configuration item should contain an array with a list of IP addresses or CIDR prefixes that are allowed to connect to LibreNMS and supply environment variables or headers.
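An illustrative entry (the exact config key name here is an assumption; check the SSO configuration reference for your LibreNMS version):

```
$config['sso']['trusted_proxies'] = ['127.0.0.1', '::1', '192.0.2.0/24'];
```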
If for some reason your relying party doesn't store the username in REMOTE_USER, you can override this choice.
$config['sso']['user_attr'] = 'HTTP_UID';\n
Note that the user lookup is a little special - normally headers are prefixed with HTTP_, however this is not the case for remote user - it's a special case. If you're using something different you need to figure out whether the HTTP_ prefix is required yourself.
"},{"location":"Extensions/Authentication/#automatic-user-createupdate","title":"Automatic User Create/Update","text":"
If these are not enabled, user logins will be (somewhat silently) rejected unless an administrator has created the account in advance. Note that in the case of SAML federations, unless release of the users true identity has been negotiated with the IDP, the username (probably ePTID) is not likely to be predictable.
As used above, static gives every single user the same privilege level. If you're working with a small team, or don't need access control, this is probably suitable.
If your Relying Party is capable of calculating the necessary privilege level, you can configure the module to read the privilege number straight from an attribute. sso_level_attr should contain the name of the attribute that the Relying Party exposes to LibreNMS - as long as sso_mode is correctly set, the mechanism should find the value.
This mechanism expects to find a delimited list of groups within the attribute that sso_group_attr points to. This should be an associative array of group name keys, with privilege levels as values. The mechanism will scan the list and find the highest privilege level that the user is entitled to, and assign that value to the user.
If there are no matches between the user's groups and the sso_group_level_map, the user will be assigned the privilege level specified in the sso_static_level variable, with a default of 0 (no access). This feature can be used to provide a default access level (such as read-only) to all authenticated users.
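Combining the settings named above, an illustrative configuration might look like this (the strategy value, header attribute name, and group names are assumptions for illustration):

```
$config['sso']['group_strategy'] = 'map';
$config['sso']['group_attr'] = 'HTTP_MEMBEROF';
$config['sso']['group_level_map'] = ['librenms-admins' => 10, 'librenms-readers' => 5];
$config['sso']['static_level'] = 0; // fallback level for users with no matching group
```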
Additionally, this format may be specific to Shibboleth; other relying party software may need changes to the mechanism (e.g. mod_auth_mellon may create pseudo arrays).
There is an optional value for sites with large numbers of groups:
LibreNMS has no capability to log out a user authenticated via Single Sign-On - that responsibility falls to the Relying Party.
If your Relying Party has a magic URL that needs to be called to end a session, you can configure LibreNMS to direct the user to it:
# Example for Shibboleth\n$config['auth_logout_handler'] = '/Shibboleth.sso/Logout';\n\n# Example for oauth2-proxy\n$config['auth_logout_handler'] = '/oauth2/sign_out';\n
This option functions independently of the Single Sign-on mechanism.
LibreNMS provides the ability to automatically add devices on your network. We can do this via a few methods, which are explained below along with whether they are enabled by default.
All discovery methods run when discovery runs (every 6 hours by default and within 5 minutes for new devices).
Please note that you need at least ONE device added before auto-discovery will work.
The first thing to do though is add the required configuration options to config.php.
"},{"location":"Extensions/Auto-Discovery/#additional-options","title":"Additional Options","text":""},{"location":"Extensions/Auto-Discovery/#discovering-devices-by-ip","title":"Discovering devices by IP","text":"
By default we don't add devices by IP address; we look for a reverse DNS name and add the device with that. If this fails and you would still like to add devices automatically then you will need to set $config['discovery_by_ip'] = true;
By default we require unique sysNames when adding devices (sysName is returned over SNMP by your devices). If you would like to allow devices to be added with duplicate sysNames, enable the corresponding option in config.php.
xDP discovery is enabled by default; set $config['autodiscovery']['xdp'] = false; to disable it. This includes FDP, CDP and LLDP support based on the device type.
The LLDP/xDP links with neighbours will always be discovered as soon as the discovery module is enabled. However, LibreNMS will only try to add the new devices discovered with LLDP/xDP if $config['autodiscovery']['xdp'] = true;.
Devices may be excluded from xdp discovery by sysName and sysDescr.
//Exclude devices by name\n$config['autodiscovery']['xdp_exclude']['sysname_regexp'][] = '/host1/';\n$config['autodiscovery']['xdp_exclude']['sysname_regexp'][] = '/^dev/';\n\n//Exclude devices by description\n$config['autodiscovery']['xdp_exclude']['sysdesc_regexp'][] = '/Vendor X/';\n$config['autodiscovery']['xdp_exclude']['sysdesc_regexp'][] = '/Vendor Y/';\n
Devices may be excluded from cdp discovery by platform.
//Exclude devices by platform(Cisco only)\n$config['autodiscovery']['cdp_exclude']['platform_regexp'][] = '/WS-C3750G/';\n
These devices are excluded by default:
$config['autodiscovery']['xdp_exclude']['sysdesc_regexp'][] = '/-K9W8/'; // Cisco Lightweight Access Point\n$config['autodiscovery']['cdp_exclude']['platform_regexp'][] = '/^Cisco IP Phone/'; //Cisco IP Phone\n
Apart from the aforementioned Auto-Discovery options, LibreNMS is also able to proactively scan a network for SNMP-enabled devices using the configured version/credentials.
SNMP Scan will scan nets by default and respects autodiscovery.nets-exclude.
To run the SNMP scanner, execute snmp-scan.py from within your LibreNMS installation directory.
Here is the script's help page for reference:
usage: snmp-scan.py [-h] [-t THREADS] [-g GROUP] [-l] [-v] [--ping-fallback] [--ping-only] [-P] [network ...]\n\nScan network for snmp hosts and add them to LibreNMS.\n\npositional arguments:\n network CIDR noted IP-Range to scan. Can be specified multiple times\n This argument is only required if 'nets' config is not set\n Example: 192.168.0.0/24\n Example: 192.168.0.0/31 will be treated as an RFC3021 p-t-p network with two addresses, 192.168.0.0 and 192.168.0.1\n Example: 192.168.0.1/32 will be treated as a single host address\n\noptional arguments:\n -h, --help show this help message and exit\n -t THREADS How many IPs to scan at a time. More will increase the scan speed, but could overload your system. Default: 32\n -g GROUP The poller group all scanned devices will be added to. Default: The first group listed in 'distributed_poller_group', or 0 if not specificed\n -l, --legend Print the legend.\n -v, --verbose Show debug output. Specifying multiple times increases the verbosity.\n --ping-fallback Add the device as an ICMP only device if it replies to ping but not SNMP.\n --ping-only Always add the device as an ICMP only device.\n -P, --ping Deprecated. Use --ping-fallback instead.\n
Newly discovered devices will be added to the default_poller_group, this value defaults to 0 if unset.
When using distributed polling, this value can be changed locally by setting $config['default_poller_group'] in config.php or globally by using lnms config:set.
# Set the compact view mode for the availability map\nlnms config:set webui.availability_map_compact false\n\n# Size of the box for each device in the availability map (not compact)\nlnms config:set webui.availability_map_box_size 165\n\n# Sort by status instead of hostname\nlnms config:set webui.availability_map_sort_status false\n\n# Show the device group drop-down on the availability map page\nlnms config:set webui.availability_map_use_device_groups true\n
With the billing module you can create a bill, assign a quota to it and add ports to it. It then tracks the ports usage and shows you the usage in the bill, including any overage. Accounting by both total transferred data and 95th percentile is supported.
To enable and use the billing module you need to perform the following steps:
Edit config.php and add (or enable) the following line near the end of the config
Billing data is stored in the MySQL database, and you may wish to purge the detailed stats for old data (per-month totals will always be kept). To enable this, add the following to config.php:
$config['billing_data_purge'] = 12; // Number of months to retain\n
Data for the last complete billing cycle will always be retained - only data older than this by the configured number of months will be removed. This task is performed in the daily cleanup tasks.
For 95th Percentile billing, the default behavior is to use the highest of the input or output 95th Percentile calculation.
To instead derive the 95th percentile from the combined total of input + output, set 95th Calculation to \"Aggregate\" on a per-bill basis.
To change the default option to Aggregate, add the following to config.php:
$config['billing']['95th_default_agg'] = 1; // Set aggregate 95th as default\n
This configuration setting is cosmetic and only changes the default selected option when adding a new bill.
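To make the difference between the two calculation modes concrete, here is a small Python sketch (the sample data is made up, and the nearest-rank method is just one common percentile convention) comparing the default mode (the higher of the per-direction 95th percentiles) with the aggregate mode (the 95th percentile of input + output):

```python
import math

def percentile_95(samples):
    """Nearest-rank 95th percentile (one common convention)."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# Made-up per-interval rates in Mbps
inbound  = [10, 12, 80, 15, 11, 13, 14, 90, 12, 10]
outbound = [40, 42, 41, 45, 43, 44, 40, 41, 42, 60]

# Default mode: the higher of the two per-direction 95th percentiles
default_95th = max(percentile_95(inbound), percentile_95(outbound))

# Aggregate mode: 95th percentile of the summed in + out rates
aggregate_95th = percentile_95([i + o for i, o in zip(inbound, outbound)])
```

Note that with only a handful of samples the nearest-rank 95th percentile degenerates to the maximum; over a real billing period (thousands of polling samples) the top 5% of peaks are discarded.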
The Component extension provides a generic database storage mechanism for discovery and poller modules. The driver behind this extension was to provide the features of ports, in a generic manner, to discovery/poller modules.
It provides a status (Nagios convention), the ability to Disable (do not poll), or Ignore (do not Alert).
When the data from both the component and component_prefs tables is returned in one single consolidated array, there is the potential for someone to attempt to set an attribute (in the component_prefs table) that is used in the component table. Because of this, all fields of the component table are reserved; they cannot be used as custom attributes. If you update these, the module will attempt to write them to the component table, not the component_prefs table.
"},{"location":"Extensions/Component/#edit-the-array","title":"Edit the Array","text":"
Once you have a component array from getComponents the first thing to do is extract the components for only the single device you are editing. This is required because the setComponentPrefs function only saves a single device at a time.
When writing the component array there are several caveats to be aware of, these are:
$ARRAY must be in the format of a single device ID - $ARRAY[$COMPONENT_ID][Attribute] = 'Value'; NOT in the multi device format returned by getComponents - $ARRAY[$DEVICE_ID][$COMPONENT_ID][Attribute] = 'Value';
You cannot edit the Component ID or the Device ID
Reserved fields cannot be removed
If a change is found, an entry will be written to the eventlog.
It is intended that discovery/poller modules will detect the status of a component during the polling cycle. Status is logged using the Nagios convention for status codes, where:
0 = Ok,\n1 = Warning,\n2 = Critical\n
If you are creating a poller module which can detect a fault condition simply set STATUS to something other than 0 and ERROR to a message that indicates the problem.
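Putting the rules above together, a poller module edit might look roughly like the sketch below. This is illustrative only: the component type, attribute values and the exact Component class wiring are assumptions, while getComponents/setComponentPrefs and the array formats follow the description above.

```php
// Sketch only: flag all of one device's components as Critical
$component = new LibreNMS\Component();

// getComponents returns the multi-device format:
//   $components[$device_id][$component_id][attribute]
$components = $component->getComponents($device['device_id']);

// Extract the single-device array that setComponentPrefs expects
$dev_components = $components[$device['device_id']];

foreach ($dev_components as $id => $attrs) {
    $dev_components[$id]['status'] = 2;                // Nagios convention: Critical
    $dev_components[$id]['error']  = 'Fault detected'; // message indicating the problem
}

// Saves one device at a time; changes are written to the eventlog
$component->setComponentPrefs($device['device_id'], $dev_components);
```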
To actually raise an alert, the user will need to create an alert rule. To assist with this, several Alerting Macros have been created:
%macros.component_normal - A component that is not disabled or ignored and in a Normal state.
%macros.component_warning - A component that is not disabled or ignored and in a Warning state.
%macros.component_critical - A component that is not disabled or ignored and in a Critical state.
To raise alerts for components, the following rules could be created:
%macros.component_critical = \"1\" - To alert on all Critical components
%macros.component_critical = \"1\" && %component.type = \"<Type of Component>\" - To alert on all Critical components of a particular type.
If there is a particular component you would like excluded from alerting, simply set the ignore field to 1.
The data that is written to each alert when it is raised is in the following format:
LibreNMS has the ability to create custom maps to give a quick overview of parts of the network including up/down status of devices and link utilisation. These are also referred to as weather maps.
Once some maps have been created, they will be visible to any users who have read access to all devices on a given map. Custom maps are available through the Overview -> Maps -> Custom Maps menu.
Some key points about the viewer are:
Nodes will change colour if they are down or disabled
Links are only associated with a single network interface
Link utilisation can only be shown if the link speed is known
Link speed is decoded from SNMP if possible (Upload/Download) and defaults to the physical speed if SNMP data is not available, or cannot be decoded
Links will change colour as follows:
Black if the link is down, or the max speed is unknown
Green at 0% utilisation, with a gradual change to
Yellow at 50% utilisation, with a gradual change to
Orange at 75% utilisation, with a gradual change to
Red at 100% utilisation
To access the custom map editor, a user must be an admin. The editor is accessed through the Overview -> Maps -> Custom Map Editor menu.
Once you are in the editor, you will be given a drop-down list of all the custom maps so you can choose one to edit, or select \"Create New Map\" to create a new map.
When you create a new map, you will be presented with a page to set some global map settings. These are:
Name: The name for the map
Width: The width of the map in pixels
Height: The height of the map in pixels
Node Alignment: When devices are added to the map, this will align the devices to an invisible grid this many pixels wide, which can help to make the maps look better. This can be set to 0 to disable.
Background: An image (PNG/JPG) up to 2MB can be uploaded as a background.
These settings can be changed at any stage by clicking on the \"Edit Map Settings\" button in the top-left of the editor.
Once you have a map, you can start by adding \"nodes\" to the map. A node represents a device, or an external point in the network (e.g. the internet). To add a node, click on the \"Add Node\" button in the control bar, then click on the map area where you want to add the node. You will then be asked for the following information:
Label: The text to display on this point in the network
Device: If this node represents a device, you can select the device from the drop-down. This will overwrite the label, which you can then change if you want to.
Style: You can select the style of the node. If a device has been selected you can choose the LibreNMS icon by choosing \"Device Image\". You can also choose \"Icon\" to select an image for the device.
Icon: If you choose \"Icon\" in the style box, you can select from a list of images to represent this node
There are also options to choose the size and colour of the node and the font.
Once you have finished choosing the options for the node, you can press Save to add it to the map. NOTE: This does not save anything to the database immediately. You need to click on the \"Save Map\" button in the top-right to save your changes to the database.
You can edit a node at any time by selecting it on the map and clicking on the \"Edit Node\" button in the control bar.
You can also modify the default settings for all new nodes by clicking on the \"Edit Node Default\" button at the top of the page.
Once you have 2 or more nodes, you can add links between the nodes. These are called edges in the editor. To add a link, click on the \"Add Edge\" button in the control bar, then click on one of the nodes you want to link and drag the cursor to the second node that you want to link. You will then be prompted for the following information:
From: The node that the link runs from (it will default to the first node you selected)
To: The node that the link runs to (it will default to the second node you selected)
Port: If the From or To node is linked to a device, you can select an interface from one of the devices and the custom map will show traffic utilisation for the selected interface.
Reverse Port Direction: If the selected port displays data in the wrong direction for the link, you can reverse it by toggling this option.
Line Style: You can try different line styles, especially if you are running multiple links between the same 2 nodes
Show percent usage: Choose whether to have text on the lines showing the link utilisation as a percentage
Recenter Line: If you tick this box, the centre point of the line will be moved back to half way between the 2 nodes when you click on the save button.
Once you have finished choosing the options for the edge, you can press Save to add it to the map. NOTE: This does not save anything to the database immediately. You need to click on the \"Save Map\" button in the top-right to save your changes to the database.
Once you press save, it will create 3 objects on the screen: 2 arrows and a round node in the middle. Having the 3 objects allows you to move the midpoint of the line off centre, and also allows bandwidth information to be displayed for both directions of the link.
You can edit an edge at any time by selecting it on the map and clicking on the \"Edit Edge\" button in the control bar.
You can also modify the default settings for all new edges by clicking on the \"Edit Edge Default\" button at the top of the page.
When you drag items around the map, some of the lines will bend. This will cause a \"Re-Render Map\" button to appear at the top-right of the page. This button can be clicked on to cause all lines to be re-drawn the way they will be shown in the viewer.
Once you are happy with a set of changes that you have made, you can click on the \"Save Map\" button in the top-right of the page to commit changes to the database. This will cause anyone viewing the map to see the new version the next time their page refreshes.
You can add your own images to use on the custom map by copying files into the html/images/custommap/icons/ directory. Any files with a .svg, .png or .jpg extension will be shown in the image selection drop-down in the custom map editor.
"},{"location":"Extensions/Customizing-the-Web-UI/","title":"Customizing the Web UI","text":""},{"location":"Extensions/Customizing-the-Web-UI/#custom-menu-entry","title":"Custom menu entry","text":"
Create the file resources/views/menu/custom.blade.php
"},{"location":"Extensions/Customizing-the-Web-UI/#custom-device-menu-action","title":"Custom device menu action","text":"
You can add custom external links in the menu on the device page.
This feature allows you to easily link applications to related systems, as shown in the example of Open-audIT.
The url value is parsed by the Laravel Blade templating engine. You can access device variables such as $device->hostname, $device->sysName and use full PHP.
Field Description url Url blade template resulting in valid url. Required. title Title text displayed in the menu. Required. icon Font Awesome icon class. Default: fa-external-link external Open link in new window. Default: true action Show as action on device list. Default: false"},{"location":"Extensions/Customizing-the-Web-UI/#launching-windows-programs-from-the-librenms-device-menu","title":"Launching Windows programs from the LibreNMS device menu","text":"
You can launch windows programs from links in LibreNMS, but it does take some registry entries on the client device. Save the following as winbox.reg, edit for your winbox.exe path and double click to add to your registry.
"},{"location":"Extensions/Customizing-the-Web-UI/#setting-the-primary-device-menu-action","title":"Setting the primary device menu action","text":"
You can change the primary (clickable) action on the device without having to open the dropdown menu. The primary button is edit device by default.
settings/webui/device
lnms config:set html.device.primary_link web\n
Value Description edit Edit device web Connect to the device via https/http ssh launch ssh:// protocol to the device, make sure you have a handler registered telnet launch telnet:// protocol to the device capture Link to the device capture page custom1 Custom Link 1 custom2 Custom Link 2 custom3 Custom Link 3 custom4 Custom Link 4 custom5 Custom Link 5 custom6 Custom Link 6 custom7 Custom Link 7 custom8 Custom Link 8
!!! Custom http, ssh, telnet ports
Custom ports can be set through the device settings misc tab and will be appended to the URI. An empty value will not append anything, and the standard port is used.
A custom ssh port set to 2222 will result in ssh://10.0.0.0:2222
A custom telnet port set to 2323 will result in telnet://10.0.0.0:2323
Create customised dashboards in LibreNMS per user. You can share dashboards with other users. You can also make a custom dashboard and default it for all users in LibreNMS.
LibreNMS has a whole list of Widgets to select from.
Alerts Widget: Displays all alert notifications.
Availability Map: Displays all devices with colored tiles: green for up, yellow for warning (device has been restarted in the last 24 hours), red for down. You can also list all services and ignored/disabled devices in this widget.
Components Status: Lists all components in Ok, Warning, and Critical states.
Device Summary horizontal: List device totals, up, down, ignored, disabled. Same for ports and services.
Device Summary vertical: List device totals, up, down, ignored, disabled. Same for ports and services.
Eventlog: Displays all events with your devices and LibreNMS.
External Image: Can be used to show external images, or images from inside LibreNMS, on your dashboard.
Globe Map: Will display map of the globe.
Graph: Can be used to display graphs from devices.
Graylog: Displays syslog entries from Graylog.
Notes: Use for HTML tags, embedded links and external web pages, or just notes in general.
Server Stats: Will display gauges for CPU, Memory, Storage usage. Note the device type has to be listed as \"Server\".
Syslog: Displays all syslog entries.
Top Devices: By Traffic, or Uptime, or Response time, or Poller Duration, or Processor load, or Memory Usage, or Storage Usage.
Top Interfaces: Lists top interfaces by traffic utilization.
World Map: Displays the locations of all your devices, from sysLocation or the sysLocation override.
<iframe src=\"your_url\" frameBorder=\"0\" width=\"100%\" height = \"100%\">\n <p>Your browser does not support iframes.</p>\n</iframe>\n
Note you may need to play with the width and height and also size your widget properly.
src=\"url\" needs to be URL to webpage you are linking to. Also some web pages may not support html embedded or iframe.
"},{"location":"Extensions/Dashboards/#how-to-create-ports-graph","title":"How to create ports graph","text":"
In the dashboard where you want to create an interface graph, select the widget called
'Graph', then select \"Port\" -> \"Bits\".
Note: you can map the port by description, alias, or port id. You will need to know one of these in order to map the port to the graph.
"},{"location":"Extensions/Dashboards/#dimension-parameter-replacement-for-generic-image-widget","title":"Dimension parameter replacement for Generic-image widget","text":"
When using the Generic-image widget you can provide the width and height of the widget with your request. This will ensure that the image will fit nicely with the dimensions of the Generic-image widget. You can add @AUTO_HEIGHT@ and @AUTO_WIDTH@ to the Image URL as parameters.
For Dell OpenManage support you will need to install Dell OpenManage (yeah - really :)) (minimum 5.1) on the device you want to monitor. Ensure that net-snmp is configured to use srvadmin; you should see something similar to:
master agentx\nview all included .1\naccess notConfigGroup \"\" any noauth exact all none none\nsmuxpeer .1.3.6.1.4.1.674.10892.1\n
Restart net-snmp:
service snmpd restart\n
Ensure that srvadmin is started, this is usually done by executing:
Download OpenManage from Dell's support page and install it on your Windows server. Make sure you have SNMP set up and running on the server.
LibreNMS has the ability to show you a dynamic network map based on device dependencies that have been configured. These maps are accessed through the following menu options:
The rule is based on the MySQL structure your data is in, such as tablename.columnname. If you already know the entity you want, you can browse around inside MySQL using show tables and desc <tablename>.
As a working example and a common question, let's assume you want to group devices by hostname. If your hostname format is dcX.[devicetype].example.com. You would use the field devices.hostname.
If you want to group them by device type, you would add a rule for routers of devices.hostname endswith rtr.example.com.
If you want to group them by DC, you could use the rule devices.hostname regex dc1\\..*\\.example\\.com (Don't forget to escape periods in the regex)
You can create static groups (and convert dynamic groups to static) to put specific devices in a group. Just select static as the type and select the devices you want in the group.
You can now select this group from the Devices -> All Devices link in the navigation at the top. You can also use the group to map alert rules to by creating an alert mapping Overview -> Alerts -> Rule Mapping.
The LibreNMS dispatcher service (librenms-service.py) is a new method of running the poller service at set times. It does not replace the php scripts, just the cron entries running them.
"},{"location":"Extensions/Dispatcher-Service/#external-requirements","title":"External Requirements","text":""},{"location":"Extensions/Dispatcher-Service/#a-recent-version-of-python","title":"A recent version of Python","text":"
The LibreNMS service requires Python 3, and some features require behaviour only found in Python 3.4+.
If you want to use distributed polling, you'll need a Redis instance to coordinate the nodes. It's recommended that you do not share the Redis database with any other system - by default, Redis supports up to 16 databases (numbered 0-15). You can also use Redis on a single host if you want
It's strongly recommended that you deploy a resilient cluster of redis systems, and use redis-sentinel.
You should not rely on the password for the security of your system. See https://redis.io/topics/security
LibreNMS can still use memcached as a locking mechanism when using distributed polling. So you can configure memcached for this purpose unless you have updates disabled.
See Locking Mechanisms at https://docs.librenms.org/Extensions/Distributed-Poller/
You should already have this, but the pollers do need access to the SQL database. The LibreNMS service runs faster and more aggressively than the standard poller, so keep an eye on the number of open connections and other important health metrics.
Connection settings are required in .env. The .env file is generated after composer install and APP_KEY and NODE_ID are set. Remember that the APP_KEY value must be the same on all your pollers.
#APP_KEY= #Required, generated by composer install\n#NODE_ID= #Required, generated by composer install\n\nDB_HOST=localhost\nDB_DATABASE=librenms\nDB_USERNAME=librenms\nDB_PASSWORD=\n
Once you have your Redis database set up, configure it in the .env file on each node. Configure the redis cache driver for distributed locking.
There are a number of options - most of them are optional if your redis instance is standalone and unauthenticated (neither recommended).
##\n## Standalone\n##\nREDIS_HOST=127.0.0.1\nREDIS_PORT=6379\nREDIS_DB=0\nREDIS_TIMEOUT=60\n\n# If requirepass is set in redis set everything above as well as: (recommended)\nREDIS_PASSWORD=PasswordGoesHere\n\n# If ACL's are in use, set everything above as well as: (highly recommended)\nREDIS_USERNAME=UsernameGoesHere\n\n##\n## Sentinel\n##\nREDIS_SENTINEL=redis-001.example.org:26379,redis-002.example.org:26379,redis-003.example.org:26379\nREDIS_SENTINEL_SERVICE=mymaster\n\n# If requirepass is set in sentinel, set everything above as well as: (recommended)\nREDIS_SENTINEL_PASSWORD=SentinelPasswordGoesHere\n\n# If ACL's are in use, set everything above as well as: (highly recommended)\nREDIS_SENTINEL_USERNAME=SentinelUsernameGoesHere\n
For more information on ACL's, see https://redis.io/docs/management/security/acl/
Note that if you use Sentinel, you may still need REDIS_PASSWORD, REDIS_USERNAME, REDIS_DB and REDIS_TIMEOUT - Sentinel just provides the address of the instance currently accepting writes and manages failover. It's possible (and recommended) to have authentication both on Sentinel and the managed Redis instances.
There are also some SQL options, but these should be inherited from your LibreNMS web UI configuration.
Logs are sent to the system logging service (usually journald or rsyslog) - see https://docs.python.org/3/library/logging.html#logging-levels for the options available.
$config['distributed_poller'] = true; # Set to true to enable distributed polling\n$config['distributed_poller_name'] = php_uname('n'); # Uniquely identifies the poller instance\n$config['distributed_poller_group'] = 0; # Which group to poll\n
"},{"location":"Extensions/Dispatcher-Service/#tuning-the-number-of-workers","title":"Tuning the number of workers","text":"
See https://your_librenms_install/poller
You want to keep Consumed Worker Seconds comfortably below Maximum Worker Seconds. The closer the values are to each other, the flatter the CPU graph of the poller machine, meaning that you are utilizing your CPU resources well. As long as Consumed WS stays below Maximum WS and Devices Pending is 0, you should be ok.
If Consumed WS is below Maximum WS and Devices Pending is > 0, your hardware is not up to the task.
Maximum WS equals the number of workers multiplied by the number of seconds in the polling period (300 by default).
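For example (the worker count here is arbitrary), a poller running 16 workers on the default 300-second polling period gives:

```shell
# 16 workers, default 300-second polling period
workers=16
polling_period=300
maximum=$((workers * polling_period))
echo "Maximum Worker Seconds: $maximum"   # prints 4800
```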
The watchdog scheduler is disabled by default. You can enable it by setting the following:
$config['service_watchdog_enabled'] = true;\n
The watchdog scheduler will check that the poller log file has been written to within the last poll period. If there has been no change to the log file since then, the watchdog will restart the polling service. The poller log file is set by $config['log_file'] and defaults to ./logs/librenms.log
Once the LibreNMS service is installed, the cron scripts used by LibreNMS to start alerting, polling, discovery and maintenance tasks are no longer required and must be disabled either by removing or commenting them out. The service handles these tasks when enabled.
"},{"location":"Extensions/Dispatcher-Service/#systemd-service-with-watchdog","title":"systemd service with watchdog","text":"
This service file is an alternative to the above service file. It uses the systemd WatchdogSec= option to restart the service if it does not receive a keep-alive from the running process.
A systemd unit file can be found in misc/librenms-watchdog.service. To install run:
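(The commands below are a sketch; the install path and final unit name are assumptions - adjust them for your environment.)

```shell
# Assumes a default /opt/librenms install; adjust paths as needed
cp /opt/librenms/misc/librenms-watchdog.service /etc/systemd/system/librenms.service
systemctl daemon-reload
systemctl enable --now librenms.service
```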
This requires: python3-systemd (or python-systemd on older systems) or https://pypi.org/project/systemd-python/ If you run this systemd service without python3-systemd it will restart every 30 seconds.
* may only be installed on one server (however, some can be clustered)
Distributed Polling allows the workers to be spread across additional servers for horizontal scaling. Distributed polling is not intended for remote polling.
Devices can be grouped together into a poller_group to pin these devices to a single or a group of designated pollers.
All pollers need to write to the same set of RRD files, preferably via RRDcached.
It is also a requirement that at least one locking service is in place to which all pollers can connect. There are currently three locking mechanisms available:
memcached
redis (preferred)
sql locks (default)
All of the above locking mechanisms are natively supported in LibreNMS. If none are specified, it will default to using SQL.
"},{"location":"Extensions/Distributed-Poller/#requirements-for-distributed-polling","title":"Requirements for distributed polling","text":"
These requirements are above the normal requirements for a full LibreNMS install.
rrdtool version 1.4 or above
At least one locking mechanism configured
a rrdcached install
By default, all hosts are shared and have the poller_group = 0. To pin a device to a poller, set it to a value greater than 0 and set the same value in the poller's config with distributed_poller_group. One can also specify a comma separated string of poller groups in distributed_poller_group. The poller will then poll devices from any of the groups listed. If new devices get added from the poller they will be assigned to the first poller group in the list unless the group is specified when adding the device.
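For example (the group numbers are arbitrary), a poller responsible for groups 2 and 3 would carry the following in its config:

```php
// This poller polls devices in groups 2 and 3; devices added from this
// poller land in group 2 (the first group listed) unless specified otherwise
$config['distributed_poller_group'] = '2,3';
```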
The following is a standard config, combined with a locking mechanism below:
Preferably you should set the memcached server settings via the web UI. Under Settings > Global Settings > Distributed poller, you fill out the memcached host and port, and then in your .env file you will need to add:
CACHE_DRIVER=memcached\n
If you want to use memcached, you will also need to install an additional Python 3 python-memcached package."},{"location":"Extensions/Distributed-Poller/#example-setups","title":"Example Setups","text":""},{"location":"Extensions/Distributed-Poller/#openstack","title":"OpenStack","text":"
Below is an example setup based on a real deployment which at the time of writing covers over 2,500 devices and 50,000 ports. The setup is running within an OpenStack environment with some commodity hardware for remote pollers. Here's a diagram of how you can scale LibreNMS out:
This is a distributed setup that I created for a regional hybrid ISP (fixed wireless/fiber optic backhaul). It was created at around the ~4,000 device mark to transition from multiple separate instances to one more central. When I left the company, it was monitoring:
10,800 devices
307,700 ports
37,000 processors
17,000 wireless sensors
~480,000 other objects/sensors
As our goal was more to catch alerts and monitor overall trends we went with a 10 minute polling cycle. Polling the above would take roughly 8 minutes and 120GHz worth of CPU across all VMs. CPUs were older Xeons (E5). The diagram below shows the CPU and RAM utilization of each VM during polling. Disk space utilization for SQL/RRD is also included.
Device discovery was split off into its own VM as that process would take multiple hours.
Workers were assigned in the following way:
Web/RRD Server:
alerting: 1
billing: 2
discovery: 0
ping: 1
poller: 10
services: 16
Discovery Server:
alerting: 1
billing: 2
discovery: 60
ping: 1
poller: 5
services: 8
Pollers
alerting: 1
billing: 2
discovery: 0
ping: 1
poller: 40
services: 8
Each poller had on average 19,500/24,000 worker seconds consumed.
RRDCached is incredibly important; this setup ran on spinning disks due to the wonders of caching.
I very strongly recommend setting up recursive DNS on your discovery and polling servers. While I used DNSMASQ, there are many options.
SQL tuner will help you quite a bit. You'll also want to increase your maximum connections amount to support the pollers. This setup was at 500. Less important, but putting ~12GB of the database in RAM was reported to have helped web UI performance as well as some DB-heavy Tableau reports. RAM was precious in this environment or it would've been more, but it wasn't necessary either.
Be careful with keeping the default value for 'Device Down Retry' as it can eat up quite a lot of poller activity. I freed up over 20,000 worker seconds when setting this to only happen once or twice per 10-minute polling cycle. The impact of this will vary depending on the percentage of down devices in your system. This example had it set at 400 seconds.
Also be wary of keeping event log and syslog entries for too long as it can have a pretty negative effect on web UI performance.
To resolve an issue with large device groups, the PHP-FPM max_input_vars setting was increased to 20000.
All of these VMs were within the same physical data center so latency was minimal.
The decision to use Redis over the other locking methods was arbitrary, but in over two years I never had to touch that VM aside from security updates.
How you set the distribution up is entirely up to you. You can choose to host the majority of the required services on a single virtual machine or server and then a poller to actually query the devices being monitored, all the way through to having a dedicated server for each of the individual roles. Below are notes on what you need to consider both from the software layer, but also connectivity.
"},{"location":"Extensions/Distributed-Poller/#web-api-layer","title":"Web / API Layer","text":"
This is typically Apache but we have setup guides for both Nginx and Lighttpd which should work perfectly fine. There is nothing unique about the role this service is providing except that if you are adding devices from this layer then the web service will need to be able to connect to the end device via SNMP and perform an ICMP test.
It is advisable to run RRDCached within this setup so that you don't need to share the rrd folder via a remote file share such as NFS. The web service can then generate rrd graphs via RRDCached. If RRDCached isn't an option then you can mount the rrd directory to read the RRD files directly.
Central storage should be provided so all RRD files can be read from and written to in one location. As suggested above, it's recommended that RRD Cached is configured and used.
For this example, we are running RRDCached to allow all pollers and web/api servers to read/write to the rrd files with the rrd directory also exported by NFS for simple access and maintenance.
Pollers can be installed and run from anywhere, the only requirements are:
They can access the Memcache instance
They can create RRD files via some method such as a shared filesystem or RRDTool >=1.5.5
They can access the MySQL server
You can either assign pollers to groups and set a poller group against certain devices, meaning those devices will only be processed by certain pollers (the default poller group is 0), or you can assign all pollers to the default poller group so they process any and all devices.
This will provide the ability to have a single poller behind a NAT firewall monitor internal devices and report back to your central system. You will then be able to monitor those devices from the Web UI as normal.
Another benefit is that you can provision N+x pollers, e.g. if you know that you require three pollers to process all devices within 300 seconds, then adding a 4th poller means that should any single poller fail, the remaining three will still complete polling in time. You could also use this to take a poller out of service for maintenance, e.g. OS updates and software updates.
It is extremely advisable to either run a central recursive DNS server such as pdns-recursor and have all of your pollers use it, or to install a recursive DNS server on each poller - the volume of DNS requests on large installs can be significant and will slow polling down enough to cause issues with a large number of devices.
One last thing to make sure of: all pollers writing to the same DB need to have the same APP_KEY value set in the .env file.
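One way to check this is to compare the key on each machine; the install path below is the common default and may differ on your systems:

```shell
# On the primary server, read the application key:
grep '^APP_KEY=' /opt/librenms/.env
# Copy that exact APP_KEY=... line into /opt/librenms/.env on every poller,
# then clear the cached config:
lnms config:clear
```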
How you configure your discovery processes depends on your setup.
Cron based polling
It's not necessary to run discovery services on all pollers. In fact, you should only run one discovery process per poller group. Designate a single poller to run discovery (or a separate server if required).
If you run billing, you can do this in one of two ways:
Run poll-billing.php and calculate-billing.php on a single poller, which will create billing information for all bills. Please note this poller must have SNMP access to all of your devices which have ports within a bill.
The other option is to enable $config['distributed_billing'] = true; in config.php. Then run poll-billing.php on a single poller per group. You can run calculate-billing.php on any poller but only one poller overall.
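For a cron-based setup, the single designated billing poller's crontab might carry entries like these (the schedule and install path are the usual defaults, using the script names given above; adjust for your layout):

```shell
# /etc/cron.d/librenms on the designated billing poller only
*/5 * * * *  librenms  /opt/librenms/poll-billing.php >> /dev/null 2>&1
01 * * * *   librenms  /opt/librenms/calculate-billing.php >> /dev/null 2>&1
```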
Dispatcher service: When using the dispatcher service, discovery can run on all nodes.
Normally, LibreNMS sends an ICMP ping to the device before polling to check if it is up or down. This check is tied to the poller frequency, which is normally 5 minutes. This means it may take up to 5 minutes to find out if a device is down.
Some users may want to know if devices stop responding to ping more quickly than that. LibreNMS offers a ping.php script to run ping checks as quickly as possible, without the increased SNMP load on your devices that switching to 1 minute polling would cause.
WARNING: If you do not have an alert rule that alerts on device status, enabling this will be a waste of resources. You can find one in the Alert Rules Collection.
"},{"location":"Extensions/Fast-Ping-Check/#setting-the-ping-check-to-1-minute","title":"Setting the ping check to 1 minute","text":"
1: If you are using RRDCached, stop the service.
- This will flush all pending writes so that the rrdstep.php script can change the steps.
2: Change the ping_rrd_step setting in config.php
poller/rrdtool
lnms config:set ping_rrd_step 60
3: Update the rrd files to change the step (step is hardcoded at file creation in rrd files)
./scripts/rrdstep.php -h all
4: Add the following line to /etc/cron.d/librenms to allow 1 minute ping checks
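A typical entry, assuming the default /opt/librenms install path:

```shell
# /etc/cron.d/librenms -- run the fast ping check every minute
* * * * * librenms /opt/librenms/ping.php >> /dev/null 2>&1
```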
NOTE: If you are using distributed pollers you can restrict a poller to a group by appending -g to the cron entry. Alternatively, you should only run ping.php on a single node.
Cron only has a resolution of one minute, so for sub-minute ping checks we need to adapt both ping and alerts entries. We add two entries per function, but add a delay before one of these entries.
Remember, you need to remove the original ping.php and alerts.php entries in crontab before proceeding!
1: Set ping_rrd_step
poller/rrdtool
lnms config:set ping_rrd_step 30
2: Update the rrd files
./scripts/rrdstep.php -h all
3: Update cron (removing any other ping.php or alert.php entries)
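Following the description above (two staggered entries per function), the crontab could look like this; paths assume the default install location:

```shell
# /etc/cron.d/librenms -- 30-second checks via two staggered entries each
* * * * * librenms /opt/librenms/ping.php >> /dev/null 2>&1
* * * * * librenms sleep 30 && /opt/librenms/ping.php >> /dev/null 2>&1
* * * * * librenms /opt/librenms/alerts.php >> /dev/null 2>&1
* * * * * librenms sleep 30 && /opt/librenms/alerts.php >> /dev/null 2>&1
```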
The ping.php script respects device dependencies, but the main poller does not (for technical reasons). However, using this script does not disable the icmp check in the poller and a child may be reported as down before the parent.
ping.php uses much the same settings as the poller's fping, with one exception: retries is used instead of count. ping.php does not measure loss or average response time, only up/down, so once a device responds it stops pinging it.
This is currently being tested, use at your own risk.
LibreNMS can be used with a MariaDB Galera Cluster. This is a Multi Master cluster, meaning each node in the cluster can read and write to the database. They all have the same ability. LibreNMS will randomly choose a working node to read and write requests to.
For more information see https://laravel.com/docs/database#read-and-write-connections
It is best practice to have a minimum of 3 nodes in the cluster. An odd number of nodes is recommended so that, in the event nodes disagree on data, they have a tie breaker.
It's recommended that all servers be similar in hardware performance, as cluster performance can be limited by the slowest server in the cluster.
Backup the database before starting, and backing up the database regularly is still recommended even in a working cluster environment.
"},{"location":"Extensions/Galera-Cluster/#install-and-configure-galera","title":"Install and Configure Galera","text":""},{"location":"Extensions/Galera-Cluster/#install-galera4-and-mariadb-server","title":"Install Galera4 and MariaDB Server","text":"
These can be obtained from your OS package manager. For example, on Ubuntu:
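The package names below are the usual Ubuntu ones for MariaDB with Galera 4; they may differ on other releases:

```shell
apt install mariadb-server mariadb-client galera-4
```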
Change the following values for your environment. * wsrep_cluster_address - All the IP addresses of your nodes. * wsrep_cluster_name - Name of the cluster; should be the same on all nodes. * wsrep_node_address - IP address of this node. * wsrep_node_name - Name of this node."},{"location":"Extensions/Galera-Cluster/#edit-librenms-env","title":"Edit LibreNMS .env","text":"
LibreNMS supports up to 9 Galera nodes, which you define in the .env file. For each node you can define whether this LibreNMS installation/poller is able to write, read or both to that node. The Galera nodes you define here can be the same or different for each LibreNMS poller. If you have a poller that should only write/read to one Galera node, simply add one DB_HOST and omit all the rest. This allows you to precisely control which Galera nodes a LibreNMS poller is reading from and writing to.
DB_HOST is always set to read/write.
DB_HOST must be set, however, it does not have to be the same on each poller, it can be different as long as it's part of the same galera cluster.
If the node that is set to DB_HOST is down, things like lnms db command no longer work, as they only use DB_HOST and don't failover to other nodes.
Set DB_CONNECTION=mysql_cluster to enable
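As an illustrative sketch only (the variable names for the additional hosts are an assumption here; check .env.example and the docs for your LibreNMS version):

```shell
# .env -- mysql_cluster connection sketch
DB_CONNECTION=mysql_cluster
DB_HOST=10.0.0.11      # always read/write
DB_HOST_2=10.0.0.12    # assumed key name for an additional cluster node
DB_HOST_3=10.0.0.13    # assumed key name for an additional cluster node
```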
DB_STICKY can be used if you are pulling out-of-sync data from the database in a read request. For more information see https://laravel.com/docs/database#the-sticky-option
To see some stats on how the Galera cluster is performing, run the following.
lnms db
In the database, run the following MySQL query:
SHOW GLOBAL STATUS LIKE 'wsrep_%';
Variable Name / Value / Notes:
wsrep_cluster_size = 2 (current number of nodes in the cluster)
wsrep_cluster_state_uuid = e71582f3-cf14-11eb-bcf6-a23029e16405 (last transaction UUID; should be the same on each node)
wsrep_connected = On (On = connected with other nodes)
wsrep_local_state_comment = Synced (synced with other nodes)"},{"location":"Extensions/Galera-Cluster/#restarting-the-entire-cluster","title":"Restarting the Entire Cluster","text":"
In a cluster environment, steps should be taken to ensure that ALL nodes are not offline at the same time. Failed nodes can recover without issue as long as one node remains online. In the event that ALL nodes are offline, the following should be done to ensure you are starting the cluster with the most up-to-date database. To do this, log in to each node and run the following.
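A sketch of the standard MariaDB recovery procedure (the grastate.dat path assumes the default datadir):

```shell
# On each node, inspect the saved cluster state; the node with the highest
# seqno (or safe_to_bootstrap: 1) has the most up-to-date database
cat /var/lib/mysql/grastate.dat
# On that node only, bootstrap a new cluster:
galera_new_cluster
# Then start MariaDB normally on the remaining nodes:
systemctl start mariadb
```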
We have a simple integration for GateOne; you will be redirected to your GateOne command line frontend to access your equipment. (Currently this only works with SSH.)
GateOne itself isn't included within LibreNMS, you will need to install this separately either on the same infrastructure as LibreNMS or as a totally standalone appliance. The installation is beyond the scope of this document.
Config is simple, include the following in your config.php:
Note: You must use the full url including the trailing /!
We also support prefixing the currently logged-in LibreNMS user to the SSH connection URL that is created, e.g. ssh://admin@localhost. To enable this, put the following in your config.php:
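A sketch of both config.php settings (the key names are believed correct for the GateOne integration but should be verified against the current LibreNMS docs):

```php
// Full URL of your GateOne install, including the trailing /
$config['gateone']['server'] = 'https://gateone.example.com/';
// Prefix the logged-in LibreNMS user, e.g. ssh://admin@device
$config['gateone']['use_librenms_user'] = true;
```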
We have a simple integration for Graylog: from within LibreNMS you will be able to view any logs that have been parsed by the syslog input within Graylog itself. This includes logs from devices which aren't in LibreNMS. You can also see logs for a specific device under the logs section for the device.
Currently, LibreNMS does not associate shortnames from Graylog with full FQDNs. If you have your devices in LibreNMS using full FQDNs, such as hostname.example.com, be aware that rsyslogd, by default, sends the shortname only. To fix this, add
$PreserveFQDN on
to your rsyslog config to send the full FQDN so device logs will be associated correctly in LibreNMS. Also see near the bottom of this document for tips on how to enable/suppress the domain part of hostnames in syslog-messages for some platforms.
Graylog itself isn't included within LibreNMS, you will need to install this separately either on the same infrastructure as LibreNMS or as a totally standalone appliance.
Config is simple, here's an example based on Graylog 2.4:
Graylog messages are stored using GMT timezone. You can display graylog messages in LibreNMS webui using your desired timezone by setting the following option using lnms config:set:
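For example, using the graylog.server key shown later on this page plus the timezone option just described (the other key names follow the same pattern but should be verified for your version):

```shell
lnms config:set graylog.server graylog.example.com
lnms config:set graylog.port 9000
lnms config:set graylog.username librenms
lnms config:set graylog.password secret
lnms config:set graylog.timezone America/New_York
```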
If you don't want to use an admin account for the connection to Graylog: log into http:///api/api-browser/global/index.html using Graylog admin credentials, browse to Roles: User roles, click Create a new role, and in the JSON body paste this:
If you have enabled TLS for the Graylog API and you are using a self-signed certificate, please make sure that the certificate is trusted by your LibreNMS host, otherwise the connection will fail. Additionally, the certificate's Common Name (CN) has to match the FQDN or IP address specified in
external/graylog
lnms config:set graylog.server example.com
"},{"location":"Extensions/Graylog/#match-any-address","title":"Match Any Address","text":"
If you want to assign log entries to a device by matching the source address of the log entries against any IP address of the device, instead of only against the primary address and the hostname, you can activate this function using
There are two configuration parameters that influence the behaviour of the "Recent Graylog" table on the overview page of the devices.
external/graylog
lnms config:set graylog.device-page.rowCount 10
Sets the maximum number of rows to be displayed (default: 10)
external/graylog
lnms config:set graylog.device-page.loglevel 7
You can set which log levels should be displayed on the overview page. (default: 7, min: 0, max: 7)
external/graylog
lnms config:set graylog.device-page.loglevel 4
Shows only entries with a log level less than or equal to 4 (Emergency, Alert, Critical, Error, Warning).
You can set a default Log Level Filter with
lnms config:set graylog.loglevel 7
(applies to /graylog and /device/device=/tab=logs/section=graylog/; min: 0, max: 7)"},{"location":"Extensions/Graylog/#domain-and-hostname-handling","title":"Domain and hostname handling","text":"
Suppressing/enabling the domain part of a hostname for specific platforms
You should see if what you get in syslog/Graylog matches up with your configured hosts first. If you need to modify the syslog messages from specific platforms, this may be of assistance:
This is a very quick walk-through of writing your own commands for the IRC-Bot.
First of all, create a file in includes/ircbot, the file-name should be in this format: command.inc.php.
When editing the file, do not open or close PHP-tags. Any variable you assign will be discarded as soon as your command returns. Some variables, especially all listed under $this->, have special meanings or effects. Before a command is executed, the IRC-Bot ensures that the MySQL-Socket is working, that $this->user points to the right user and that the user is authenticated. Below you will find a table with related functions and attributes. You can chain-load any built-in command by calling $this->_command("My Parameters"). You cannot chain-load external commands.
To enable your command, edit your config.php and add something like this:
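For instance, to register a command file includes/ircbot/mycommand.inc.php (the exact shape, an array or comma-delimited string, follows the irc_external option described below; the command name here is a placeholder):

```php
// config.php -- load includes/ircbot/mycommand.inc.php as .mycommand
$config['irc_external'][] = 'mycommand';
```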
"},{"location":"Extensions/IRC-Bot-Extensions/#functions-and-attributes","title":"Functions and Attributes","text":"
... that are accessible from within an extension
"},{"location":"Extensions/IRC-Bot-Extensions/#functions","title":"Functions","text":"Function( (Type) $Variable [= Default] [,...] ) / Returns / Description:
$this->getChan( ) : String - Returns channel of current event.
$this->getData( (boolean) $Block = false ) : String/Boolean - Returns a line from the IRC-Buffer if it's not matched against any other command. If $Block is true, wait until a suitable line is returned.
$this->getUser( ) : String - Returns nick of current user. Not to be confused with $this->user!
$this->get_user( ) : Array - See $this->user in Attributes.
$this->irc_raw( (string) $Protocol ) : Boolean - Sends raw IRC-Protocol.
$this->isAuthd( ) : Boolean - true if the user is authenticated.
$this->joinChan( (string) $Channel ) : Boolean - Joins given $Channel.
$this->log( (string) $Message ) : Boolean - Logs given $Message to STDOUT.
$this->read( (string) $Buffer ) : String/Boolean - Returns a line from given $Buffer or false if there's nothing suitable inside the Buffer. Please use $this->getData() for handler-safe data retrieval.
$this->respond( (string) $Message ) : Boolean - Responds to the request, auto-detecting channel or private message."},{"location":"Extensions/IRC-Bot-Extensions/#attributes","title":"Attributes","text":"Attribute / Type / Description:
$params : String - Contains all arguments that are passed to the .command.
$this->chan : Array - Channels that are configured.
$this->commands : Array - Contains accessible commands.
$this->config : Array - Contains $config from config.php.
$this->data : String - Contains raw IRC-Protocol.
$this->debug : Boolean - Debug-Flag.
$this->external : Array - Contains loaded extra commands.
$this->nick : String - Bot's nick on the IRC.
$this->pass : String - IRC-Server's passphrase.
$this->port : Int - IRC-Server's port-number.
$this->server : String - IRC-Server's hostname.
$this->ssl : Boolean - SSL-Flag.
$this->tick : Int - Interval to check buffers in microseconds.
$this->user : Array - Array containing details about the user that sent the request."},{"location":"Extensions/IRC-Bot-Extensions/#example","title":"Example","text":"
includes/ircbot/join-ng.inc.php
if ($this->user['level'] != 10) {
    return $this->respond("Sorry, only admins can make me join.");
}
if ($this->getChan() == "#noc") {
    $this->respond("Joining $params");
    $this->joinChan($params);
} else {
    $this->respond("Sorry, only people from #noc can make me join.");
}
LibreNMS has an easy to use IRC-Interface for basic tasks like viewing last log-entry, current device/port status and such.
By default the IRC-Bot will not start when executed and will return an error until at least $config['irc_host'] and $config['irc_port'] have been specified inside config.php. (To start the IRC-Bot, run ./irc.php)
If no channel has been specified with $config['irc_chan'], ##librenms will be used. The default Nick for the bot is LibreNMS.
The Bot will reply the same way it's being called. If you send it the commands via Query, it will respond in the Query. If you send the commands via a Channel, then it will respond in the Channel.
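Drawing on the defaults table below, a minimal config.php setup might look like this (the hostname is a placeholder):

```php
// config.php -- minimal IRC-Bot configuration
$config['irc_host'] = 'irc.example.com'; // required; wrap IPv6 in [], e.g. [::1]
$config['irc_port'] = 6667;              // required; prepend + for SSL, e.g. '+6697'
$config['irc_chan'] = '##librenms';      // optional; this is the default channel
$config['irc_nick'] = 'LibreNMS';        // optional; default nick
```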
"},{"location":"Extensions/IRC-Bot/#configuration-defaults","title":"Configuration & Defaults","text":"Option / Default-Value / Notes:
$config['irc_alert'] : false - Optional; Enables Alerting-Socket. EXPERIMENTAL
$config['irc_alert_chan'] : false - Optional; Multiple channels can be defined as Array or delimited with ,. EXPERIMENTAL
$config['irc_alert_utf8'] : false - Optional; Enables use of strikethrough in alerts via UTF-8 encoded characters. Might cause trouble for some clients.
$config['irc_alert_short'] : false - Optional; Send a one line alert summary instead of a multi-line detailed alert.
$config['irc_authtime'] : 3 - Optional; Defines how long in hours an auth-session is valid.
$config['irc_chan'] : ##librenms - Optional; Multiple channels can be defined as Array or delimited with ,. Passwords are defined after a space-character.
$config['irc_debug'] : false - Optional; Enables debug output (wall of text)
$config['irc_external'] : (empty) - Optional; Array or ,-delimited string with commands to include from includes/ircbot/*.inc.php
$config['irc_host'] : (empty) - Required; Domain or IP to connect. If it's an IPv6 address, embed it in []. (Example: [::1])
$config['irc_maxretry'] : 5 - Optional; How many connection attempts should be made before giving up
$config['irc_nick'] : LibreNMS - Optional;
$config['irc_pass'] : (empty) - Optional; Sends the IRC-PASS sequence to IRC servers that require a password on connect
$config['irc_port'] : 6667 - Required; To enable SSL, append a + before the port. (Example: +6697)
$config['irc_ctcp'] : false - Optional; Enable/disable CTCP replies from the bot (currently VERSION, PING and TIME).
$config['irc_ctcp_version'] : LibreNMS IRCbot. https://www.librenms.org/ - Optional; Reply string to CTCP VERSION requests
$config['irc_auth'] : (empty) - Optional; Array of hostmasks that are automatically authenticated."},{"location":"Extensions/IRC-Bot/#irc-commands","title":"IRC-Commands","text":"Command / Description:
.auth <User/Token> - If <user>: Request an Auth-Token. If <token>: Authenticate session.
.device <hostname> - Prints basic information about given hostname.
.down - List hostnames that are down, if any.
.help - List available commands.
.join <channel> - Joins <channel> if user has admin-level.
.listdevices - Lists the hostnames of all known devices.
.log [<N>] - Prints N lines or the last line of the eventlog.
.port <hostname> <ifname> - Prints port-related information for ifname on given hostname.
.quit - Disconnect from IRC and exit.
.reload - Reload configuration.
.status <type> - Prints status information for given type. Type can be devices, services, ports. Shorthands are: dev, srv, prt.
.version - Prints $this->config['project_name_version'].
(Note: all commands are case-insensitive, but their arguments are case-sensitive.)
Any client matching one of the first two hostmasks will automatically be authenticated as the "admin" user in LibreNMS, and clients matching the last line will be authenticated as the user "john" in LibreNMS, without using .auth and waiting for a valid token.
The bot is coded in a unified way, which makes writing extensions far less painful. Simply add your command to the $config['irc_external'] directive and create a file called includes/ircbot/command.inc.php containing your code. The string behind the call of .command is passed as $params. The user who requested something is accessible via $this->user. Send your reply/replies via $this->respond($string).
More detailed documentation of the functions and variables available for extensions can be found at IRC-Bot Extensions.
LibreNMS can interpret, display and group certain additional information on ports. This is done based on the format in which the port description is written, although it's possible to customise the parser to be specific to your setup.
By default we ship all metrics to RRD files, either directly or via RRDCached. On top of this you can ship metrics to Graphite, InfluxDB (v1 or v2 API), OpenTSDB or Prometheus. At present you can't use these backends to display graphs within LibreNMS and will need to use something like Grafana.
For further information on configuring LibreNMS to ship data to one of the other backends then please see the documentation below.
If you wish to render info for configured channels for a device, you need to add the various profiles-stat directories your system uses, which for most systems will be as below.
When adding sources to nfsen.conf, it is important to use the hostname that matches what is configured in LibreNMS, because the rrd files NfSen creates are named after the source name (ident), and it doesn't allow you to use an IP address instead. However, if your device is added to LibreNMS by an IP address, add your source with any name of your choice and create a symbolic link to the rrd file.
cd /var/nfsen/profiles-stat/sitea/
ln -s mychannel.rrd librenmsdeviceIP.rrd
external/nfsen
lnms config:set nfsen_split_char '_'
This value tells us what to replace the full stops . in the device's hostname with.
external/nfsen
lnms config:set nfsen_suffix '_yourdomain_com'
The above is a very important bit, as device names in NfSen are limited to 21 characters. This means full domain names for devices can be very problematic to squeeze in, so this chunk is usually removed.
On a similar note, NfSen profiles for channels should be created with the same name.
"},{"location":"Extensions/NFSen/#stats-defaults-and-settings","title":"Stats Defaults and Settings","text":"
Below are the default settings used with nfdump for stats.
For more detailed information on that, please see nfdump(1). The default location for nfdump is /usr/bin/nfdump. If nfdump is located elsewhere, set it with
The above is an array containing the list, for the drop down menu, of how many top items should be returned.
external/nfsen
lnms config:set nfsen_top_default 20
The above sets the default top number to use from the drop down.
external/nfsen
lnms config:set nfsen_stat_default srcip
The above sets the default stat type to use from the drop down.
record   Flow Records
ip       Any IP Address
srcip    SRC IP Address
dstip    DST IP Address
port     Any Port
srcport  SRC Port
dstport  DST Port
srctos   SRC TOS
dsttos   DST TOS
tos      TOS
as       AS
srcas    SRC AS
dstas    DST AS
external/nfsen
lnms config:set nfsen_order_default packets
The above sets the default order type to use from the drop down. Any of the following are currently supported.
flows    Number of total flows for the time period.
packet   Number of total packets for the time period.
bytes    Number of total bytes for the time period.
pps      Packets Per Second
bps      Bytes Per Second
bpp      Bytes Per Packet
external/nfsen
lnms config:set nfsen_last_default 900
The above sets the default 'last' time interval to use from the drop down.
The above associative array contains time intervals for how far back to go. The keys are the length in seconds and the value is just a description to display.
LibreNMS has the ability to show you a dynamic network map based on data collected from devices. These maps are accessed through the following menu options:
Overview -> Maps -> Network
Overview -> Maps -> Device Group Maps
The Neighbours -> Map tab when viewing a single device (the Neighbours tab will only show if a device has xDP neighbours)
These network maps can be based on:
xDP Discovery
MAC addresses (ARP entries matching interface IP and MAC)
By default, both are included, but you can enable / disable either one using the following config option:
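Assuming the key is network_map_items (worth confirming with lnms config:get network_map_items), this might look like:

```shell
# Keep both sources; remove "mac" or "xdp" to disable one of them
lnms config:set network_map_items '["mac", "xdp"]'
```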
Either remove mac or xdp depending on which you want. xDP is based on FDP, CDP and LLDP support, depending on the device type.
It is worth noting that the global map could lead to a large network map that is slow to render and interact with. The network map on the device neighbour page, or building device groups and using the device group maps will be more usable on large networks.
The map display can be configured by altering the Vis JS Options
"},{"location":"Extensions/OAuth-SAML/","title":"OAuth and SAML Support","text":""},{"location":"Extensions/OAuth-SAML/#introduction","title":"Introduction","text":"
LibreNMS has support for Laravel Socialite to simplify the use of OAuth 1 or 2 providers (such as GitHub, Microsoft, Twitter and many more) as well as SAML.
Socialite Providers supports more than 100 third parties, so you will most likely find support for the SAML or OAuth provider you need without too much trouble.
Please do note however, these providers are not maintained by LibreNMS so we cannot add support for new ones and we can only provide you basic help with general configuration. See the Socialite Providers website for more information on adding a new OAuth provider.
Below we will guide you through installing SAML or some of these OAuth providers. You should be able to use these as a guide for installing any others you may need, but please ensure you read the Socialite Providers documentation carefully.
GitHub Provider Microsoft Provider Okta Provider SAML2
Please ensure you set APP_URL within your .env file so that callback URLs work correctly with the identity provider.
Note
Once you have configured your OAuth or SAML2 provider, please ensure you check the Post configuration settings section at the end.
"},{"location":"Extensions/OAuth-SAML/#github-and-microsoft-examples","title":"GitHub and Microsoft Examples","text":""},{"location":"Extensions/OAuth-SAML/#install-plugin","title":"Install plugin","text":"
Note
First we need to install the plugin itself. The plugin name can be slightly different so be sure to check the Socialite Providers documentation and look for this line, composer require socialiteproviders/github which will give you the name you need for the command, i.e: socialiteproviders/github.
GitHubMicrosoftOkta
lnms plugin:add socialiteproviders/github
lnms plugin:add socialiteproviders/microsoft
lnms plugin:add socialiteproviders/okta
"},{"location":"Extensions/OAuth-SAML/#find-the-provider-name","title":"Find the provider name","text":"
Next we need to find the provider name and write it down.
Note
It's almost always the name of the provider in lowercase but can be different so check the Socialite Providers documentation and look for this line, github => [ which will give you the name you need for the above command: github.
So our provider name is okta, write this down."},{"location":"Extensions/OAuth-SAML/#register-oauth-application","title":"Register OAuth application","text":""},{"location":"Extensions/OAuth-SAML/#register-a-new-application","title":"Register a new application","text":"
Now we need some values from the OAuth provider itself; in most cases you need to register a new "OAuth application" at the provider's site. This will vary from provider to provider, but the process itself should be similar to the examples below.
Note
The callback URL is always: https://your-librenms-url/auth/provider/callback It doesn't need to be a publicly available site, but it almost always needs to support TLS (https)!
GitHubMicrosoftOkta
For our example with GitHub we go to GitHub Developer Settings and press "Register a new application":
Fill out the form accordingly (with your own values):
For our example with Microsoft we go to "Azure Active Directory" > "App registrations" and press "New registration"
Fill out the form accordingly (using your own values):
Copy the value of the Application (client) ID and Directory (tenant) ID and save them, you will need them in the next step.
For our example with Okta, we go to Applications>Create App Integration, Select OIDC - OpenID Connect, then Web Application.
Fill in the Name, Logo, and Assignments based on your preferred settings. Leave the Sign-In Redirect URI field as-is; you will edit it later:
Note your Okta domain or login url. Sometimes this can be a vanity url like login.company.com, or sometimes just company.okta.com.
Click save.
"},{"location":"Extensions/OAuth-SAML/#generate-a-new-client-secret","title":"Generate a new client secret","text":"GitHubMicrosoftOkta
Press 'Generate a new client secret' to get a new client secret.
Select Certificates & secrets under Manage. Select the 'New client secret' button. Enter a value in Description and select one of the options for Expires and select 'Add'.
Copy the client secret Value (not Secret ID!) before you leave this page. You will need it in the next step.
This step is done for you when creating the app. All you have to do is copy down the client secret. You will need it in the next step.
Now we need to set the configuration options for your provider within LibreNMS itself. Please replace the values in the examples below with the values you collected earlier:
The format of the configuration string is auth.socialite.configs.*provider name*.*value*
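Following that format, a GitHub example using the Client ID and client secret collected earlier would be (the values shown are placeholders):

```shell
lnms config:set auth.socialite.configs.github.client_id your-client-id
lnms config:set auth.socialite.configs.github.client_secret your-client-secret
```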
Now you are done with setting up the OAuth provider! If it doesn't work, please double check your configuration values by using the config:get command below.
Since most Socialite Providers offer only Authentication, not Authorization, it is possible to set the default User Role for authorized users. Appropriate care should be taken.
none: No Access: User has no access
normal: Normal User: You will need to assign device / port permissions for users at this level.
global-read: Global Read: Read only Administrator.
admin: Administrator: This is a global read/write admin account.
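Assuming the key is auth.socialite.default_role (verify with lnms config:get), setting the default to a read-only role would look like:

```shell
lnms config:set auth.socialite.default_role global-read
```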
Socialite can specify scopes that should be included within the authentication request (see the Laravel docs).
For example, if Okta is configured to expose group information it is possible to use these group names to configure User Roles.
This requires configuration in Okta. You can set the 'Groups claim type' to 'Filter' and supply a regex of which groups should be returned which can be mapped below.
First enable sending the 'groups' claim (along with the normal openid, profile, and email claims). Be aware that the scope name must match the claim name. For identity providers where the scope does not match (e.g. Keycloak: roles -> groups) you need to configure a custom scope.
settings/auth/socialite
lnms config:set auth.socialite.scopes.+ groups\n
Then set up mappings from the returned claim arrays to the User roles you want.
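A hypothetical mapping could look like the following; the exact key layout is an assumption, so verify the keys against the settings/auth/socialite page before using them. Here, members of an IdP group "network-admins" would receive the admin role and "noc" would receive global-read:

```shell
# Key layout is an assumption - verify on settings/auth/socialite
lnms config:set auth.socialite.claims.groups.network-admins.roles '["admin"]'
lnms config:set auth.socialite.claims.groups.noc.roles '["global-read"]'
```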
Depending on what your identity provider (Google, Azure, ...) supports, the configuration could look different from what you see next, so please use this as a rough guide. It is up to the IdP to provide the relevant details that you will need for configuration.
GoogleAzure
Go to https://admin.google.com/ac/apps/unified
Press \"DOWNLOAD METADATA\" and save the file somewhere accessible by your LibreNMS server
ACS URL = https://your-librenms-url/auth/saml2/callback
Entity ID = https://your-librenms-url/auth/saml2
Name ID format = PERSISTENT
Name ID = Basic Information > Primary email
First name = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname
Last name = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname
Primary email = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
"},{"location":"Extensions/OAuth-SAML/#manually-configuring-the-identity-provider-with-a-certificate-string","title":"Manually configuring the Identity Provider with a certificate string","text":"
"},{"location":"Extensions/OAuth-SAML/#manually-configuring-the-identity-provider-with-a-certificate-file","title":"Manually configuring the Identity Provider with a certificate file","text":"
You most likely will need to set SESSION_SAME_SITE_COOKIE=none in .env if you use SAML2! If you get an error with http code 419, you should try to remove SESSION_SAME_SITE_COOKIE=none from your .env.
Note
Don't forget to run lnms config:clear after you modify .env to flush the config cache
If you need to, you can override the redirect URL with the following commands:
OAuthSAML2
Replace github and the relevant URL below with your identity provider details. lnms config:set auth.socialite.configs.github.redirect https://demo.librenms.org/auth/github/callback
From here you can configure the settings for any identity providers you have configured along with some bespoke options.
Redirect Login page: This setting will skip your LibreNMS login and take the end user straight to the first IdP you configured.
Allow registration via provider: If this setting is disabled, new users signing in via the IdP will not be authenticated. This setting allows a local user to be automatically created, which permits their login.
Integrating LibreNMS with Oxidized brings the following benefits:
Config viewing: Current, History, and Diffs all under the Configs tab of each device
Automatic addition of devices to Oxidized: Including filtering and grouping to ease credential management
Configuration searching (Requires oxidized-web 0.8.0 or newer)
First you will need to install Oxidized following their documentation.
Then you can proceed to the LibreNMS web UI and go to Oxidized Settings in the External Settings section of Global Settings. Enable it and enter the URL of your Oxidized instance.
To have devices automatically added, you will need to configure Oxidized to pull them from LibreNMS (see Feeding Oxidized). Note: this means devices will be controlled by the LibreNMS API, not router.db; passwords will still need to be in the Oxidized config file.
LibreNMS will automatically map the OS to the Oxidized model name if they don't match. This means you shouldn't need to use the model_map config option within Oxidized.
This is a straightforward use of Oxidized; it relies on you having a working Oxidized setup which is already taking config snapshots for your devices. When you have that, you only need the following config to enable the display of device configs within the device page itself:
Oxidized supports various ways to supply credentials for logging in to devices: you can specify a global username/password within Oxidized, group-level credentials, or per-device credentials. LibreNMS currently supports sending groups back to Oxidized so that you can define group credentials within Oxidized. To enable this support, switch on 'Enable the return of groups to Oxidized':
external/oxidized
lnms config:set oxidized.group_support true\n
You can set a default group that devices will fall back to with:
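For example (the group name 'default' is an assumption; pick one that exists in your Oxidized config):

```shell
lnms config:set oxidized.default_group 'default'
```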
If you're running SELinux, you'll need to allow httpd to connect outbound to the network, otherwise Oxidized integration in the web UI will silently fail:
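On SELinux systems this is typically done with the following boolean (run as root):

```shell
# Allow httpd (serving the LibreNMS web UI) to make outbound network connections
setsebool -P httpd_can_network_connect 1
```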
Oxidized has support for feeding devices into it via an API call, support for Oxidized has been added to the LibreNMS API. A sample config for Oxidized is provided below.
You will need to configure default credentials for your devices in the Oxidized config, LibreNMS doesn't provide login credentials at this time.
LibreNMS is able to reload the Oxidized list of nodes, each time a device is added to LibreNMS. To do so, edit the option in Global Settings>External Settings>Oxidized Integration or add the following to your config.
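The equivalent CLI setting is the following (key name per the current config schema; confirm it in Global Settings if it does not take effect):

```shell
lnms config:set oxidized.reload_nodes true
```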
To return an override to Oxidized you can do this by providing the override key, followed by matching a lookup for a host (or hosts), and finally by defining the overriding value itself. LibreNMS does not check for the validity of these attributes but will deliver them to Oxidized as defined.
Matching of hosts can be done using hostname, sysname, os, location, sysDescr, hardware, purpose or notes and including either a 'match' key and value, or a 'regex' key and value. The order of matching is:
hostname
sysName
sysDescr
hardware
os
location
ip
purpose
notes
To match on the device hostnames or sysNames that contain 'lon-sw' or if the location contains 'London' then you would set the following:
This allows extending the configuration further by providing a completely flexible model for custom flags and settings, for example, below shows the ability to add an ssh_proxy host within Oxidized simply by adding the below to your configuration:
Or of course, any custom value that could be needed or wanted can be applied, for example, setting a \"myAttribute\" to \"Super cool value\" for any configured and enabled \"routeros\" device.
If you have devices which you do not wish to appear in Oxidized then you can edit those devices in Device -> Edit -> Misc and enable \"Exclude from Oxidized?\"
Custom SSH and telnet ports can be set in the device settings Misc tab and passed on to Oxidized with the following vars_map:
Using the Oxidized REST API and Syslog Hooks, Oxidized can trigger configuration downloads whenever a configuration change event has been logged. An example script to do this is included in ./scripts/syslog-notify-oxidized.php. Oxidized can spawn a new worker thread and perform the download immediately with the following configuration
You can perform basic validation of the Oxidized configuration by going to the Overview -> Tools -> Oxidized link and in the Oxidized config validation page, paste your yaml file into the input box and click 'Validate YAML'.
We check for yaml syntax errors and also actual config values to ensure they are used in the correct location.
"},{"location":"Extensions/Oxidized/#accessing-configuration-of-a-disabledremoved-device","title":"Accessing configuration of a disabled/removed device","text":"
When you're disabling or removing a device from LibreNMS, the configuration will no longer be available via the LibreNMS web interface. You can gain access to these configurations directly in the Git repository of Oxidized (if using Git for version control).
1: Check in your Oxidized config where your Git repositories are stored:
/home/oxidized/.config/oxidized/config\n
2: Go to the correct Git repository for the needed device (the .git one) and get the list of devices using this command:
git ls-files -s\n
3: Save the object ID of the device, and run the command to get the file content:
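The retrieval command is `git cat-file blob <object-id>`. As a self-contained sketch (using a throwaway repository so the commands can be run anywhere; in practice you run only the last command inside the Oxidized repo):

```shell
set -e
# Throwaway demo repo standing in for the Oxidized configs repository
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo "hostname router1" > device.cfg
git add device.cfg
git -c user.email=demo@example.com -c user.name=demo commit -qm init

# Step 2: list files with their object IDs
git ls-files -s

# Step 3: fetch the file content by its object ID
oid=$(git ls-files -s | awk '{print $2}')
git cat-file blob "$oid"
```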
LibreNMS has integration with PeeringDB to match up your BGP sessions with the peering exchanges you are connected to.
To enable the integration please do so within the WebUI
external/peeringdb
lnms config:set peeringdb.enabled true\n
Data will be collated the next time daily.sh is run or you can manually force this by running php daily.php -f peeringdb, the initial collection is delayed for a random amount of time to avoid overloading the PeeringDB API.
Once enabled you will have an additional menu item under Routing -> PeeringDB
"},{"location":"Extensions/Plugin-System/","title":"Developing for the Plugin System","text":"
With plugins you can extend LibreNMS with special functions that are specific to your setup or are not relevant or interesting for all community members.
You can hook into defined places in the behavior of the website without causing problems with future updates.
This documentation will give you a basis for writing a plugin for LibreNMS. An example plugin is included in the LibreNMS distribution.
"},{"location":"Extensions/Plugin-System/#version-2-plugin-system-structure","title":"Version 2 Plugin System structure","text":"
Plugins in version 2 need to be installed into app/Plugins
Note: Plugins are disabled when they have an error; to show errors instead, set plugins.show_errors
The above structure is checked before a plugin can be installed.
All file/folder names are case sensitive and must match the structure.
Only the blade files you actually need have to be created. The plugin manager will then load a hook with basic functionality.
If you want to customize the basic behavior of the hooks, you can create a class in 'app/Plugins/PluginName' and overload the hook methods.
device-overview.blade.php :: This is called in the Device Overview page. You receive $device as an object by default; you can do your work here and display your results in a frame.
port-tab.blade.php :: This is called in the Port page, in the \"Plugins\" menu_option that will appear when your plugin gets enabled. In this blade, you can do your work and display your results in a frame.
menu.blade.php :: For a menu entry
page.blade.php :: This is a good place to add your own LibreNMS page that does not depend on a device, for example your own lists with special requirements and behavior.
settings.blade.php :: If you need your own settings and variables, you can have a look in the ExamplePlugin.
PHP code should run inside your hooks method and not your blade view. The built in hooks support authorize and data methods.
These methods are called with dependency injection. Hooks with relevant database models will include those models in the calls. Additionally, the settings argument may be included to inject the plugin settings into the method.
You can override the data method to supply data to your view. Do any processing here; for example, you can access the database or configuration settings.
In the data method below we inject settings to count how many there are, for display in the menu entry blade view. Note that you must specify a default value (= [] here) for any argument that doesn't exist on the parent method.
class Menu extends MenuEntryHook\n{\n public function data(array $settings = []): array\n {\n return [\n 'count' => count($settings),\n ];\n }\n}\n
By default hooks are always shown, but you may control when the user is authorized to view the hook content.
As an example, you could imagine that the device-overview.blade.php should only be displayed when the device is in maintenance mode and the current user has the admin role.
class DeviceOverview extends DeviceOverviewHook\n{\n public function authorize(User $user, Device $device): bool\n {\n return $user->can('admin') && $device->isUnderMaintenance();\n }\n}\n
You may create a full plugin that can publish multiple routes, views, database migrations and more. Create a package according to the Laravel documentation; you may call any of the supported hooks to tie into LibreNMS.
https://laravel.com/docs/packages
This is untested, so please come to Discord and share any experiences and update this documentation!
"},{"location":"Extensions/Plugin-System/#version-1-plugin-system-structure-legacy-version","title":"Version 1 Plugin System structure (legacy version)","text":"
The above structure is checked before a plugin can be installed.
All files / folder names are case sensitive and must match.
PluginName - This is a directory and needs to be named as per the plugin you are creating.
PluginName.php :: This file is used to process calls into the plugin from the main LibreNMS install. Here only functions within the class for your plugin that LibreNMS calls will be executed. For a list of currently enabled system hooks, please see further down. The minimum code required in this file is (replace Test with the name of your plugin):
<?php\n\nclass Test {\n}\n\n?>\n
PluginName.inc.php :: This file is the main included file when browsing to the plugin itself. You can use this to display / edit / remove whatever you like. The minimum code required in this file is:
System hooks are called as functions within your plugin class. The following system hooks are currently available:
menu() :: This is called to build the plugin menu system and you can use this to link to your plugin (you don't have to).
public static function menu() {\n echo('<li><a href=\"plugin/p='.get_class().'\">'.get_class().'</a></li>');\n }\n
device_overview_container($device) :: This is called in the Device Overview page. You receive the $device as a parameter, can do your work here and display your results in a frame.
public static function device_overview_container($device) {\n echo('<div class=\"container-fluid\"><div class=\"row\"> <div class=\"col-md-12\"> <div class=\"panel panel-default panel-condensed\"> <div class=\"panel-heading\"><strong>'.get_class().' Plugin </strong> </div>');\n echo(' Example plugin in \"Device - Overview\" tab <br>');\n echo('</div></div></div></div>');\n }\n
port_container($device, $port) :: This is called in the Port page, in the \"Plugins\" menu_option that will appear when your plugin gets enabled. In this function, you can do your work and display your results in a frame.
public static function port_container($device, $port) {\n echo('<div class=\"container-fluid\"><div class=\"row\"> <div class=\"col-md-12\"> <div class=\"panel panel-default panel-condensed\"> <div class=\"panel-heading\"><strong>'.get_class().' plugin in \"Port\" tab</strong> </div>');\n echo ('Example display in Port tab</br>');\n echo('</div></div></div></div>');\n }\n
It is possible to create graphs of the Proxmox VMs that run on your monitored machines. Currently, only traffic graphs are created. One for each interface on each VM. Possibly, IO graphs will be added later on.
The ultimate goal is to be able to create traffic bills for VMs, no matter on which physical machine that VM runs.
Then in LibreNMS activate the librenms-agent and the proxmox application flag for the device you are monitoring. You should now see an application in LibreNMS, as well as a new menu item in the top menu, allowing you to choose which cluster you want to look at.
"},{"location":"Extensions/Proxmox/#note-if-you-want-to-use-use-xinetd-instead-of-systemd","title":"Note, if you want to use use xinetd instead of systemd","text":"
It's possible to run the librenms-agent from xinetd instead of systemd. One use case is if you are forced to use an old Proxmox installation. After installing the librenms-agent (see above), copy and enable the xinetd config, then restart the xinetd service:
"},{"location":"Extensions/RRDCached/","title":"Setting up RRDCached","text":"
This document will explain how to set up RRDCached for LibreNMS.
Since version 1.5, rrdtool/rrdcached supports creating RRD files over rrdcached. If you have rrdcached 1.5.5 or above, you can also tune over rrdcached. To enable this, set the following config:
poller/rrdtool
lnms config:set rrdtool_version '1.5.5'\n
This setting has to be the exact version of rrdtool you are running.
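You can check the installed version with `rrdtool --version`; for example, extracting just the version number from its first line (the banner text below is illustrative):

```shell
# Parse the version number out of rrdtool's banner line.
# In practice, replace the echo with: rrdtool --version | head -n 1
banner="RRDtool 1.7.2  Copyright by Tobias Oetiker <tobi@oetiker.ch>"
version=$(echo "$banner" | awk '{print $2}')
echo "$version"
```

You can then pass that value to `lnms config:set rrdtool_version`.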
NOTE: This feature requires your client version of rrdtool to be 1.5.5 or newer, in addition to your rrdcached version.
"},{"location":"Extensions/RRDCached/#distributed-poller-support-matrix","title":"Distributed Poller Support Matrix","text":"
Shared FS: Is a shared filesystem required?
Features: Supported features in the version indicated.
Check to see if the graphs are being drawn in LibreNMS. This might take a few minutes. After at least one poll cycle (5 mins), check the LibreNMS disk I/O performance delta. Disk I/O can be found under the menu Devices>All Devices>[localhost hostname]>Health>Disk I/O.
Depending on many factors, you should see the Ops/sec drop by ~30-40%.
According to the man page, under \"SECURITY CONSIDERATIONS\", rrdcached has no authentication or security except for running under a unix socket. If you choose to use a network socket instead of a unix socket, you will need to secure your rrdcached installation. To do so you can proxy rrdcached using nginx to allow only specific IPs to connect.
Using the same setup as above with nginx version 1.9.0 or later, you can follow these steps to proxy the default rrdcached port to the local unix socket.
(You can use ./conf.d for your configuration as well)
mkdir /etc/nginx/streams-{available,enabled}
add the following to your nginx.conf file:
#/etc/nginx/nginx.conf\n...\nstream {\n include /etc/nginx/streams-enabled/*;\n}\n
Replace $LibreNMS_IP with the IP address of the server that will be using rrdcached. You can specify more than one allow statement. This will bind nginx to TCP 42217 (the default rrdcached port), allow the specified IPs to connect, and deny all others.
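As an illustration, the /etc/nginx/streams-available/rrd file could look like the following; the unix socket path is an assumption, so match it to where your rrdcached actually listens:

```nginx
# /etc/nginx/streams-available/rrd
server {
    listen 42217;
    allow $LibreNMS_IP;   # replace with your poller's IP; add more allow lines as needed
    deny all;
    proxy_pass unix:/run/rrdcached/rrdcached.sock;   # assumed socket path
}
```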
Next, we'll symlink the config to streams-enabled: ln -s /etc/nginx/streams-{available,enabled}/rrd
When we create RRD files for ports, we currently do so with a max value of 12500000000 (100G). Because of this, if a device sends back bad data, it can appear as though a 100M port is doing 40G+, which is impossible. To counter this, you can enable the rrdtool tune option, which will fix the max value to the interface's physical speed (minimum of 10M).
You can enable this in three ways:
Globally under Global Settings -> Poller -> Datastore: RRDTool
For the actual device, Edit Device -> Misc
For each port, Edit Device -> Port Settings
Now when a port interface speed changes (this can happen because of a physical change or just because the device has misreported) the max value is set. If you don't want to wait until a port speed changes then you can run the included script:
./scripts/tune_port.php -h <hostname> -p <ifName>
Wildcards are supported using *, i.e:
./scripts/tune_port.php -h local* -p eth*
This script will then perform the rrdtool tune on each port found using the provided ifSpeed for that port.
LibreNMS can generate a list of hosts to be monitored by RANCID. We assume you already have a running RANCID and just need to create and update the file 'router.db'.
To generate the config file (you may even want to add a cron job to schedule this), we've assumed a few things about the RANCID locations, the name of the config file, and where LibreNMS is installed:
cd /opt/librenms/scripts/\nphp ./gen_rancid.php > /the/path/where/is/rancid/core/router.db\n
Test config: sudo /usr/lib/rancid/bin/clogin -f /var/lib/rancid/.cloginrc <device hostname>
NOTE: If you run into a 'diffie-hellman' kind of error, it is because your Linux distro is using newer encryption methods. This is letting you know that the device you tested is running an outdated key exchange algorithm. We recommend updating the downstream device if possible. If not, the following should fix it:
sudo vi /etc/ssh/ssh_config
Add:
KexAlgorithms diffie-hellman-group1-sha1
Re-try logging into your device again
Upon success, run rancid:
sudo su -c /var/lib/rancid/bin/rancid-run -s /bin/bash -l rancid
If you have machines that you want to monitor but are not reachable directly, you can use SNMPD Proxy. This will use the reachable SNMPD to proxy requests to the unreachable SNMPD.
'hereweare.example.com'. Use the following config:
On 'hereweare.example.com':
view all included .1\n com2sec -Cn ctx_unreachable readonly <poller-ip> unreachable\n access MyROGroup ctx_unreachable any noauth prefix all none none\n proxy -Cn ctx_unreachable -v 2c -c private unreachable.example.com .1.3\n
On 'unreachable.example.com':
view all included .1 80\n com2sec readonly <hereweare.example.com ip address> private\n group MyROGroup v1 readonly\n group MyROGroup v2c readonly\n group MyROGroup usm readonly\n access MyROGroup \"\" any noauth exact all none none\n
You can now poll community 'private' on 'unreachable.example.com' via community 'unreachable' on host 'hereweare.example.com'. Please note that requests on 'unreachable.example.com' will be coming from 'hereweare.example.com', not your poller.
Currently, LibreNMS supports a lot of trap handlers. You can check them on GitHub here. To add more see Adding new SNMP Trap handlers. Traps are handled via snmptrapd.
snmptrapd is an SNMP application that receives and logs SNMP TRAP and INFORM messages.
The default is to listen on UDP port 162 on all IPv4 interfaces. Since 162 is a privileged port, snmptrapd must typically be run as root.
Make the folder /etc/systemd/system/snmptrapd.service.d/ and edit the file /etc/systemd/system/snmptrapd.service.d/mibs.conf and add the following content.
You may want to tweak to add vendor directories for devices you care about. In the example below, standard and cisco directories are defined, and only IF-MIB is loaded.
In Ubuntu 18, the service is located by default at /etc/systemd/system/multi-user.target.wants/snmptrapd.service
Here is a list of snmptrapd options:
-a :: Ignore authenticationFailure traps. [OPTIONAL]
-f :: Do not fork from the shell.
-n :: Use numeric addresses instead of attempting hostname lookups (no DNS). [OPTIONAL]
-m MIBLIST :: Use MIBLIST (FILE1-MIB:FILE2-MIB). ALL = load all MIBs in DIRLIST (usually fails).
-M DIRLIST :: Use DIRLIST as the list of locations to look for MIBs. The option is not recursive, so you need to specify each directory individually, separated by :. (For example: /opt/librenms/mibs:/opt/librenms/mibs/cisco:/opt/librenms/mibs/edgecos)
Good practice is to avoid -m ALL because then it will try to load all the MIBs in DIRLIST, which will typically fail (snmptrapd cannot load that many mibs). Better is to specify the exact MIB files defining the traps you are interested in, for example for LinkDown and LinkUp as well as BGP traps, use -m IF-MIB:BGP4-MIB. Multiple files can be added, separated with :.
If you want to test, or store the original traps in a log file:
Create a folder for storing traps, which will be logged to the file traps.log:
sudo mkdir /var/log/snmptrap\n
Add the following config to your snmptrapd.service after ExecStart=/usr/sbin/snmptrapd -f -m ALL -M /opt/librenms/mibs
-tLf /var/log/snmptrap/traps.log\n
On SELinux, you need to configure SELinux for SNMPd to communicate to LibreNMS:
cat > snmptrap.te << EOF\nmodule snmptrap 1.0;\n\nrequire {\n type httpd_sys_rw_content_t;\n type snmpd_t;\n class file { append getattr open read };\n class capability dac_override;\n}\n\n#============= snmpd_t ==============\n\nallow snmpd_t httpd_sys_rw_content_t:file { append getattr open read };\nallow snmpd_t self:capability dac_override;\nEOF\ncheckmodule -M -m -o snmptrap.mod snmptrap.te\nsemodule_package -o snmptrap.pp -m snmptrap.mod\nsemodule -i snmptrap.pp\n
After successfully configuring the service, reload service files, enable, and start the snmptrapd service:
The easiest test is to generate a trap from your device. Usually, changing the configuration on a network device, or plugging/unplugging a network cable (LinkUp, LinkDown), will generate a trap. You can confirm it using tcpdump, tshark, or Wireshark.
You can also generate a trap using the snmptrap command from the LibreNMS server itself (if and only if the LibreNMS server is monitored).
"},{"location":"Extensions/SNMP-Trap-Handler/#how-to-send-snmp-v2-trap","title":"How to send SNMP v2 Trap","text":"
"},{"location":"Extensions/SNMP-Trap-Handler/#why-we-need-uptime","title":"Why we need Uptime","text":"
When you send a trap, it must of course conform to a set of standards. Every trap needs an uptime value. Uptime is how long the system has been running since boot. Sometimes this is the operating system, other devices might use the SNMP engine uptime. Regardless, a value will be sent.
So what value should you type in the commands below? Oddly enough, simply supplying no value by using two single quotes '' will instruct the command to obtain the value from the operating system you are executing this on.
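For example, a minimal v2c test trap could look like the following; the receiver address 192.0.2.10 and the varbind values are illustrative only:

```shell
# '' lets snmptrap obtain the uptime from the local system automatically
snmptrap -v 2c -c public 192.0.2.10 '' IF-MIB::linkDown \
    IF-MIB::ifIndex.1 i 1 \
    IF-MIB::ifAdminStatus.1 i 2 \
    IF-MIB::ifOperStatus.1 i 2
```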
You can configure generic event logging for snmp traps. This will log an event of the type trap for received traps. These events can be used for alerting. By default, only the TrapOID is logged. But you can enable the \"detailed\" variant, and all the data received with the trap will be logged.
The parameter can be found in General Settings / External / SNMP Traps Integration.
Services within LibreNMS provide the ability to leverage Nagios plugins to perform additional monitoring outside of SNMP. Services can also be used in conjunction with your SNMP monitoring for broader monitoring functionality.
"},{"location":"Extensions/Services/#setting-up-services","title":"Setting up Services","text":"
Services must be tied to a device to function properly. A good generic option is to use localhost, but it is suggested to attach the check to the device you are monitoring.
Note: Plugins will only load if they are prefixed with check_. The check_ prefix is stripped out when displaying in the \"Add Service\" GUI \"Type\" dropdown list.
Service Templates within LibreNMS provide the same ability as Nagios Host Groups, known as Device Groups in LibreNMS. They are applied to devices that belong to the specified Device Group.
Use the Apply buttons to manually create or update Services for the Service Template. Use the Remove buttons to manually remove Services for the Service Template.
After you Edit a Service Template, and then use Apply, all relevant changes are pushed to existing Services previously created.
You can also enable Service Templates Auto Discovery to have Services added / removed / updated on regular discover intervals.
When a Device is a member of multiple Device Groups, templates from all of those Device Groups are applied.
If a Device is added or removed from a Device Group, when the Apply button is used or Auto Discovery runs Services will be added / removed as appropriate.
Service Templates are tied to Device Groups; you need at least one Device Group to be able to add Service Templates (you can define a dummy one). The Device Group does not need members to add Service Templates.
"},{"location":"Extensions/Services/#service-auto-discovery","title":"Service Auto Discovery","text":"
To automatically create services for devices with available checks.
You need to enable service discovery within config.php with the following:
$config['discover_services'] = true;\n
"},{"location":"Extensions/Services/#service-templates-auto-discovery","title":"Service Templates Auto Discovery","text":"
To automatically create services for devices with configured Service Templates.
You need to enable service template discovery within config.php with the following:
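A config.php fragment along these lines should enable it (the key name follows the pattern of discover_services; confirm it against the current config schema):

```php
$config['discover_services_templates'] = true;
```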
Service checks are now distributable if you run a distributed setup. To leverage this, use the dispatch service. Alternatively, you could also replace check-services.php with services-wrapper.py in cron instead to run across all polling nodes.
If you need to debug the output of services-wrapper.py then you can add -d to the end of the command - it is NOT recommended to do this in cron.
Now you can add services via the main Services link in the navbar, or via the 'Add Service' link on the device's Services page.
Note that some services (procs, inodes, load and similar) will always poll the local LibreNMS server it's running on, regardless of which device you add it to.
By default, the check-services script will collect all performance data that the Nagios script returns and display each datasource on a separate graph. LibreNMS expects scripts to return using Nagios convention for the response message structure: AEN200
However for some modules it would be better if some of this information was consolidated on a single graph. An example is the ICMP check. This check returns: Round Trip Average (rta), Round Trip Min (rtmin) and Round Trip Max (rtmax). These have been combined onto a single graph.
If you find a check script that would benefit from having some datasources graphed together, please log an issue on GitHub with the debug information from the script, and let us know which DS's should go together. Example below:
./check-services.php -d\n -- snip --\n Nagios Service - 26\n Request: /usr/lib/nagios/plugins/check_icmp localhost\n Perf Data - DS: rta, Value: 0.016, UOM: ms\n Perf Data - DS: pl, Value: 0, UOM: %\n Perf Data - DS: rtmax, Value: 0.044, UOM: ms\n Perf Data - DS: rtmin, Value: 0.009, UOM: ms\n Response: OK - localhost: rta 0.016ms, lost 0%\n Service DS: {\n \"rta\": \"ms\",\n \"pl\": \"%\",\n \"rtmax\": \"ms\",\n \"rtmin\": \"ms\"\n }\n OK u:0.00 s:0.00 r:40.67\n RRD[update /opt/librenms/rrd/localhost/services-26.rrd N:0.016:0:0.044:0.009]\n -- snip --\n
A service check is skipped when the associated device is not pingable, and an appropriate entry is added to the event log. A service check is still polled if its IP address parameter differs from the associated device's IP address, even when the associated device is not pingable.
To override the default logic and always poll service checks, you can disable ICMP testing for any device by switching Disable ICMP Test setting (Edit -> Misc) to ON.
Service checks will never be polled on disabled devices.
In most cases, only Nagios plugins that run against a remote host with the -H option are available as services. However, if your remote host is running the Check_MK agent, you may be able to use MRPE to run locally executing Nagios plugins as services.
For example, consider the fairly common check_cpu.sh Nagios plugin. If you added..
...to /etc/check_mk/mrpe.cfg on your remote host, you should be able to check its output by configuring a service using the check_mrpe script.
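A hypothetical mrpe.cfg entry for that check might look like this (the plugin path and thresholds are assumptions; the leading name must match what you pass via -a):

```
cpu_check /usr/lib/nagios/plugins/check_cpu.sh -w 80 -c 90
```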
Add check_mrpe to the Nagios plugins directory on your LibreNMS server and make it executable.
In LibreNMS, add a new service to the desired device with the type mrpe.
Enter the IP address of the remote host and in parameters enter -a cpu_check (this should match the name used at the beginning of the line in the mrpe.cfg file).
All installation steps assume a clean configuration - if you have an existing smokeping setup, you'll need to adapt these steps somewhat.
"},{"location":"Extensions/Smokeping/#install-and-integrate-smokeping-backend-rhel-centos-and-alike","title":"Install and integrate Smokeping Backend - RHEL, CentOS and alike","text":"
Smokeping is available via EPEL, which if you're running LibreNMS, you probably already have. If you want to do something like run Smokeping on a separate host and ship data via RRDCached though, here's the install command:
Once installed, you will need a cron script to make sure that the configuration file is kept updated. You can find an example in misc/librenms-smokeping-rhel.example. Put this into /etc/cron.d/hourly and mark it executable:
*** Targets ***\n\nprobe = FPing\n\nmenu = Top\ntitle = Network Latency Grapher\nremark = Welcome to the SmokePing website of <b>Insert Company Name Here</b>. \\\n Here you will learn all about the latency of our network.\n\n@include /etc/smokeping/librenms-targets.conf\n
Note there may be other stanzas (possibly *** Slaves ***) between the *** Probes *** and *** Targets *** stanzas; leave these intact.
Leave everything else untouched. If you need to add other configuration, make sure it comes after the LibreNMS configuration, and keep in mind that Smokeping does not allow duplicate modules, and cares about the configuration file sequence.
Once you're happy, manually kick off the cron once, then enable and start smokeping:
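For example (service name assumed to match the EPEL package):

```shell
# run the cron script installed earlier once by hand, then:
systemctl enable smokeping
systemctl start smokeping
```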
"},{"location":"Extensions/Smokeping/#install-and-integrate-smokeping-backend-ubuntu-debian-and-alike","title":"Install and integrate Smokeping Backend - Ubuntu, Debian and alike","text":"
Smokeping is available via the default repositories.
sudo apt-get install smokeping\n
Once installed, you will need a cron script installed to make sure that the configuration file is kept up to date. You can find an example in misc/librenms-smokeping-debian.example. Put this into /etc/cron.d/hourly, and mark it executable:
Strip everything from /etc/smokeping/config.d/Targets and replace with:
*** Targets ***\n\nprobe = FPing\n\nmenu = Top\ntitle = Network Latency Grapher\nremark = Welcome to the SmokePing website of <b>Insert Company Name Here</b>. \\\n Here you will learn all about the latency of our network.\n\n@include /etc/smokeping/config.d/librenms-targets.conf\n
Leave everything else untouched. If you need to add other configuration, make sure it comes after the LibreNMS configuration, and keep in mind that Smokeping does not allow duplicate modules, and cares about the configuration file sequence.
"},{"location":"Extensions/Smokeping/#configure-librenms-all-operating-systems","title":"Configure LibreNMS - All Operating Systems","text":"
dir should match the location that smokeping writes RRDs to
pings should match the default smokeping value, default 20
probes should be the number of processes to spread pings over, default 2
These settings can also be set in the Web UI.
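These settings map onto smokeping.* config keys; a sketch using lnms (key names assumed from the setting names above - verify against your version):

```shell
lnms config:set smokeping.dir /var/lib/smokeping
lnms config:set smokeping.pings 20
lnms config:set smokeping.probes 2
```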
"},{"location":"Extensions/Smokeping/#configure-smokepings-web-ui-optional","title":"Configure Smokeping's Web UI - Optional","text":"
This section covers the required web server configuration for either Apache or Nginx.
LibreNMS does not need the Web UI - you can find the graphs in LibreNMS on the latency tab.
"},{"location":"Extensions/Smokeping/#apache-configuration-ubuntu-debian-and-alike","title":"Apache Configuration - Ubuntu, Debian and alike","text":"
Edit the General configuration file's Owner and contact, and cgiurl hostname details:
After creating the symlink, restart Apache with sudo systemctl restart apache2
You should be able to load the Smokeping web interface at http://yourhost/cgi-bin/smokeping.cgi
"},{"location":"Extensions/Smokeping/#nginx-configuration-rhel-centos-and-alike","title":"Nginx Configuration - RHEL, CentOS and alike","text":"
This section assumes you have configured LibreNMS with Nginx as specified in Configure Nginx.
Note: you need to install fcgiwrap so the CGI wrapper can interact with Nginx
yum install fcgiwrap\n
Then create a new configuration file for fcgiwrap in /etc/nginx/fcgiwrap.conf
# Include this file on your nginx.conf to support debian cgi-bin scripts using\n# fcgiwrap\nlocation /cgi-bin/ {\n # Disable gzip (it makes scripts feel slower since they have to complete\n # before getting gzipped)\n gzip off;\n\n # Set the root to /usr/lib (inside this location this means that we are\n # giving access to the files under /usr/lib/cgi-bin)\n #root /usr/lib;\n root /usr/share/nginx;\n\n # Fastcgi socket\n fastcgi_pass unix:/var/run/fcgiwrap.socket;\n\n # Fastcgi parameters, include the standard ones\n include /etc/nginx/fastcgi_params;\n\n # Adjust non standard parameters (SCRIPT_FILENAME)\n fastcgi_param SCRIPT_FILENAME /usr/lib$fastcgi_script_name;\n} \n
Be sure to create the cgi-bin folder with the required permissions (755)
mkdir /usr/share/nginx/cgi-bin\n
Create fcgiwrap systemd service in /usr/lib/systemd/system/fcgiwrap.service
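A minimal unit sketch (binary path, flags and the nginx user are assumptions - check your fcgiwrap package):

```
[Unit]
Description=Simple CGI server
After=network.target

[Service]
# -c sets the number of worker children; the socket path matches
# the fastcgi_pass directive in the Nginx config above
ExecStart=/usr/sbin/fcgiwrap -c 4 -s unix:/var/run/fcgiwrap.socket
User=nginx
Group=nginx

[Install]
WantedBy=multi-user.target
```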
If images/js/css don't load, you might have to add
location ^~ /smokeping/css {\n alias /usr/share/smokeping/htdocs/css/;\n gzip off;\n}\nlocation ^~ /smokeping/js {\n alias /usr/share/smokeping/htdocs/js/;\n gzip off;\n}\nlocation ^~ /smokeping/images {\n alias /opt/librenms/rrd/smokeping/images;\n gzip off;\n}\n
After saving the configuration file, verify your Nginx configuration file syntax is OK with sudo nginx -t, then restart Nginx with sudo systemctl restart nginx
You should be able to load the Smokeping web interface at http://yourlibrenms/smokeping
"},{"location":"Extensions/Smokeping/#nginx-configuration-ubuntu-debian-and-alike","title":"Nginx Configuration - Ubuntu, Debian and alike","text":"
This section assumes you have configured LibreNMS with Nginx as specified in Configure Nginx.
Note: you need to install fcgiwrap so the CGI wrapper can interact with Nginx
apt install fcgiwrap\n
Then configure Nginx with the default configuration
After saving the configuration file, verify your Nginx configuration file syntax is OK with sudo nginx -t, then restart Nginx with sudo systemctl restart nginx
You should be able to load the Smokeping web interface at http://yourlibrenms/smokeping
You can use the purpose-made htpasswd utility included in the apache2-utils package (Nginx password files use the same format as Apache). You can install it on Ubuntu with
apt install apache2-utils\n
After that you need to create a password for your user
htpasswd -c /etc/nginx/.htpasswd USER\n
You can verify your user and password with
cat /etc/nginx/.htpasswd\n
Then you just need to add the auth_basic parameters to your config
location ^~ /smokeping/ {\n alias /usr/share/smokeping/www/;\n index smokeping.cgi;\n gzip off;\n auth_basic \"Private Property\";\n auth_basic_user_file /etc/nginx/.htpasswd;\n }\n
There is a problem writing to the RRD directory. This is somewhat out of scope of LibreNMS, but make sure that file permissions and SELinux labels allow the smokeping user to write to the directory.
If you're using RRDCached, make sure that the permissions are correct there too, and that, if you're using -B, the smokeping RRDs are inside the base directory; update the smokeping RRD directory if required.
It's not recommended to run RRDCached without the -B switch.
"},{"location":"Extensions/Smokeping/#share-rrdcached-with-librenms","title":"Share RRDCached with LibreNMS","text":"
Move the RRDs and give smokeping access rights to the LibreNMS RRD directory:
If you have SELinux on, see next section before starting smokeping. Finally restart the smokeping service:
sudo systemctl start smokeping\n
Remember to update your config with the new locations.
"},{"location":"Extensions/Smokeping/#configure-selinux-to-allow-smokeping-to-write-in-librenms-directory-on-centos-rhel","title":"Configure SELinux to allow smokeping to write in LibreNMS directory on Centos / RHEL","text":"
If you are using RRDCached with the -B switch and smokeping RRD's inside the LibreNMS RRD base directory, you can install this SELinux profile:
"},{"location":"Extensions/Smokeping/#probe-fping-missing-missing-from-the-probes-section","title":"Probe FPing missing missing from the probes section","text":"
Take a look at the instructions again - something isn't correct in your configuration.
"},{"location":"Extensions/Smokeping/#section-or-variable-already-exists","title":"Section or variable already exists","text":"
Most likely, content wasn't fully removed from the *** Probes *** / *** Targets *** stanzas as instructed. If you're trying to integrate LibreNMS, smokeping and another source of configuration, you're probably trying to redefine a module (e.g. '+ FPing' more than once) or stanza. Otherwise, look again at the instructions.
"},{"location":"Extensions/Smokeping/#mandatory-variable-probe-not-defined","title":"Mandatory variable 'probe' not defined","text":"
The target block must have a default probe. If you follow the instructions you will have one. If you're trying to integrate LibreNMS, smokeping and another source of configuration, you need to make sure there are no duplicate or missing definitions.
"},{"location":"Extensions/Smokeping/#file-usrsbinsendmail-does-not-exist","title":"File '/usr/sbin/sendmail' does not exist`","text":"
If you got this error at the end of the installation, simply edit or comment out the sendmail entry in the configuration:
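For example (the file location varies by distribution, e.g. /etc/smokeping/config on RHEL or /etc/smokeping/config.d/pathnames on Debian):

```shell
# comment out the sendmail entry, or point it at a valid MTA:
# sendmail = /usr/sbin/sendmail
```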
To run LibreNMS under a subdirectory on your Apache server, the directives for the LibreNMS directory are placed in the base server configuration, or in a virtual host container of your choosing. If using a virtual host, place the directives in the file where the virtual host is configured. If using the base server on RHEL distributions (CentOS, Scientific Linux, etc.) the directives can be placed in /etc/httpd/conf.d/librenms.conf. For Debian distributions (Ubuntu, etc.) place the directives in /etc/apache2/sites-available/default.
#These directives can be inside a virtual host or in the base server configuration\nAllowEncodedSlashes On\nAlias /librenms /opt/librenms/html\n\n<Directory \"/opt/librenms/html\">\n AllowOverride All\n Options FollowSymLinks MultiViews\n</Directory>\n
The RewriteBase directive in html/.htaccess must be rewritten to reference the subdirectory name. Assuming LibreNMS is running at http://example.com/librenms/, you will need to change RewriteBase / to RewriteBase /librenms.
Finally, set APP_URL=/librenms/ in .env and lnms config:set base_url '/librenms/'.
This section explains different ways to receive and process syslog with LibreNMS. Except for Graylog, all syslog variants store their logs in the LibreNMS database. You need to enable the syslog extension in config.php:
$config['enable_syslog'] = 1;\n
A syslog integration gives you a centralized view of information within LibreNMS (device view, traps, events). Furthermore, you can trigger alerts based on syslog messages (see rule collections)."},{"location":"Extensions/Syslog/#traditional-syslog-server","title":"Traditional Syslog server","text":""},{"location":"Extensions/Syslog/#syslog-ng","title":"syslog-ng","text":"Debian / UbuntuCentOS / RedHat
apt-get install syslog-ng-core\n
yum install syslog-ng\n
Once syslog-ng is installed, create the config file (/etc/syslog-ng/conf.d/librenms.conf) and paste the following:
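The shipped example is along these lines; treat this as a sketch and compare with the current example in the LibreNMS repository, as ports and the template field order may have changed:

```
source s_net {
  tcp(port(514));
  udp(port(514));
};

destination d_librenms {
  program("/opt/librenms/syslog.php"
    template("$HOST||$FACILITY||$PRIORITY||$LEVEL||$TAG||$R_YEAR-$R_MONTH-$R_DAY $R_HOUR:$R_MIN:$R_SEC||$MSG||$PROGRAM\n")
    template-escape(yes));
};

log {
  source(s_net);
  destination(d_librenms);
};
```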
If no messages make it to the syslog tab in LibreNMS, chances are you experience an issue with SELinux. If so, create a file mycustom-librenms-rsyslog.te , with the following content:
module mycustom-librenms-rsyslog 1.0;\n\nrequire {\n type syslogd_t;\n type httpd_sys_rw_content_t;\n type ping_exec_t;\n class process execmem;\n class dir { getattr search write };\n class file { append getattr execute open read };\n}\n\n#============= syslogd_t ==============\nallow syslogd_t httpd_sys_rw_content_t:dir { getattr search write };\nallow syslogd_t httpd_sys_rw_content_t:file { open read append getattr };\nallow syslogd_t self:process execmem;\nallow syslogd_t ping_exec_t:file execute;\n
If you prefer rsyslog, here are some hints on how to get it working.
Add the following to your rsyslog config somewhere (could be at the top of the file in the step below, could be in rsyslog.conf if you are using remote logs for something else on this host)
# Listen for syslog messages on UDP:514\n$ModLoad imudp\n$UDPServerRun 514\n
Create a file called /etc/rsyslog.d/30-librenms.conf and add the following depending on your version of rsyslog.
If your rsyslog server is receiving messages relayed by another syslog server, you may need to replace %fromhost% with %hostname%, since fromhost is the host the message was received from, not the host that generated the message. Otherwise the fromhost property is preferred, as it avoids problems caused by devices sending incorrect hostnames in syslog messages.
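A sketch of such a file for rsyslog with legacy template syntax (the template mirrors the ||-separated field order syslog.php expects; verify the exact property names against the current LibreNMS docs for your rsyslog version):

```
$template librenms,"%fromhost%||%syslogfacility%||%syslogpriority%||%syslogseverity%||%syslogtag%||%$year%-%$month%-%$day% %timegenerated:8:25%||%msg%||%programname%\n"
*.* action(type="omprog" binary="/opt/librenms/syslog.php" template="librenms")
```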
Next, create a logstash configuration file (ex. /etc/logstash/conf.d/logstash-simple.conf), and add the following:
input {\nsyslog {\n port => 514\n }\n}\n\n\noutput {\n exec {\n command => \"echo `echo %{host},,,,%{facility},,,,%{priority},,,,%{severity},,,,%{facility_label},,,,``date --date='%{timestamp}' '+%Y-%m-%d %H:%M:%S'``echo ',,,,%{message}'``echo ,,,,%{program} | sed 's/\\x25\\x7b\\x70\\x72\\x6f\\x67\\x72\\x61\\x6d\\x7d/%{facility_label}/'` | sed 's/,,,,/||/g' | /opt/librenms/syslog.php &\"\n }\n elasticsearch {\n hosts => [\"10.10.10.10:9200\"]\n index => \"syslog-%{+YYYY.MM.dd}\"\n }\n}\n
Replace 10.10.10.10 with your primary elasticsearch server IP, and set the incoming syslog port. Alternatively, if you already have a logstash config file that works except for the LibreNMS export, take only the \"exec\" section from output and add it.
"},{"location":"Extensions/Syslog/#remote-logstash-or-any-json-source","title":"Remote Logstash (or any json source)","text":"
If you have a large logstash / elastic installation for collecting and filtering syslogs, you can simply pass the relevant logs as json to the LibreNMS API \"syslog sink\". This variant may be more flexible and secure in transport. It does not require any major changes to existing ELK setup. You can also pass simple json kv messages from any kind of application or script (example below) to this sink.
For long term or advanced aggregation searches you might still use Kibana/Grafana/Graylog etc. It is recommended to keep config['syslog_purge'] short.
A minimal Logstash http output configuration can look like this:
output {\n....\n #feed it to LibreNMS\n http {\n http_method => \"post\"\n url => \"https://sink.librenms.org/api/v0/syslogsink/\" # replace with your librenms host\n format => \"json_batch\" # put multiple syslogs in one HTTP message\n retry_failed => false # if true, logstash is blocking if the API is unavailable, be careful! \n headers => [\"X-Auth-Token\",\"xxxxxxxLibreNMSApiToken\"]\n\n # optional if your mapping is not already done before or does not match. \"msg\" and \"host\" are mandatory. \n # you might also use the clone {} function to duplicate your log stream and a dedicated log filtering/mapping etc.\n # mapping => {\n # \"host\"=> \"%{host}\"\n # \"program\" => \"%{program}\"\n # \"facility\" => \"%{facility_label}\"\n # \"priority\" => \"%{syslog5424_pri}\"\n # \"level\" => \"%{facility_label}\" \n # \"tag\" => \"%{topic}\"\n # \"msg\" => \"%{message}\"\n # \"timestamp\" => \"%{@timestamp}\"\n # }\n }\n}\n
Below are sample configurations for a variety of clients. You should understand the config before using it as you may want to make some slight changes. Further configuration hints may be found in the file Graylog.md.
Replace librenms.ip with IP or hostname of your LibreNMS install.
Replace any variables in with the relevant information."},{"location":"Extensions/Syslog/#syslog","title":"syslog","text":"
set system syslog host librenms.ip authorization any\nset system syslog host librenms.ip daemon any\nset system syslog host librenms.ip kernel any\nset system syslog host librenms.ip user any\nset system syslog host librenms.ip change-log any\nset system syslog host librenms.ip source-address <management ip>\nset system syslog host librenms.ip exclude-hostname\nset system syslog time-format\n
info-center loghost librenms.ip\ninfo-center timestamp debugging short-date without-timezone // Optional\ninfo-center timestamp log short-date // Optional\ninfo-center timestamp trap short-date // Optional\n//This is optional config, especially if the device is in public ip and you dont'want to get a lot of messages of ACL\ninfo-center filter-id bymodule-alias VTY ACL_DENY\ninfo-center filter-id bymodule-alias SSH SSH_FAIL\ninfo-center filter-id bymodule-alias SNMP SNMP_FAIL\ninfo-center filter-id bymodule-alias SNMP SNMP_IPLOCK\ninfo-center filter-id bymodule-alias SNMP SNMP_IPUNLOCK\ninfo-center filter-id bymodule-alias HTTP ACL_DENY\n
log date-format iso // Required so syslog-ng/LibreNMS can correctly interpret the log message formatting.\nlog host x.x.x.x\nlog host x.x.x.x level <errors> // Required. A log-level must be specified for syslog messages to send.\nlog host x.x.x.x level notices program imish // Useful for seeing all commands executed by users.\nlog host x.x.x.x level notices program imi // Required for Oxidized Syslog hook log message.\nlog host source <eth0>\n
If you have permitted udp and tcp 514 through any firewall then that should be all you need. Logs should start appearing and displayed within the LibreNMS web UI.
Trigger external scripts based on specific syslog patterns being matched with syslog hooks. Add the following to your LibreNMS config.php to enable hooks:
$config['enable_syslog_hooks'] = 1;\n
The below are some example hooks to call an external script in the event of a configuration change on Cisco ASA, IOS, NX-OS and IOS-XR devices. Add to your config.php file to enable.
Note: At least software version 5.4.8-2.1 is required. log host x.x.x.x level notices program imi may also be required depending on configuration. This is to ensure the syslog hook log message gets sent to the syslog server.
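As an illustration only, a hook for Cisco IOS configuration changes might look like this in config.php (the regex and the script path are examples - adapt both to your environment and verify the array shape against the current docs):

```php
$config['os']['ios']['syslog_hook'][] = array(
    'regex'  => '/%SYS-5-CONFIG_I/',  // matches IOS "Configured from ..." messages
    'script' => '/opt/librenms/scripts/syslog-notify-oxidized.php', // hypothetical script path
);
```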
The cleanup is run by daily.sh and any entries over X days old are automatically purged. Values are in days. See here for more Clean Up Options Link
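For example, to purge syslog entries older than 30 days:

```shell
lnms config:set syslog_purge 30
```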
"},{"location":"Extensions/Syslog/#matching-syslogs-to-hosts-with-different-names","title":"Matching syslogs to hosts with different names","text":"
In some cases, you may get logs that aren't being associated with the device in LibreNMS. For example, in LibreNMS the device is known as \"ne-core-01\", and that's how DNS resolves. However, the received syslogs are for \"loopback.core-nw\".
To fix this issue, you can configure LibreNMS to translate the incoming syslog hostname into another hostname, so that the logs get associated with the correct device.
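A sketch of such a translation in config.php, using the names from the example above (verify the syslog_xlate key name against your LibreNMS version):

```php
$config['syslog_xlate'] = array(
    // received syslog hostname => device hostname in LibreNMS
    'loopback.core-nw' => 'ne-core-01',
);
```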
Over the last couple of years, the primary attack vector for internet accounts has been static passwords, so static passwords alone are no longer sufficient to protect against unauthorized access to accounts. Two-factor authentication adds a variable part to the authentication procedure: a user is now required to supply a changing 6-digit passcode in addition to their password to obtain access to the account.
LibreNMS has an RFC4226-conformant implementation of both time- and counter-based one-time passwords. It also allows the administrator to configure a throttle time that is enforced after 3 failed attempts. Unlike the RFC4226 suggestion, this throttle time does not stack with the number of failures.
In general, these two types do not differ in algorithmic terms. The types only differ in the variable being used to derive the passcodes from. The underlying HMAC-SHA1 remains the same for both types, security advantages or disadvantages of each are discussed further down.
As the name suggests, this type uses the current time (or a subset of it) to generate the passcodes. These passcodes rely solely on the secrecy of the secret key. An attacker only needs to guess that secret key, as the other variable part is simply a given time, presumably the time of login. RFC4226 suggests a resynchronization attempt in case the passcode mismatches, giving the attacker a range of up to +/- 3 minutes to create passcodes.
This type uses an internal counter that needs to be in sync with the server's counter to successfully authenticate the passcodes. The main advantage over time-based OTP is that an attacker needs to know not only the secret key but also the server's counter in order to create valid passcodes. RFC4226 suggests a resynchronization attempt in case the passcode mismatches, giving the attacker a range of up to +4 increments from the actual counter to create passcodes.
Enable 'Two-Factor' Via Global Settings in the Web UI under Authentication -> General Authentication Settings.
Optionally enter a throttle timer in seconds. This will unlock an account after this time once it has failed 3 attempts to authenticate. Set to 0 (default) to disable this feature, meaning accounts will remain locked after 3 attempts and will need an administrator to clear.
If Two-Factor is enabled, the Settings -> Manage Users grid will show a '2FA' column containing a green tick for users with active 2FA.
There is no functionality to mandate 2FA for users.
If a user has failed 3 attempts, their account can be unlocked or 2FA disabled by editing the user from the Manage Users table.
If a throttle timer is set, it will unlock accounts after this time. If set to the default of 0, accounts will need to be manually unlocked by an administrator after 3 failed attempts.
Locked accounts will prompt the user to wait for the throttle time period, or to contact the administrator if no timer is set.
This document explains how to install Varnish Reverse Proxy for LibreNMS.
Varnish is caching software that sits logically between an HTTP client and an HTTP server. Varnish caches HTTP responses from the HTTP server. If an HTTP request can not be responded to by the Varnish cache it directs the request to the HTTP Server. This type of HTTP caching is called a reverse proxy server. Caching your HTTP server can decrease page load times significantly.
In this example we will assume your Apache 2.4.X HTTP server is working and configured to process HTTP requests on port 80. If not, please see Installing LibreNMS
Using a web browser, navigate to your server's IP on port 6081 (or 127.0.0.1:6081 locally). You should see a Varnish error message; this shows that Varnish is working. Example error message:
Now we need to configure Varnish to listen to HTTP requests on port 80 and relay those requests to the Apache HTTP server on port 8080 (see block diagram).
Stop Varnish.
systemctl stop varnish\n
Create a back-up of varnish.params just in case you make a mistake.
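For example:

```shell
cp -a /etc/varnish/varnish.params /etc/varnish/varnish.params.orig
```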
# Set this to 1 to make systemd reload try to switch vcl without restart.\nRELOAD_VCL=1\n\n# Main configuration file. You probably want to change it.\nVARNISH_VCL_CONF=/etc/varnish/librenms.vcl\n\n# Default address and port to bind to. Blank address means all IPv4\n# and IPv6 interfaces, otherwise specify a host name, an IPv4 dotted\n# quad, or an IPv6 address in brackets.\nVARNISH_LISTEN_ADDRESS=192.168.1.10\nVARNISH_LISTEN_PORT=80\n\n# Admin interface listen address and port\nVARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1\nVARNISH_ADMIN_LISTEN_PORT=6082\n\n# Shared secret file for admin interface\nVARNISH_SECRET_FILE=/etc/varnish/secret\n\n# Backend storage specification, see Storage Types in the varnishd(5)\n# man page for details.\nVARNISH_STORAGE=\"malloc,512M\"\n\n# Default TTL used when the backend does not specify one\nVARNISH_TTL=120\n\n# User and group for the varnishd worker processes\nVARNISH_USER=varnish\nVARNISH_GROUP=varnish\n\n# Other options, see the man page varnishd(1)\nDAEMON_OPTS=\"-p thread_pool_min=5 -p thread_pool_max=500 -p thread_pool_timeout=300\"\n
"},{"location":"Extensions/Varnish/#configure-apache-for-varnish","title":"Configure Apache for Varnish","text":"
Edit librenms.conf and modify the Apache Virtual Host listening port.
Modify: <VirtualHost *:80> to <VirtualHost *:8080>
vim /etc/httpd/conf.d/librenms.conf\n
Varnish cannot share a port with Apache. Change the Apache listening port to 8080.
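For example, on RHEL-family systems (the file location is distribution-dependent):

```shell
# /etc/httpd/conf/httpd.conf - change the global listen directive
# Listen 80
Listen 8080
```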
Paste example VCL config, read config comments for more information.
#\n# This is an example VCL file for Varnish.\n#\n# It does not do anything by default, delegating control to the\n# builtin VCL. The builtin VCL is called when there is no explicit\n# return statement.\n#\n# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/\n# and http://varnish-cache.org/trac/wiki/VCLExamples for more examples.\n\n# Marker to tell the VCL compiler that this VCL has been adapted to the\n# new 4.0 format.\nvcl 4.0;\n\n# Default backend definition. Set this to point to your Apache server.\nbackend librenms {\n .host = \"127.0.0.1\";\n .port = \"8080\";\n}\n\n# In this example our objective is to cache static content with Varnish and temporarily\n# cache dynamic content in the client web browser.\n\nsub vcl_recv {\n # HTTP requests from client web browser.\n # Here we remove any cookie HTTP requests for the 'librenms.domain.net' host\n # containing the matching file extensions. We don't have to match by host if you\n # only have LibreNMS running on Apache.\n # If the cookies are not removed from the HTTP request then Varnish will not cache\n # the files. 
'else' function is set to 'pass', or don't cache anything that doesn't\n # match.\n\n if (req.http.host ~ \"^librenms.domain.net\") {\n set req.backend_hint = librenms;\n if (req.url ~ \"\\.(png|gif|jpg|jpeg|ico|pdf|js|css|svg|eot|otf|woff|woff2|ttf)$\") {\n unset req.http.Cookie;\n }\n\n else{\n return(pass);\n }\n }\n}\n\nsub vcl_backend_response {\n # 'sub vcl_backend_response' is the same function as 'sub vcl_fetch' in Varnish 3, however,\n # the syntax is slightly different\n # This function happens after we read the response headers from the backend (Apache).\n # First function 'if (bereq.url ~ \"\\' removes cookies from the Apache HTTP responses\n # that match the file extensions that are between the quotes, and cache the files for 24 hours.\n # This assumes you update LibreNMS once a day, otherwise restart Varnish to clear cache.\n # Second function 'if (bereq.url ~ \"^/' removes the Pragma no-cache statements and sets the age\n # of how long the client browser will cache the matching urls.\n # LibreNMS graphs are updated every 300 seconds, 'max-age=300' is set to match this behavior.\n # We could cache these URLs in Varnish but it would add to the complexity of the config.\n\n if (bereq.http.host ~ \"^librenms.domain.net\") {\n if (bereq.url ~ \"\\.(png|gif|jpg|jpeg|ico|pdf|js|css|svg|eot|otf|woff|woff2|ttf)$\") {\n unset beresp.http.Set-cookie;\n set beresp.ttl = 24h;\n }\n\n if (bereq.url ~ \"^/graph.php\" || \"^/device/\" || \"^/iftype/\" || \"^/customers/\" || \"^/health/\" || \"^/apps/\" || \"^/(plugin)$\" || \"^/(alert)$\" || \"^/eventlog/\" || \"^/graphs/\" || \"^/ports/\" ) {\n unset beresp.http.Pragma;\n set beresp.http.Cache-Control = \"max-age=300\";\n }\n }\n}\n\nsub vcl_deliver {\n # Happens when we have all the pieces we need, and are about to send the\n # response to the client.\n # You can do accounting or modifying the final object here.\n\n return (deliver);\n}\n
Reload rules to remove the temporary port rule we added earlier.
firewall-cmd --reload\n
Varnish caching does not take effect immediately. You will need to browse the LibreNMS website to build up the cache.
Use the command varnishstat to monitor Varnish caching. Over time you should see 'MAIN.cache_hit' and 'MAIN.client_req' increase. With the above VCL the hit to request ratio is approximately 84%.
The Network Maps and Dependency Maps all use a common configuration for the vis.js library, which affects the way the maps are rendered, as well as the way that users can interact with the maps. This configuration can be adjusted by following the instructions below.
This link will show you all the options and explain what they do.
You may also access the dynamic configuration interface example here from within LibreNMS by adding the following to config.php
You may want to disable the automatic page refresh while you're tweaking your configuration, as the refresh will reset the dynamic configuration UI to the values currently saved in config.php This can be done by clicking on the Settings Icon then Refresh Pause.
Once you've achieved your desired map appearance, click the generate options button at the bottom to get the parameters to add to your config.php file. You will need to paste the generated config into config.php; the format will need to look something like this. Note that the configurator will output the config with var options; you will need to strip them out, and at the end of the config you need to add an }'; see the example below.
Extract to your LibreNMS plugins directory /opt/librenms/html/plugins, so you should see something like /opt/librenms/html/plugins/Weathermap/. The best way to do this is via git. Go to your install directory, then /opt/librenms/html/plugins, and enter:
Now you should see Weathermap under Overview -> Plugins -> Weathermap. When you create a map, click Map Style, ensure Overlib is selected for HTML Style, and click submit. Also, ensure you set an output image filename and output HTML filename in Map Properties. I'd recommend you use the output folder as this is excluded from git updates (i.e. use output/mymap.png and output/mymap.html).
Optional: If your install is in another directory than standard, set $basehref within map-poller.php.
Automatically generate weathermaps from a LibreNMS database using WeatherMapper.
"},{"location":"Extensions/Weathermap/#adding-your-network-weathermaps-to-the-dashboards","title":"Adding your Network Weathermaps to the Dashboards","text":"
Once you have created your Network Weather Map you can add it to a dashboard page by doing the following.
The World Map widget requires you to have properly formatted addresses in sysLocation or the sysLocation override. As part of the standard poller, these addresses will be geocoded by Google and stored in the database.
Location resolution happens as follows
If device['location'] contains [lat, lng] (note the square brackets), that is used
If there is a location override for the device in the WebUI and it contains [lat, lng] (note the square brackets), that is used.
Attempt to resolve lat, lng using lnms config:set geoloc.engine
Properly formatted addresses in sysLocation or sysLocation override, under device settings.
Example:
[40.424521, -86.912755]\n
or
1100 Congress Ave, Austin, TX 78701 (3rd floor cabinet)\n
Information inside parentheses is ignored during GEO lookup
Initial Latitude / Longitude: The map will be centered on those coordinates.
Initial Zoom: Initial zoom of the map. More information about zoom levels.
Grouping radius: Markers are grouped by area. This value defines the maximum size of grouping areas.
Show devices: Show devices based on status.
Example Settings:
"},{"location":"Extensions/World-Map/#device-overview-world-map-settings","title":"Device Overview World Map Settings","text":"
If a device has a location with a valid latitude and longitude, the device overview page will have a panel showing the device on a world map. The following settings affect this map:
# Does the world map start opened, or does the user need to click to view\nlnms config:set device_location_map_open false\n# Do we show all other devices on the map as well\nlnms config:set device_location_map_show_devices false\n# Do we show a network map based on device dependencies\nlnms config:set device_location_map_show_device_dependencies false\n
lnms config:set map.engine leaflet\nlnms config:set leaflet.default_lat \"51.981074\"\nlnms config:set leaflet.default_lng \"5.350342\"\nlnms config:set leaflet.default_zoom 8\n# Device grouping radius in KM default 80KM\nlnms config:set leaflet.group_radius 1\n# Enable network map on world map\nlnms config:set network_map_show_on_worldmap true\n# Use CDP/LLDP for network map, or device dependencies\nlnms config:set network_map_worldmap_link_type xdp/depends\n# Do not show devices that have notifications disabled\nlnms config:set network_map_worldmap_show_disabled_alerts false\n
Further custom options are available to load different maps of the world, set default coordinates of where the map will zoom and the zoom level by default. An example of this is:
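A sketch of such overrides, reusing the leaflet.* keys shown above (the tile_url key name and the tile server are assumptions - verify against your LibreNMS version):

```shell
lnms config:set leaflet.tile_url '{s}.tile.openstreetmap.org'
lnms config:set leaflet.default_lat "40.424521"
lnms config:set leaflet.default_lng "-86.912755"
lnms config:set leaflet.default_zoom 5
```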
Your metric path can be prefixed if required, otherwise the metric path for Graphite will be in the form of hostname.measurement.fieldname, interfaces will be stored as hostname.ports.ifName.fieldname.
The same data that is stored within rrd will be sent to Graphite and recorded. You can then create graphs within Grafana to display the information you need.
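Enabling Graphite export follows the same lnms config:set pattern used elsewhere in these docs; a sketch (host/port are examples, and key names should be verified for your version):

```shell
lnms config:set graphite.enable true
lnms config:set graphite.host 127.0.0.1
lnms config:set graphite.port 2003
lnms config:set graphite.prefix ''
```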
"},{"location":"Extensions/metrics/InfluxDB/","title":"Enabling support for InfluxDB","text":"
Before we get started, it is important that you know and understand that InfluxDB support is currently alpha at best. All it provides is the sending of data to an InfluxDB install. Due to the constant changes being made to InfluxDB itself, we cannot guarantee that your data will be OK, so enabling this support is at your own risk!
No credentials are needed if you don't use InfluxDB authentication.
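A configuration sketch (host/port/database are examples; verify key names for your version):

```shell
lnms config:set influxdb.enable true
lnms config:set influxdb.transport http
lnms config:set influxdb.host 127.0.0.1
lnms config:set influxdb.port 8086
lnms config:set influxdb.db librenms
# only needed if InfluxDB authentication is enabled:
lnms config:set influxdb.username admin
lnms config:set influxdb.password mypassword
```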
The same data then stored within rrd will be sent to InfluxDB and recorded. You can then create graphs within Grafana to display the information you need.
"},{"location":"Extensions/metrics/InfluxDBv2/","title":"Enabling support for InfluxDBv2","text":"
Before we get started, it is important that you know and understand that InfluxDBv2 support is currently alpha at best. All it provides is the sending of data to an InfluxDBv2 bucket. Due to the constant changes being made to InfluxDB itself, we cannot guarantee that your data will be OK, so enabling this support is at your own risk!
It is also important to understand that InfluxDBv2 only supports the InfluxDBv2 API used in InfluxDB version 2.0 or higher. If you are looking to send data to any other version of InfluxDB then you should use the InfluxDB datastore instead.
The same data stored within rrd will be sent to InfluxDB and recorded. You can then create graphs within Grafana or InfluxDB to display the information you need.
Please note that polling will slow down when the poller isn't able to reach or write data to InfluxDBv2.
"},{"location":"Extensions/metrics/OpenTSDB/","title":"Enabling support for OpenTSDB","text":"
This module sends all metrics to an OpenTSDB server. You need something like Grafana for graphing.
The same data as that stored within rrd will be sent to OpenTSDB and recorded. You can then create graphs within Grafana to display the information you need.
"},{"location":"Extensions/metrics/Prometheus/","title":"Enabling support for Prometheus","text":"
Please be aware that Prometheus support is alpha at best. It hasn't been extensively tested and is still in development. All it provides is the sending of data to a Prometheus PushGateway. Please be careful when enabling this support; you use it at your own risk!
"},{"location":"Extensions/metrics/Prometheus/#requirements-older-versions-may-work-but-havent-been-tested","title":"Requirements (Older versions may work but haven't been tested","text":"
Prometheus >= 2.0
PushGateway >= 0.4.0
Grafana
PHP-CURL
The setup of the above is completely out of scope here and we aren't really able to provide any help with this side of things.
"},{"location":"Extensions/metrics/Prometheus/#what-you-dont-get","title":"What you don't get","text":"
Pretty graphs; this is why, at present, you need Grafana. You need to build your own graphs within Grafana.
Support for Prometheus or Grafana; we would highly recommend that you have some level of experience with these.
RRD will continue to function as normal so LibreNMS itself should continue to function as normal.
The same data that is stored within rrd will be sent to Prometheus and recorded. You can then create graphs within Grafana to display the information you need.
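A minimal sketch of enabling the PushGateway export from the CLI; the URL and job name are placeholders, so confirm the key names on your version:

```shell
lnms config:set prometheus.enable true
lnms config:set prometheus.url http://pushgateway.example.com:9091  # placeholder URL
lnms config:set prometheus.job librenms
```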
LibreNMS wouldn't be what it is today without the use of some other amazing projects. We list below what we make use of including the license compliance.
"},{"location":"General/Acknowledgement/#3rd-party-gplv3-compliant","title":"3rd Party GPLv3 Compliant","text":"
Bootstrap: MIT
Font Awesome: MIT License
Jquery Bootgrid: MIT License
Pace: Open License
Twitter typeahead: Open License
Vis: MIT / Apache 2.0
TCPDF: LGPLv3
Bootstrap 3 Datepicker:MIT
Bootstrap Dropdown Hover Plugin: MIT
Bootstrap Switch: Apache 2.0
Handlebars: Open License
Cycle2: MIT/GPL
Jquery: MIT
Jquery UI: MIT
Jquery QRCode: MIT
Mktree: Open License
Moment: MIT
Tag Manager: MIT
TW Sack: GPLv3
Gridster: MIT
Pure PHP radius class: GPLv3
GeSHi - Generic Syntax Highlighter: GPLv2+
MalaysiaMap.svg - By Exiang CC BY 3.0, via Wikimedia Commons
Code for UBNT Devices Mark Gibbons mgibbons@oemcomp.com Initial code base submitted via PR721
Jquery LazyLoad: MIT License
influxdb-php: MIT License
influxdb-client-php: MIT License
HTML Purifier: LGPL v2.1
Symfony Yaml: MIT
PHPMailer: LGPL v2.1
pbin: GPLv2 (or later - see script header)
CorsSlim: MIT
Confluence HTTP Authenticator
Graylog SSO Authentication Plugin
Select2: MIT License
JustGage: MIT
jQuery.extendext: MIT
doT: MIT
jQuery-queryBuilder: MIT
sql-parser: MIT (Currently a custom build is used)
"},{"location":"General/Acknowledgement/#3rd-party-gplv3-non-compliant","title":"3rd Party GPLv3 Non-compliant","text":"
"},{"location":"General/Callback-Stats-and-Privacy/","title":"Submitting Stats","text":""},{"location":"General/Callback-Stats-and-Privacy/#stats-data-and-your-privacy","title":"Stats data and your privacy","text":"
This document has been put together to explain what LibreNMS does when it calls back home to report some anonymous statistics.
Let's start off by saying, all of the code that processes the data and submits it is included in the standard LibreNMS branch you've installed, the code that accepts this data and in turn generates some pretty graphs is all open source and available on GitHub. Please feel free to review the code, comment on it and suggest changes / improvements. Also, don't forget - by default installations DO NOT call back home, you need to opt into this.
Above all we respect users privacy which is why this system has been designed like it has.
Now onto the bit you're interested in, what is submitted and what we do with that data.
"},{"location":"General/Callback-Stats-and-Privacy/#what-is-submitted","title":"What is submitted","text":"
All data is anonymous.
Generic statistics are taken from the database, these include things like device count, device type, device OS, port types, port speeds, port count and BGP peer count. Take a look at the code for full details.
Pairs of sysDescr and sysObjectID from devices, with a small amount of sanitization to prevent things like hostnames from being submitted.
We record the version numbers of PHP, MySQL, Net-SNMP and RRDtool.
A random UUID is generated on your own install.
That's it!
Your IP isn't logged, even via our web service accepting the data. We don't need to know who you are so we don't ask.
"},{"location":"General/Callback-Stats-and-Privacy/#what-we-do-with-the-data","title":"What we do with the data","text":"
We store it, but not for long: 3 months at the moment, although this could change.
We use it to generate pretty graphs for people to see.
We use it to help prioritise issues and features that need to be worked on.
We use sysDescr and sysObjectID to create unit tests and improve OS discovery
"},{"location":"General/Callback-Stats-and-Privacy/#how-do-i-enable-stats-submission","title":"How do I enable stats submission?","text":"
If you're happy with all of this - please consider switching the call back system on, you can do this within the About LibreNMS page within your control panel. In the Statistics section you will find a toggle switch to enable / disable the feature. If you've previously had it switched on and want to opt out and remove your data, click the 'Clear remote stats' button and on the next submission all the data you've sent us will be removed!
"},{"location":"General/Callback-Stats-and-Privacy/#questions","title":"Questions?","text":""},{"location":"General/Callback-Stats-and-Privacy/#how-often-is-data-submitted","title":"How often is data submitted?","text":"
We submit the data once a day when daily.sh runs via cron. If you disable this then opting in will not have any effect.
"},{"location":"General/Callback-Stats-and-Privacy/#where-can-i-see-the-data-i-submitted","title":"Where can I see the data I submitted?","text":"
You can't see the raw data, but we collate all of the data together and provide a dynamic site so you can see the results of all contributed stats here
"},{"location":"General/Callback-Stats-and-Privacy/#i-want-my-data-removed","title":"I want my data removed.","text":"
That's easy, simply press 'Clear remote stats' in the About LibreNMS page of your control panel, the next time the call back script is run it will remove all the data we have.
"},{"location":"General/Callback-Stats-and-Privacy/#i-clicked-the-clear-remote-stats-button-by-accident","title":"I clicked the 'Clear remote stats' button by accident.","text":"
No problem, before daily.sh runs again - just opt back in, all of your existing data will stay.
Hopefully this answers the questions you might have on why and what we are doing here, if not, please pop into our discord server or community forum and ask any questions you like.
Bump phpseclib/phpseclib from 3.0.21 to 3.0.34 (#15600) - dependabot
"},{"location":"General/Changelog/#old-changelogs","title":"Old Changelogs","text":""},{"location":"General/Releases/","title":"Choosing a release","text":"
We try to ensure that breaking changes aren't introduced by utilising various automated code testing, syntax testing and unit testing along with manual code review. However, bugs can and do get introduced, as does major refactoring to improve the quality of the code base.
We have two branches available for you to use. The default is the master branch.
Our master branch is our dev branch; this is actively committed to, and it's not uncommon for multiple commits to be merged in daily. As such, sometimes changes will be introduced which cause unintended issues. If this happens we are usually quick to fix or revert those changes.
We appreciate everyone that runs this branch, as you are in essence secondary testers to the automation and manual testing that is done during the merge stages.
You can configure your install (this is the default) to use this branch by setting lnms config:set update_channel master and ensuring you switch to the master branch with:
With this in mind, we provide a monthly stable release which is released on or around the last Sunday of the month. Code pull requests (aside from bug fixes) are stopped in the days leading up to the release to ensure that we have a clean working branch at that point.
The changelog is also updated and will reference the release number and date so you can see what changes have been made since the last release.
To switch to using stable branches you can set lnms config:set update_channel release
This will pause updates until the next stable release, at that time LibreNMS will update to the stable release and continue to only update to stable releases. Downgrading is not supported on LibreNMS and will likely cause bugs.
Like any good software we take security seriously. However, bugs do make it into the software along with the history of the code base we inherited. It's how we deal with identified vulnerabilities that should show that we take things seriously.
"},{"location":"General/Security/#securing-your-install","title":"Securing your install","text":"
As with any system of this nature, we highly recommend that you restrict access to the install via a firewall or VPN.
Once you have enabled HTTPS for your install, you should set SESSION_SECURE_COOKIE=true in your .env file. This will require cookies to be transferred over a secure protocol and prevent MitM attacks against them.
When using a reverse proxy, you may restrict the hosts allowed to forward headers to LibreNMS. By default this allows all proxies, due to legacy reasons.
Set APP_TRUSTED_PROXIES in your .env to an empty string or the URLs of the proxies allowed to forward headers.
Like anyone, we appreciate the work people put in to find flaws in software and welcome anyone to do so with LibreNMS, this will lead to better quality and more secure software for everyone.
If you think you've found a vulnerability and want to discuss it with some of the core team then you can contact us on Discord, and we will endeavour to get back to you as quickly as we can; this is usually within 24 hours.
We are happy to attribute credit to the findings, but we ask that we're given a chance to patch any vulnerability before public disclosure so that our users can update as soon as a fix is available.
"},{"location":"General/Updating/","title":"Updating an Install","text":"
By default, LibreNMS is set to automatically update. If you have disabled this feature then you can perform a manual update.
LibreNMS by default performs updates on a daily basis. This can be disabled in the WebUI Global Settings under System -> Updates, or using lnms
Warning
You should never remove daily.sh from the cronjob! This does database cleanup and other processes in addition to updating.
settings/system/updates
lnms config:set update false\n
"},{"location":"General/Welcome-to-Observium-users/","title":"Welcome to Observium users","text":"
LibreNMS is a fork of Observium. The reason for the fork has nothing to do with Observium's move to community vs. paid versions. It is simply that we have different priorities and values to the Observium development team. We decided to fork (reluctantly) because we like using Observium, but we want to collaborate on a community-based project with like-minded IT professionals. See README.md and the references there for more information about the kind of community we're trying to promote.
LibreNMS was forked from the last GPL-licensed version of Observium.
Thanks to one of our users, Dan Brown, who has written a migration script, you can easily move your Observium install over to LibreNMS. This also takes care of moving from one CPU architecture to another. Give it a try :)
How LibreNMS will be different from Observium:
We will have an inclusive community, where it's OK to ask stupid questions, and OK to ask for things that aren't on the roadmap. If you'd like to see something added, add or comment on the relevant issue in our Community forum.
Development decisions will be community-driven. We want to make software that fulfills its users' needs.
There are no plans for a paid version, and we don't anticipate this ever changing.
There are no current plans for paid support, but this may be added later if there is sufficient demand.
We use git for version control and GitHub for hosting to make it as easy and painless as possible to create forked or private versions.
Reasons why you might want to use Observium instead of LibreNMS:
You have a financial investment in Observium and aren't concerned about community contributions.
You don't like the GNU General Public License, version 3 or the philosophy of Free Software/copyleft in general.
Reasons why you might want to use LibreNMS instead of Observium:
You want to work with others on the project, knowing that your investment of time and effort will not be wasted.
You want to add and experiment with features that are not a priority for the Observium developers. See CONTRIBUTING for more details.
You want to make use of the additional features LibreNMS can offer.
All images can be downloaded from GitHub. The tags follow the main LibreNMS repo. When a new LibreNMS release is available we will push new images out running that version. Please do note that if you download an older release with a view to running that specific version, you will need to disable updates with lnms config:set update false.
If you are using the VirtualBox image then to access your newly imported VM, these ports are forwarded from your machine to the VM: 8080 for WebUI and 2023 for SSH. Remember to edit/remove them if you change (and you should) the VM network configuration.
If you would like to help with these images whether it's add additional features or default software / settings then you can do so on GitHub.
"},{"location":"Installation/Install-LibreNMS/","title":"Install LibreNMS","text":""},{"location":"Installation/Install-LibreNMS/#prepare-linux-server","title":"Prepare Linux Server","text":"
You should have an installed Linux server running one of the supported OS. Make sure you select your server's OS in the tabbed options below. Choice of web server is your preference, NGINX is recommended.
Connect to the server command line and follow the instructions below.
Note
These instructions assume you are the root user. If you are not, prepend sudo to the shell commands (the ones that aren't at mysql> prompts) or temporarily become a user with root privileges with sudo -s or sudo -i.
Please note the minimum supported PHP version is 8.1
su - librenms\n./scripts/composer_wrapper.php install --no-dev\nexit\n
Sometimes when there is a proxy used to gain internet access, the above script may fail. The workaround is to install the composer package manually. For a global installation:
See https://php.net/manual/en/timezones.php for a list of supported timezones. Valid examples are: \"America/New_York\", \"Australia/Brisbane\", \"Etc/UTC\". Ensure date.timezone is set in php.ini to your preferred time zone.
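For example, the required ini line looks like this; the sketch below edits a throwaway copy under /tmp purely to demonstrate, whereas in practice you would edit your distribution's real php.ini files (both FPM and CLI):

```shell
# Demonstration on a throwaway copy; edit the real php.ini in practice.
printf ';date.timezone =\n' > /tmp/php.ini.example
sed -i 's|^;date.timezone =.*|date.timezone = "Etc/UTC"|' /tmp/php.ini.example
cat /tmp/php.ini.example
```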
NOTE: Change the 'password' below to something secure.
CREATE DATABASE librenms CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;\nCREATE USER 'librenms'@'localhost' IDENTIFIED BY 'password';\nGRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'localhost';\nexit\n
Change listen to a unique path that must match your webserver's config (fastcgi_pass for NGINX and SetHandler for Apache) :
listen = /run/php-fpm-librenms.sock\n
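The web server side must then point at the same socket. For NGINX, the matching line inside the PHP location block would be roughly as follows (the exact surrounding config depends on your setup):

```nginx
fastcgi_pass unix:/run/php-fpm-librenms.sock;
```

For Apache, the equivalent is a SetHandler directive referencing the same socket path.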
If there are no other PHP web applications on this server, you may remove www.conf to save some resources. Feel free to tune the performance settings in librenms.conf to meet your needs.
"},{"location":"Installation/Install-LibreNMS/#configure-web-server","title":"Configure Web Server","text":"Ubuntu 24.04Ubuntu 22.04Ubuntu 20.04CentOS 8Debian 12 NGINX
vi /etc/nginx/conf.d/librenms.conf\n
Add the following config, edit server_name as required:
NOTE: If this is the only site you are hosting on this server (it should be :)) then you will need to disable the default site. rm -f /etc/httpd/conf.d/welcome.conf
semanage fcontext -a -t httpd_sys_content_t '/opt/librenms/html(/.*)?'\nsemanage fcontext -a -t httpd_sys_rw_content_t '/opt/librenms/(rrd|storage)(/.*)?'\nsemanage fcontext -a -t httpd_log_t \"/opt/librenms/logs(/.*)?\"\nsemanage fcontext -a -t httpd_cache_t '/opt/librenms/cache(/.*)?'\nsemanage fcontext -a -t bin_t '/opt/librenms/librenms-service.py'\nrestorecon -RFvv /opt/librenms\nsetsebool -P httpd_can_sendmail=1\nsetsebool -P httpd_execmem 1\nchcon -t httpd_sys_rw_content_t /opt/librenms/.env\n
Allow fping
Create the file http_fping.tt with the following contents. You can create this file anywhere, as it is a throw-away file. The last step in this install procedure will install the module in the proper location.
NOTE: Keep in mind that cron, by default, only uses a very limited set of environment variables. You may need to configure proxy variables for the cron invocation. Alternatively, adding the proxy settings in config.php is possible too. The config.php file will be created in the upcoming steps. Review the following URL after you finish the LibreNMS install steps: https://docs.librenms.org/Support/Configuration/#proxy-support
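As a hypothetical sketch (the proxy host and port are placeholders), proxy variables can be declared at the top of the LibreNMS cron file so every cron-invoked script inherits them:

```shell
# /etc/cron.d/librenms -- hypothetical proxy settings, adjust to your environment
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
```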
"},{"location":"Installation/Install-LibreNMS/#enable-the-scheduler","title":"Enable the scheduler","text":"
LibreNMS keeps logs in /opt/librenms/logs. Over time these can become large and be rotated out. To rotate out the old logs you can use the provided logrotate config file:
Now head to the web installer and follow the on-screen instructions.
http://librenms.example.com/install
The web installer might prompt you to create a config.php file in your librenms install location manually, copying the content displayed on-screen to the file. If you have to do this, please remember to set the permissions on config.php after you copied the on-screen contents to the file. Run:
That's it! You now should be able to log in to http://librenms.example.com/. Please note that we have not covered HTTPS setup in this example, so your LibreNMS install is not secure by default. Please do not expose it to the public Internet unless you have configured HTTPS and taken appropriate web server hardening steps.
"},{"location":"Installation/Install-LibreNMS/#add-the-first-device","title":"Add the first device","text":"
We now suggest that you add localhost as your first device from within the WebUI.
We hope you enjoy using LibreNMS. If you do, it would be great if you would consider opting into the stats system we have, please see this page on what it is and how to enable it.
If you would like to help make LibreNMS better there are many ways to help. You can also back LibreNMS on Open Collective.
"},{"location":"Installation/Migrating-from-Observium/","title":"Migrating from Observium","text":"
A LibreNMS user, Dan, has kindly provided full details and scripts to be able to migrate from Observium to LibreNMS.
We have mirrored the scripts he's provided with consent; these are available in the scripts/Migration folder of your installation.
There are two versions of the scripts available for you to download: - One converts the RRDs to XML and then back to RRD files when they hit the destination. This is a requirement if you are moving from x86 to x64. - Assuming you're moving servers that are on the same architecture, we can skip that step and just SCP the original RRD files.
For everything to work as originally intended, you'll need four files. Put all four files on both servers; the scripts default to /tmp/:
nodelist.txt - this file contains the list of hosts you would like to move. This must match exactly the hostname Observium uses
mkdir.sh - this script creates the necessary directories on your LibreNMS server
destwork.sh - depending on the version you choose, this script will add the device to LibreNMS and possibly convert from XML to RRD
convert.sh - convert is the main script we'll be calling. All of the magic happens here.
Feel free to crack open the scripts and modify them to suit you. Each file has a handful of variables you'll need to set for your conversion. They should be self-explanatory, but please leave a comment if you have trouble.
All four files have been placed in the tmp directory of both servers
I would strongly suggest you start with just one or two hosts and see how things work. For me, 10 standard sized devices took about 20 minutes with the RRD to XML conversion. Every environment will be different, so start slow and work your way up to full automation.
First thing we will want to do is exchange SSH keys so that we can automate the login process used by the scripts. Perform these steps on your Observium server:
ssh-keygen -t rsa
Accept the defaults and enter a passphrase if you wish. Then:
ssh-copy-id librenms
Where librenms is the hostname or IP of your destination server.
The nodelist.txt file contains a list of hosts we want to migrate from Observium. These names must match the name of the RRD folder on Observium. You can get those names by running the following:
ls /opt/observium/rrd/
Also important, the nodelist.txt file must be on both your Observium and LibreNMS server. Once you have your list, edit nodelist.txt with nano:
nano /tmp/nodelist.txt
And replace the dummy data with the hosts you are converting. CTRL+X and then Y to save your modifications. Make the same changes on the LibreNMS server.
Now that we have nodelist.txt setup correctly, it is time to set the variables in all three shell scripts. We are going to start with convert.sh. Edit it with nano:
nano /tmp/convert.sh
and change the variables to suit your environment. Here is a quick list of them:
DEST - This should be the IP or hostname of your LibreNMS server
L_RRDPATH - This signifies the location of the LibreNMS RRD directory. The default value is the default install location
O_RRDPATH - Location of the Observium RRD directory. The default value is the default install location
MKDIR - Location of the mkdir.sh script
DESTSCRIPT - Location of the destwork.sh script
NODELIST - Location of the nodelist.txt file
Next, edit the destwork.sh script:
nano /tmp/destwork.sh
"},{"location":"Support/","title":"How to get Help","text":"
We now have support for polling data at intervals to fit your needs.
Please be aware of the following:
If you just want faster up/down alerts, Fast Ping is a much easier path to that goal.
You must also change your cron entry for poller-wrapper.py for this to work (if you change from the default 300 seconds).
Your polling MUST complete in the time you configure for the heartbeat step value. See /poller in your WebUI for your current value.
This will only affect RRD files created from the moment you change your settings.
This change will affect all data storage mechanisms such as MySQL, RRD and InfluxDB. If you decrease the values then please be aware of the increase in space use for MySQL and InfluxDB.
It's highly recommended to configure some performance optimizations. Keep in mind that all your devices will write all graphs every minute to the disk and that every device has many graphs. The most important thing is probably the RRDCached configuration that can save a lot of write IOPS.
To make the changes, please navigate to /settings/poller/rrdtool/ within your WebUI. Select RRDTool Setup and then update the two values for step and heartbeat intervals:
Step is how often you want to insert data, so if you change to 1 minute polling then this should be 60.
Heartbeat is how long to wait for data before registering a null value, e.g. 120 seconds.
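Assuming the rrd.step and rrd.heartbeat config keys on your version, the same two values can also be set from the CLI; for 1-minute polling that would be:

```shell
lnms config:set rrd.step 60        # insert data every 60 seconds
lnms config:set rrd.heartbeat 120  # register a null value after 120 seconds without data
```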
We provide a basic script to convert the default rrd files we generate to utilise your configured step and heartbeat values. Please do ensure that you backup your RRD files before running this just in case. The script runs on a per device basis or all devices at once.
The rrd files must be accessible from the server you run this script from.
./scripts/rrdstep.php
This will provide the help information. To run it for localhost just run:
Using the web interface, go to Devices and click Add Device. Enter the details required for the device that you want to add and then click 'Add Host'. As an example, if your device is configured to use the community my_company using snmp v2c then you would enter: SNMP Port defaults to 161.
By default the Hostname will be used for polling data. If you want to poll device data via a specific IP address (e.g. a management IP), fill out the optional field Overwrite IP with that IP address.
Using the command line via ssh you can add a new device by changing to the directory of your LibreNMS install and typing (be sure to put the correct details).
Please note that if the community contains special characters such as $ then you will need to wrap it in '. I.e: 'Pa$$w0rd'.
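Continuing the my_company example above, a CLI invocation might look like this; flags can differ between versions, so check lnms device:add --help before relying on them:

```shell
# device.example.com is a placeholder hostname
lnms device:add --v2c -c 'my_company' device.example.com
```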
"},{"location":"Support/Adding-a-Device/#ping-only-device","title":"Ping Only Device","text":"
You can add ping-only devices into LibreNMS through the WebUI or CLI. When adding the device, switch the SNMP button to "off". The device will be added into LibreNMS as a Ping Only Device and will show an ICMP Response Graph.
Hostname: IP address or DNS name.
Hardware: Optional; you can type in whatever you like.
OS: Optional; this will add the device's OS icon.
Via CLI this is done with ./lnms device:add [-P|--ping-only] yourhostname
A How-to video can be found here: How to add ping only devices
"},{"location":"Support/Adding-a-Device/#automatic-discovery-and-api","title":"Automatic Discovery and API","text":"
If you would like to add devices automatically then you will probably want to read the Auto-discovery Setup guide.
You may also want to add devices programmatically, if so, take a look at our API documentation
This script provides CLI access to the \"delete port\" function of the WebUI. This might come in handy when trying to clean up old ports after large changes within the network or when hacking on the poller/discovery functions.
LibreNMS Port purge tool\n-p port_id Purge single port by it's port-id\n-f file Purge a list of ports, read port-ids from _file_, one on each line\n A filename of - means reading from STDIN.\n
"},{"location":"Support/CLI-Tools/#querying-port-ids-from-the-database","title":"Querying port IDs from the database","text":"
One simple way to obtain port IDs is by querying the SQL database.
If you wanted to query all deleted ports from the database, you could do this with the following query:
echo 'SELECT port_id, hostname, ifDescr FROM ports, devices WHERE devices.device_id = ports.device_id AND deleted = 1' | mysql -h your_DB_server -u your_DB_user -p --skip-column-names your_DB_name\n
When you are sure that the list of ports is correct and you want to delete all of them, you can write the list into a file and call purge-ports.php with that file as input:
echo 'SELECT port_id FROM ports, devices WHERE devices.device_id = ports.device_id AND deleted = 1' | mysql -h your_DB_server -u your_DB_user -p --skip-column-names your_DB_name > ports_to_delete\n./purge-ports.php -f ports_to_delete\n
As the number of devices starts to grow in your LibreNMS install, so will things such as the RRD files, MySQL database containing eventlogs, Syslogs and performance data etc. Your LibreNMS install could become quite large so it becomes necessary to clean up those entries. With Cleanup Options, you can stay in control.
These options rely on daily.sh running from cron as per the installation instructions.
These options will ensure data within LibreNMS over X days old is automatically purged. You can alter these individually, values are in days.
NOTE: Please be aware that rrd_purge is NOT set by default. This option will remove any RRD files that have not been updated for the set amount of days automatically - only enable this if you are comfortable with that happening. (All active RRD files are updated every polling period.)
The config is stored in two places: Database: This applies to all pollers and can be set with either lnms config:set or in the Web UI. Database config takes precedence over config.php. config.php: This applies to the local poller only. Configs set here will be disabled in the Web UI to prevent unexpected behaviour.
The documentation has not been updated to reflect using lnms config:set to set config items, but it will work for all settings. Not all settings have been defined in LibreNMS, but they can still be set with the --ignore-checks option. Without that option, input is checked for correctness; note that this does not make it impossible to set bad values. Please report missing settings.
lnms config:get will fetch the current config settings (composite of database, config.php, and defaults). lnms config:set will set the config setting in the database. Calling lnms config:set on a setting with no value will reset it to the default value.
If you set up bash completion, you can use tab completion to find config settings.
"},{"location":"Support/Configuration/#getting-a-list-of-all-current-values","title":"Getting a list of all current values","text":"
To get a complete list of all the current values, you can use the command lnms config:get --dump. The output may not be desirable, so you can use the jq package to pretty print it. Then it would be lnms config:get --dump | jq.
This feature is primarily for docker images and other automation. When installing LibreNMS for the first time with a new database you can place yaml key value files in database/seeders/config to pre-populate the config database.
A lot of these are self-explanatory, so no further information is provided. Any extension that has a dedicated documentation page will be linked to rather than having its config described here.
timeout (fping parameter -t): Amount of time that fping waits for a response to its first request (in milliseconds). See note below
count (fping parameter -c): Number of request packets to send to each target.
interval (fping parameter -p): Time in milliseconds that fping waits between successive packets to an individual target.
tos (fping parameter -O): Set the type of service flag (TOS). Value can be either decimal or hexadecimal (0xh) format. Can be used to ensure that ping packets are queued in the appropriate QoS mechanisms in the network. The table is accessible on the TOS Wikipedia page.
NOTE: Setting a higher timeout value than the interval value can lead to slowing down poller. Example:
timeout: 3000
count: 3
interval: 500
In this example, interval will be overwritten by the timeout value of 3000, which is 3 seconds. As we send three ICMP packets (count: 3), each one is delayed by 3 seconds, which will result in fping taking > 6 seconds to return results.
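Assuming the fping_options.* config keys, a configuration that avoids this problem keeps the timeout at or below the interval, for example:

```shell
# Keep timeout <= interval so per-packet waits don't stretch the poll
lnms config:set fping_options.timeout 500
lnms config:set fping_options.count 3
lnms config:set fping_options.interval 500
```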
You can disable the fping / ICMP check used to determine whether a device is up, either globally or per device. We don't advise disabling the fping / ICMP check unless you understand the impact: at worst, if you have a large number of devices down, the poller may no longer complete within 5 minutes because it is waiting for SNMP to time out.
Globally disable fping / icmp check:
lnms config:set icmp_check false\n
If you would like to do this on a per device basis then you can do so under Device -> Edit -> Misc -> Disable ICMP Test? On
You can override a large number of visual elements by creating your own CSS stylesheet and referencing it here. Place any custom CSS files into html/css/custom so they are ignored by auto updates. You can specify as many CSS files as you like; the order they appear in your config is the order they are loaded in the browser.
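A sketch of adding one stylesheet, assuming the setting is an array named custom_css (the file name is a placeholder):

```shell
# Copy the stylesheet where auto updates will ignore it...
cp site.css /opt/librenms/html/css/custom/site.css
# ...then append it to the list (the ".+" suffix appends to an array setting)
lnms config:set custom_css.+ "css/custom/site.css"
```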
You can override the default logo with yours; place any custom image files into html/images/custom so they are ignored by auto updates.
lnms config:set page_refresh 300\n
Set how often pages are refreshed in seconds. The default is every 5 minutes. Some pages don't refresh at all by design.
lnms config:set front_page default\n
You can create your own front page by adding a blade file in resources/views/overview/custom/ and setting front_page to its name. For example, if you create resources/views/overview/custom/foobar.blade.php, set front_page to foobar.
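The two steps above can be sketched as follows (the page content is just a placeholder):

```shell
# Create a minimal custom front page...
cat > /opt/librenms/resources/views/overview/custom/foobar.blade.php <<'EOF'
<div class="container"><h1>Welcome to our NOC</h1></div>
EOF
# ...and point front_page at it, by file name without the .blade.php suffix
lnms config:set front_page foobar
```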
webui/dashboard
lnms config:set webui.default_dashboard_id 0\n
Allows the specification of a global default dashboard page for any user who has not set one in their user preferences. Should be set to the dashboard_id of an existing dashboard that is shared or shared(read). Otherwise, the system will automatically create an empty dashboard called Default for each user on their first login.
lnms config:set login_message \"Unauthorised access or use shall render the user liable to criminal and/or civil prosecution.\"\n
This is the default message on the login page displayed to users.
lnms config:set public_status true\n
If this is set to true, an overview of devices and their status will be shown on the login page.
lnms config:set show_locations true # Enable Locations on menu\nlnms config:set show_locations_dropdown true # Enable Locations dropdown on menu\nlnms config:set show_services false # Disable Services on menu\nlnms config:set int_customers true # Enable Customer Port Parsing\nlnms config:set summary_errors false # Show Errored ports in summary boxes on the dashboard\nlnms config:set customers_descr '[\"cust\"]' # The description to look for in ifDescr. Can have multiple '[\"cust\",\"cid\"]'\nlnms config:set transit_descr '[\"transit\"]' # Add custom transit descriptions (array)\nlnms config:set peering_descr '[\"peering\"]' # Add custom peering descriptions (array)\nlnms config:set core_descr '[\"core\"]' # Add custom core descriptions (array)\nlnms config:set custom_descr '[\"This is Custom\"]' # Add custom interface descriptions (array)\nlnms config:set int_transit true # Enable Transit Types\nlnms config:set int_peering true # Enable Peering Types\nlnms config:set int_core true # Enable Core Port Types\nlnms config:set int_l2tp false # Disable L2TP Port Types\n
Enable / disable certain menus from being shown in the WebUI.
You are able to adjust the number and time frames of the quick select time options for graphs and the mini graphs shown per row.
This is a simple template to control the display of device names by default. You can override this setting per-device.
You may enter any free-form text including one or more of the following template replacements:
Template Replacement {{ $hostname }} The hostname or IP of the device that was set when added *default {{ $sysName_fallback }} The hostname or sysName if hostname is an IP {{ $sysName }} The SNMP sysName of the device, falls back to hostname/IP if missing {{ $ip }} The actual polled IP of the device, will not display a hostname
For example, {{ $sysName_fallback }} ({{ $ip }}) will display something like server (192.168.1.1)
Interface types that aren't graphed in the WebUI. The default array contains more items, please see misc/config_definitions.json for the full list.
lnms config:set enable_clear_discovery true\n
Administrators are able to clear the last discovered time of a device which will force a full discovery run within the configured 5 minute cron window.
lnms config:set enable_footer true\n
Disable the footer of the WebUI by setting enable_footer to 0.
You can enable the old style network map (only available for individual devices with links discovered via xDP) by setting:
lnms config:set gui.network-map.style old\n
lnms config:set percentile_value 90\n
Show the Xth percentile in the graph instead of the default 95th percentile.
webui/graph
lnms config:set shorthost_target_length 15\n
The target maximum hostname length when applying the shorthost() function. You can increase this if you want to try and fit more of the hostname in graph titles. The default value is 12; however, very long values can break graph generation.
You can enable dynamic graphs within the WebUI under Global Settings -> Webui Settings -> Graph Settings.
Graphs will be movable/scalable without reloading the page:
You can enable stacked graphs instead of the default inverted graphs. Enabling them is possible via webui Global Settings -> Webui Settings -> Graph settings -> Use stacked graphs
The following setting controls how hosts are added. If a host is added by IP address, it is checked to ensure the IP is not already present; if the IP is present, the host is not added. If a host is added by hostname, this check is not performed by default. If the setting is true, hostnames are resolved and the check is performed as well. This helps prevent accidental duplicate hosts.
lnms config:set addhost_alwayscheckip false # true - check for duplicate ips even when adding host by name.\n # false- only check when adding host by ip.\n
By default we allow hosts to be added with duplicate sysName's, you can disable this with the following config:
discovery/general
lnms config:set allow_duplicate_sysName false\n
"},{"location":"Support/Configuration/#global-poller-and-discovery-modules","title":"Global poller and discovery modules","text":"
Enable or disable discovery or poller modules.
This setting has an order of precedence Device > OS > Global. So if the module is set at a more specific level, it will override the less specific settings.
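A sketch of the precedence chain; the module and OS names here are illustrative:

```shell
# Global: disable the OSPF poller module everywhere
lnms config:set poller_modules.ospf false
# OS level: re-enable it for IOS devices only (overrides the global setting)
lnms config:set os.ios.poller_modules.ospf true
# Device level overrides both: WebUI -> Device -> Settings -> Modules
```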
What type of mail transport to use for delivering emails. Valid options for email_backend are mail, sendmail or smtp. The varying options after that are to support the different transports.
Rancid configuration, rancid_configs is an array containing all of the locations of your rancid files. Setting rancid_ignorecomments will disable showing lines that start with #
Specify the location of the collectd rrd files. Note that the location in config.php should be consistent with the location set in /etc/collectd.conf and /etc/collectd.d/rrdtool.conf
Specify the location of the collectd unix socket. Using a socket allows the collectd graphs to be flushed to disk before being drawn. Be sure that your web server has permissions to write to this socket.
Next, it will attempt to look up the sysLocation with a map engine, provided you have configured one under $config['geoloc']['engine']. The information has to be accurate or no result will be returned. The lookup ignores any information inside parentheses, allowing you to add details that would otherwise interfere with it.
Example:
1100 Congress Ave, Austin, TX 78701 (3rd floor)\nGeocoding lookup is:\n1100 Congress Ave, Austin, TX 78701\n
If you just want to set GPS coordinates on a location, you should visit Devices > Geo Locations > All Locations and edit the coordinates there.
Exact Matching:
lnms config:set location_map '{\"Under the Sink\": \"Under The Sink, The Office, London, UK\"}'\n
Regex Matching:
lnms config:set location_map_regex '{\"/Sink/\": \"Under The Sink, The Office, London, UK\"}'\n
Regex Match Substitution:
lnms config:set location_map_regex_sub '{\"/Sink/\": \"Under The Sink, The Office, London, UK [lat, long]\"}'\n
If you have an SNMP SysLocation of \"Rack10,Rm-314,Sink\", Regex Match Substitution yields \"Rack10,Rm-314,Under The Sink, The Office, London, UK [lat, long]\". This allows you to keep the SysLocation string short and keeps Rack/Room/Building information intact after the substitution.
The above are examples, these will rewrite device snmp locations so you don't need to configure full location within snmp.
"},{"location":"Support/Configuration/#interfaces-to-be-ignored","title":"Interfaces to be ignored","text":"
Interfaces can be automatically ignored during discovery by modifying bad_if* entries in a default array, unsetting a default array and customizing it, or creating an OS specific array. The preferred method for ignoring interfaces is to use an OS specific array. The default arrays can be found in misc/config_definitions.json. OS specific definitions (includes/definitions/_specific_os_.yaml) can contain bad_if* arrays, but should only be modified via pull-request as manipulation of the definition files will block updating:
good_if is matched against the ifDescr value. It can contain a bad_if value as well, which would stop that port from being ignored; i.e., if bad_if and good_if both contain FastEthernet, then ports with this value in their ifDescr will be valid.
"},{"location":"Support/Configuration/#interfaces-to-be-rewritten","title":"Interfaces to be rewritten","text":"
Entries defined in rewrite_if are being replaced completely. Entries defined in rewrite_if_regexp only replace the match. Matches are compared case-insensitive.
"},{"location":"Support/Configuration/#entity-sensors-to-be-ignored","title":"Entity sensors to be ignored","text":"
Some devices register bogus sensors: they are returned via SNMP but either don't exist or just don't return data. This allows you to ignore those based on the descr field in the database, either globally or on a per-OS basis.
lnms config:set bad_entity_sensor_regex.+ '/Physical id [0-9]+/'\nlnms config:set os.ios.bad_entity_sensor_regex '[\"/Physical id [0-9]+/\"]'\n
Vendors may give some limit values (or thresholds) for the discovered sensors. By default, when no such value is given, both high and low limit values are guessed, based on the value measured during the initial discovery.
When it is preferred to have no high and/or low limit values at all if these are not provided by the vendor, the guess method can be disabled:
lnms config:set sensors.guess_limits false\n
"},{"location":"Support/Configuration/#ignoring-health-sensors","title":"Ignoring Health Sensors","text":"
It is possible to filter some sensors from the configuration:
Enable this to switch on support for libvirt along with libvirt_protocols to indicate how you connect to libvirt. You also need to:
Generate a non-password-protected ssh key for use by LibreNMS, as the user which runs polling & discovery (usually librenms).
On each VM host you wish to monitor:
Configure public key authentication from your LibreNMS server/poller by adding the librenms public key to ~root/.ssh/authorized_keys.
(xen+ssh only) Enable libvirtd to gather data from xend by setting (xend-unix-server yes) in /etc/xen/xend-config.sxp and restarting xend and libvirtd.
To test your setup, run virsh -c qemu+ssh://vmhost/system list or virsh -c xen+ssh://vmhost list as your librenms polling user.
LibreNMS has a standard for device sensors; they are split into categories. This doc is to help users understand device sensors in general. If you need help developing sensors for a device, please see the Contributing + Developing section.
The High and Low values of these sensors can be edited in the Web UI by going to Device Settings -> Health. There you can set your own custom High and Low values. A list of these sensors can be found here: Link
Note Some values are defined by the manufacturers and others are auto calculated when you add the device into LibreNMS. Keep in mind every environment is different and may require user input.
Some wireless sensors also have High and Low values that can be edited in the Web UI by going to Device Settings -> Wireless Sensors. There you can set your own custom High and Low values. A list of these sensors can be found here: Link
Note Some values are defined by the manufacturers and others are auto calculated when you add the device into LibreNMS. Keep in mind every environment is different and may require user input.
These alert rules can be found inside the Alert Rules Collection. The alert rules below are the default alert rules, there are more device-specific alert rules in the alerts collection.
Sensor Over Limit Alert Rule: Will alert on any sensor value that is over the limit.
Sensor Under Limit Alert Rule: Will alert on any sensor value that is under the limit.
Remember you can set these limits inside device settings in the Web UI.
State Sensor Critical: Will alert on any state that returns critical = 2
State Sensor Warning: Will alert on any state that returns warning = 1
Wireless Sensor Over Limit Alert Rule: Will alert on sensors that are listed in device settings under Wireless.
Wireless Sensor Under Limit Alert Rule: Will alert on sensors that are listed in device settings under Wireless.
You can use this feature to run Debug on Discovery, Poller, SNMP, Alerts. This output information could be helpful for you in troubleshooting a device or when requesting help.
This feature can be found by going to the device you are troubleshooting in the WebUI, clicking the settings icon menu on the far right and selecting Capture.
-h <device id> | <device hostname wildcard> Poll single device\n-h odd Poll odd numbered devices (same as -i 2 -n 0)\n-h even Poll even numbered devices (same as -i 2 -n 1)\n-h all Poll all devices\n-h new Poll all devices that have not had a discovery run before\n--os <os_name> Poll devices only with specified operating system\n--type <type> Poll devices only with specified type\n-i <instances> -n <number> Poll as instance <number> of <instances>\n Instances start at 0. 0-3 for -n 4\n\nDebugging and testing options:\n-d Enable debugging output\n-v Enable verbose debugging output\n-m Specify module(s) to be run. Comma separate modules, submodules may be added with /\n
-h Use this to specify a device via either id or hostname (including wildcards using *). You can also specify odd and even. all will run discovery against all devices, whilst new will run it only against devices that have recently been added or have been selected for rediscovery.
-i This can be used to stagger the discovery process.
-d Enables debugging output (verbose output but with most sensitive data masked) so that you can see what is happening during a discovery run. This includes things like rrd updates, SQL queries and response from snmp.
-v Enables verbose debugging output with all data intact.
-m This enables you to specify the module you want to run for discovery.
We have a discovery-wrapper.py script which is based on poller-wrapper.py by Job Snijders. This script is currently the default.
If you need to debug the output of discovery-wrapper.py then you can add -d to the end of the command - it is NOT recommended to do this in cron.
You also may use -m to pass a list of comma-separated modules. Please refer to Command options of discovery.php. Example: /opt/librenms/discovery-wrapper.py 1 -m bgp-peers
If you want to switch back to discovery.php then you can replace:
These are the default discovery config items. You can globally disable a module by setting it to 0. If you just want to disable it for one device then you can do this within the WebUI -> Device -> Settings -> Modules.
"},{"location":"Support/Discovery%20Support/#os-based-discovery-config","title":"OS based Discovery config","text":"
You can enable or disable modules for a specific OS by using lnms config:set. OS-based settings take precedence over global settings; device-based settings take precedence over all others.
Discovery performance can be improved by deactivating modules that are not supported by a specific OS.
For example, to deactivate the spanning tree module but activate the discovery-arp module for the linux OS:
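A sketch of those two settings (the spanning tree module key is assumed to be stp):

```shell
# Disable the spanning tree discovery module for the linux OS...
lnms config:set os.linux.discovery_modules.stp false
# ...and enable ARP-based auto discovery for it
lnms config:set os.linux.discovery_modules.discovery-arp true
```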
os: Os detection. This module will pick up the OS of the device.
ports: This module will detect all ports on a device excluding ones configured to be ignored by config options.
ports-stack: Same as ports except for stacks.
xdsl: Module to collect more metrics for xDSL interfaces.
entity-physical: Module to pick up the device's hardware inventory.
processors: Processor support for devices.
mempools: Memory detection support for devices.
cisco-vrf-lite: VRF-Lite detection and support.
ipv4-addresses: IPv4 Address detection
ipv6-addresses: IPv6 Address detection
route: This module will load the routing table of the device. The default route limit is 1000 (configurable with lnms config:set routes.max_number 1000), with history data.
sensors: Sensor detection such as Temperature, Humidity, Voltages + More
storage: Storage detection for hard disks
hr-device: Processor and Memory support via HOST-RESOURCES-MIB.
discovery-protocols: Auto discovery module for xDP, OSPF and BGP.
arp-table: Detection of the ARP table for the device.
fdb-table: Detection of the Forwarding DataBase table for the device, with history data.
discovery-arp: Auto discovery via ARP.
junose-atm-vp: Juniper ATM support.
bgp-peers: BGP detection and support.
vlans: VLAN detection and support.
cisco-mac-accounting: MAC Address account support.
cisco-pw: Pseudowire detection and support.
vrf: VRF detection and support.
cisco-cef: CEF detection and support.
slas: SLA detection and support.
vminfo: Detection of vm guests for VMware ESXi and libvirt
To provide debugging output you will need to run the discovery process with the -d flag. You can do this against all modules, or against single or multiple modules:
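For example (the hostname is a placeholder):

```shell
# Debug a full discovery run for one device
./discovery.php -h router1.example.com -d
# Restrict the run to one or more modules (comma separated)
./discovery.php -h router1.example.com -d -m ports,arp-table
```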
Using -d shouldn't output much sensitive information, but -v will, so it is advisable to sanitise that output before pasting it anywhere, as it will contain SNMP details amongst other items, including port descriptions.
The information in this document comes directly from users; it's a place for people to share their setups so you have an idea of what may be required for your install.
To obtain the device, port and sensor counts you can run:
select count(*) from devices;\nselect count(*) from ports where `deleted` = 0;\nselect count(*) from sensors where `sensor_deleted` = 0;\n
LibreNMS MySQL Type Virtual Virtual OS CentOS 7 CentOS 7 CPU 2 Sockets, 4 Cores 1 Socket, 2 Cores Memory 2GB 2GB Disk Type Raid 1, SSD Raid 1, SSD Disk Space 18GB 30GB Devices 20 - Ports 133 - Health sensors 47 - Load < 0.1 < 0.1"},{"location":"Support/Example-Hardware-Setup/#vente-privee","title":"Vente-Priv\u00e9e","text":"
NOC
LibreNMS MariaDB Type Dell R430 Dell R430 OS Debian 7 (dotdeb) Debian 7 (dotdeb) CPU 2 Sockets, 14 Cores 1 Socket, 2 Cores Memory 256GB 256GB Disk Type Raid 10, SSD Raid 10, SSD Disk Space 1TB 1TB Devices 1028 - Ports 26745 - Health sensors 6238 - Load < 0.5 < 0.5"},{"location":"Support/Example-Hardware-Setup/#kkrumm","title":"KKrumm","text":"
Home
LibreNMS MySQL Type VM Same Server OS CentOS 7 CPU 2 Sockets, 4 Cores Memory 4GB Disk Type Raid 10, SAS Drives Disk Space 40 GB Devices 12 Ports 130 Health sensors 44 Load < 2.5"},{"location":"Support/Example-Hardware-Setup/#kkrumm_1","title":"KKrumm","text":"
Work
LibreNMS MySQL Type HP Proliantdl380gen8 Same Server OS CentOS 7 CPU 2 Sockets, 24 Cores Memory 32GB Disk Type Raid 10, SAS Drives Disk Space 250 GB Devices 390 Ports 16167 Health sensors 3223 Load < 14.5"},{"location":"Support/Example-Hardware-Setup/#cppmonkeykodapa85","title":"CppMonkey(KodApa85)","text":"
Home
LibreNMS MariaDB Type i5-4690K Same Workstation OS Ubuntu 18.04.2 CPU 4 Cores Memory 16GB Disk Type Hybrid SATA Disk Space 2 TB Devices 14 Ports 0 Health sensors 70 Load < 0.5"},{"location":"Support/Example-Hardware-Setup/#cppmonkeykodapa85_1","title":"CppMonkey(KodApa85)","text":"
Dev
Running in Ganeti
LibreNMS MariaDB Type VM Same VM OS CentOS 7.5 CPU 2 Cores Memory 4GB Disk Type M.2 Disk Space 40 GB Devices 38 Ports 1583 Health sensors 884 Load < 1.0"},{"location":"Support/Example-Hardware-Setup/#cppmonkeykodapa85_2","title":"CppMonkey(KodApa85)","text":"
Work NOC
Running in Ganeti Cluster with 2x Dell PER730xd - 64GB, Dual E5-2660 v3
LibreNMS MariaDB Type VM VM OS Debian Stretch Debian Stretch CPU 4 Cores 2 Cores Memory 8GB 4GB Disk Type Raid 6, SAS Drives Disk Space 100 GB 40GB Devices 179 Ports 14495 Health sensors 2329 Load < 2.5 < 1.5"},{"location":"Support/Example-Hardware-Setup/#lazydk","title":"LaZyDK","text":"
Home
LibreNMS MariaDB Type VM - QNAP TS-453 Pro Same Server OS Ubuntu 16.04 CPU 1 vCore Memory 2GB Disk Type Raid 1, SATA Drives Disk Space 10 GB Devices 26 Ports 228 Health sensors 117 Load < 0.92"},{"location":"Support/Example-Hardware-Setup/#sirmaple","title":"SirMaple","text":"
Home
LibreNMS MariaDB Type VM Same Server OS Debian 11 CPU 4 vCore Memory 4GB Disk Type Raid 1, SSD Disk Space 50 GB Devices 41 Ports 317 Health sensors 243 Load < 3.15"},{"location":"Support/Example-Hardware-Setup/#vvelox","title":"VVelox","text":"
Home / Dev
LibreNMS MariaDB Type Supermicro X7SPA-HF Same Server OS FreeBSD 12-STABLE CPU Intel Atom D525 Memory 4GB Disk Type Raid 1, SATA Disk Space 1TB Devices 17 Ports 174 Health sensors 76 Load < 3"},{"location":"Support/Example-Hardware-Setup/#sourcedoctor","title":"SourceDoctor","text":"
Home / Dev
Running in VMWare Workstation Pro
LibreNMS MariaDB Type VM Same Server OS Debian Buster CPU 2 vCore Memory 2GB Disk Type Raid 5, SSD Disk Space 20GB Devices 35 Ports 245 Health sensors 101 Load < 1"},{"location":"Support/Example-Hardware-Setup/#lazyb0nes","title":"lazyb0nes","text":"
Lab
LibreNMS MariaDB Type VM Same Server OS RHEL 7.7 CPU 32 cores Memory 64GB Disk Type Flash San Array Disk Space 400GB Devices 670 Ports 25678 Health sensors 2457 Load 10.92"},{"location":"Support/Example-Hardware-Setup/#dagb","title":"dagb","text":"
Work
Running in VMware.
LibreNMS MariaDB Type Virtual Same Server OS CentOS 7 CPU 12 Cores Xeon 6130 Memory 8GB Disk Type SAN (SSD) Disk Space 26GB/72GB/7GB (logs/RRDs/db) Devices 650 Ports 34300 Health sensors 10500 Load 5.5 (45%)"},{"location":"Support/FAQ/","title":"FAQ","text":""},{"location":"Support/FAQ/#getting-started","title":"Getting started","text":""},{"location":"Support/FAQ/#how-do-i-install-librenms","title":"How do I install LibreNMS?","text":"
This is currently well documented within the doc folder of the installation files.
Please see the following doc
"},{"location":"Support/FAQ/#how-do-i-add-a-device","title":"How do I add a device?","text":"
You have two options for adding a new device into LibreNMS.
1: Using the command line via ssh you can add a new device by changing to the directory of your LibreNMS install and typing:
lnms device:add [hostname or ip]\n
To see all options run: lnms device:add -h
Please note that if the community contains special characters such as $, you will need to wrap it in single quotes, e.g. 'Pa$$w0rd'.
2: Using the web interface, go to Devices and then Add Device. Enter the details required for the device that you want to add and then click 'Add Host'.
"},{"location":"Support/FAQ/#how-do-i-get-help","title":"How do I get help?","text":"
Getting Help
"},{"location":"Support/FAQ/#what-are-the-supported-oses-for-installing-librenms-on","title":"What are the supported OSes for installing LibreNMS on?","text":"
Supported is quite a strong word :) The 'officially' supported distros are:
Ubuntu / Debian
Red Hat / CentOS
Gentoo
However, we will always aim to help wherever possible, so if you are running a distro that isn't one of the above, give it a try anyway; if you need help, jump on the Discord server.
"},{"location":"Support/FAQ/#do-you-have-a-demo-available","title":"Do you have a demo available?","text":"
We do indeed, you can find access to the demo here
"},{"location":"Support/FAQ/#support","title":"Support","text":""},{"location":"Support/FAQ/#how-does-librenms-use-mibs","title":"How does LibreNMS use MIBs?","text":"
LibreNMS does not parse MIBs to discover sensors for devices. LibreNMS uses static discovery definitions written in YAML or PHP. Therefore, updating a MIB alone will not improve OS support, the definitions must be updated. LibreNMS only uses MIBs to make OIDs easier to read.
"},{"location":"Support/FAQ/#why-do-i-get-blank-pages-sometimes-in-the-webui","title":"Why do I get blank pages sometimes in the WebUI?","text":"
You can enable debug information by setting APP_DEBUG=true in your .env. (Do not leave this enabled, it could leak private data)
If the page you are trying to load has a substantial amount of data in it then it could be that the php memory limit needs to be increased in config.php.
"},{"location":"Support/FAQ/#why-do-i-not-see-any-graphs","title":"Why do I not see any graphs?","text":"
The easiest way to check if all is well is to run ./validate.php as librenms from within your install directory. This should give you info on why things aren't working.
One other reason could be a restricted snmpd.conf file or snmp view which limits the data sent back. If you use net-snmp then we suggest using the included snmpd.conf file.
"},{"location":"Support/FAQ/#how-do-i-debug-pages-not-loading-correctly","title":"How do I debug pages not loading correctly?","text":"
A debug system is in place which enables you to see the output from php errors, warnings and notices along with the MySQL queries that have been run for that page.
You can enable debug information by setting APP_DEBUG=true in your .env. (Do not leave this enabled, it could leak private data) To see additional information, run ./scripts/composer_wrapper.php install, to install additional debug tools. This will add a debug bar at the bottom of every page that will show you detailed debug information.
"},{"location":"Support/FAQ/#how-do-i-debug-the-discovery-process","title":"How do I debug the discovery process?","text":"
Please see the Discovery Support document for further details.
"},{"location":"Support/FAQ/#how-do-i-debug-the-poller-process","title":"How do I debug the poller process?","text":"
Please see the Poller Support document for further details.
"},{"location":"Support/FAQ/#why-do-i-get-a-lot-apache-or-rrdtool-zombies-in-my-process-list","title":"Why do I get a lot apache or rrdtool zombies in my process list?","text":"
If this is related to your web service for LibreNMS then this has been tracked down to an issue within php which the developers aren't fixing. We have implemented a work around which means you shouldn't be seeing this. If you are, please report this in issue 443.
"},{"location":"Support/FAQ/#why-do-i-see-traffic-spikes-in-my-graphs","title":"Why do I see traffic spikes in my graphs?","text":"
This occurs either when a counter resets or the device sends back bogus data making it look like a counter reset. We have enabled support for setting a maximum value for rrd files for ports.
Before this, all rrd files were set to a 100G max value; now you can enable support to limit this to the actual port speed.
rrdtool tune will change the max value when the interface speed is detected as being changed (min value will be set for anything 10M or over) or when you run the included script (./scripts/tune_port.php) - see RRDTune doc
SNMP ifInOctets and ifOutOctets are counters, which means they start at 0 (at device boot) and count up from there. LibreNMS records the value every 5 minutes and uses the difference between the previous value and the current value to calculate rate. (Also, this value resets to 0 when it hits the max value)
Now, when the value is not recorded for a while, RRD (our time series storage) does not record a 0; it records the last value (otherwise there would be even worse problems). Then finally we get the current ifIn/OutOctets value and record it. Now it appears as though all of the traffic since it stopped getting values occurred in the last 5-minute interval.
So whenever you see spikes like this, it means we have not received data from the device for several polling intervals. The cause can vary quite a bit: bad snmp implementations, intermittent network connectivity, broken poller, and more.
"},{"location":"Support/FAQ/#why-do-i-see-gaps-in-my-graphs","title":"Why do I see gaps in my graphs?","text":"
This is most commonly due to the poller not being able to complete its run within 300 seconds. Check which devices are causing this by going to /poll-log/ within the Web interface.
When you find the device(s) taking the longest, look at the polling module graph under Graphs -> Poller -> Poller Modules Performance. Take a look at which modules are taking the longest and disable unused modules.
If you poll a large number of devices / ports then it's recommended to run a local recursive dns server such as pdns-recursor.
Running RRDCached is also highly advised in larger installs but has benefits no matter the size.
"},{"location":"Support/FAQ/#how-do-i-change-the-ip-hostname-of-a-device","title":"How do I change the IP / hostname of a device?","text":"
There is a host rename tool called renamehost.php in your librenms root directory. Renaming also changes the IP / hostname used to monitor the device.
Usage:
./renamehost.php <old hostname> <new hostname>\n
You can also rename a device in the Web UI by going to the device, then clicking settings Icon -> Edit.
"},{"location":"Support/FAQ/#my-device-doesnt-finish-polling-within-300-seconds","title":"My device doesn't finish polling within 300 seconds","text":"
We have a few things you can try:
Disable unnecessary polling modules under edit device.
Set a max repeater value within the snmp settings for a device. What to set this to is tricky; you really should run an snmpbulkwalk with -Cr10 through -Cr50 to see what works best. 50 is usually a good choice if the device can cope.
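The max-repeater test above can be sketched like this (hostname and community are placeholders):

```shell
# Time a bulk walk of the interface table at increasing max-repetitions
# values; pick the largest -Cr the device handles without errors.
for r in 10 20 30 40 50; do
    echo "== -Cr$r =="
    time snmpbulkwalk -v2c -c public -Cr$r router1.example.com IF-MIB::ifTable > /dev/null
done
```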
"},{"location":"Support/FAQ/#things-arent-working-correctly","title":"Things aren't working correctly?","text":"
Run ./validate.php as librenms from within your install.
Re-run ./validate.php once you've resolved any issues raised.
You have an odd issue - we'd suggest you join our discord server to discuss.
"},{"location":"Support/FAQ/#what-do-the-values-mean-in-my-graphs","title":"What do the values mean in my graphs?","text":"
The values you see are reported as metric values. Thanks to a post on Reddit here are those values:
10^-18 a - atto\n10^-15 f - femto\n10^-12 p - pico\n10^-9 n - nano\n10^-6 u - micro\n10^-3 m - milli\n0 (no unit)\n10^3 k - kilo\n10^6 M - mega\n10^9 G - giga\n10^12 T - tera\n10^15 P - peta\n
"},{"location":"Support/FAQ/#why-does-a-device-show-as-a-warning","title":"Why does a device show as a warning?","text":"
This is indicating that the device has rebooted within the last 24 hours (by default). If you want to adjust this threshold then you can do so by setting $config['uptime_warning'] = '86400'; in config.php. The value must be in seconds.
"},{"location":"Support/FAQ/#why-do-i-not-see-all-interfaces-in-the-overall-traffic-graph-for-a-device","title":"Why do I not see all interfaces in the Overall traffic graph for a device?","text":"
By default numerous interface types and interface descriptions are excluded from this graph. The excluded defaults are:
"},{"location":"Support/FAQ/#how-do-i-migrate-my-librenms-install-to-another-server","title":"How do I migrate my LibreNMS install to another server?","text":"
If you are moving from one CPU architecture to another then you will need to dump the rrd files and re-create them. If you are in this scenario then you can use Dan Brown's migration scripts.
If you are just moving to another server with the same CPU architecture then the following steps should be all that's needed:
Install LibreNMS as per our normal documentation; you don't need to run through the web installer or building the sql schema.
Stop cron by commenting out all lines in /etc/cron.d/librenms
Dump the MySQL database librenms from your old server (mysqldump librenms -u root -p > librenms.sql)...
and import it into your new server (mysql -u root -p librenms < librenms.sql).
Copy the rrd/ folder to the new server.
Copy the .env and config.php files to the new server.
Check for modified files (eg specific os, ...) with git status and migrate them.
Ensure ownership of the copied files and folders (substitute your user if necessary) - chown -R librenms:librenms /opt/librenms
Delete old pollers on the GUI (gear icon --> Pollers --> Pollers)
Validate your installation (/opt/librenms/validate.php)
Re-enable cron by uncommenting all lines in /etc/cron.d/librenms
"},{"location":"Support/FAQ/#why-is-my-edgerouter-device-not-detected","title":"Why is my EdgeRouter device not detected?","text":"
If you have service snmp description set in your config then this will be why; please remove it. For some reason Ubnt have decided that setting this value should override the returned sysDescr value, which breaks our detection.
If you don't have that set, then this may be due to an update of EdgeOS or a new device type; please create an issue.
"},{"location":"Support/FAQ/#why-are-some-of-my-disks-not-showing","title":"Why are some of my disks not showing?","text":"
If you are monitoring a Linux server, net-snmp doesn't always expose all disks via hrStorage (HOST-RESOURCES-MIB). We have additional support which will retrieve disks via dskTable (UCD-SNMP-MIB). To expose these disks you need to add additional config to your snmpd.conf file. For example, to expose /dev/sda1, which may be mounted as /storage, you can specify:
disk /dev/sda1
Or
disk /storage
Restart snmpd and LibreNMS should populate the additional disk after a fresh discovery.
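Putting the above together, a minimal snmpd.conf sketch (the mount point and the optional free-space threshold are examples; includeAllDisks is an alternative net-snmp directive that exposes every mounted filesystem at once):

```shell
# /etc/snmp/snmpd.conf — expose disks via dskTable (UCD-SNMP-MIB)
# a single mount point, warning below 10% free (threshold is optional)
disk /storage 10%
# or expose every mounted filesystem in one line:
# includeAllDisks 10%
```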
"},{"location":"Support/FAQ/#why-are-my-disks-reporting-an-incorrect-size","title":"Why are my disks reporting an incorrect size?","text":"
There is a known issue with net-snmp which causes it to report incorrect disk size and disk usage when the size of the disk (or RAID) is larger than 16TB. A workaround has been implemented, but it is not active on CentOS 6.8 by default because it breaks the official SNMP specs and as such could cause unexpected behaviour in other SNMP tools. You can activate the workaround by adding the following to /etc/snmp/snmpd.conf:
realStorageUnits 0
"},{"location":"Support/FAQ/#what-does-mean-ignore-alert-tag-on-device-component-service-and-port","title":"What does mean \\\"ignore alert tag\\\" on device, component, service and port?","text":"
Tag a device, component, service or port to ignore alerts. Alert checks will still run; however, the ignore tag can be read in alert rules. For example, on a device, if a devices.ignore = 0 or macros.device = 1 condition is set and the ignore alert tag is on, the alert rule won't match. The alert rule is ignored.
"},{"location":"Support/FAQ/#how-do-i-clean-up-alerts-from-my-switches-and-routers-about-ports-being-down-or-changing-speed","title":"How do I clean up alerts from my switches and routers about ports being down or changing speed","text":"
Some properties used for alerting (ending in _prev) are only updated when a change is detected, and not every time the poller runs. This means that if you make a permanent change to your network, such as removing a device, performing a major firmware upgrade, or downgrading a WAN connection, you may be stuck with some unresolvable alerts.
If a port will be permanently down, it's best practice to configure it as administratively down on the device to prevent malicious access. You can then only run alerts on ports with ifAdminStatus = up. Otherwise, you'll need to reset the device port state history.
On the device generating alerts, use the cog button to go to the edit device page. At the top of the device settings pane is a button labelled Reset Port State - this will clear the historic state for all ports on that device, allowing any active alerts to clear.
"},{"location":"Support/FAQ/#why-cant-normal-and-global-view-users-see-oxidized","title":"Why can't Normal and Global View users see Oxidized?","text":"
Configs can often contain sensitive data. Because of that only global admins can see configs.
"},{"location":"Support/FAQ/#what-is-the-demo-user-for","title":"What is the Demo User for?","text":"
Demo users have full access, except that they can't add or edit users, delete devices, or change passwords.
"},{"location":"Support/FAQ/#why-does-modifying-default-alert-template-fail","title":"Why does modifying 'Default Alert Template' fail?","text":"
This template's entry could be missing in the database. Please run this from the LibreNMS directory:
"},{"location":"Support/FAQ/#why-would-alert-un-mute-itself","title":"Why would alert un-mute itself?","text":"
If an alert un-mutes itself, it most likely means that the alert cleared and was then triggered again. Please review the eventlog, as it will tell you what happened.
"},{"location":"Support/FAQ/#how-do-i-change-the-device-type","title":"How do I change the Device Type?","text":"
You can change the Device Type by going to the device you would like to change, then click on the Gear Icon -> Edit. If you would like to define custom types, we suggest using Device Groups. They will be listed in the menu similarly to device types.
"},{"location":"Support/FAQ/#editing-large-device-groups-gives-error-messages","title":"Editing large device groups gives error messages","text":"
If the device group contains a large number of devices, editing it from the UI might cause errors on the form even when all the data seems correct. This is caused by PHP's max_input_vars variable. You should be able to confirm that this is the case by inspecting PHP's error logs.
With the basic installation on Ubuntu 22.04 LTS with Nginx and PHP 8.1 FPM this value can be tuned by editing the file /etc/php/8.1/fpm/php.ini and adjusting the value of max_input_vars to be at least the size of the large group. In larger installations a value such as 10000 should suffice.
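As a sketch (the path matches the Ubuntu 22.04 / PHP 8.1 FPM layout described above; the 10000 value and the service name are examples to adapt to your install):

```shell
# raise max_input_vars in the FPM php.ini, then restart PHP-FPM
sed -i 's/^;*\s*max_input_vars\s*=.*/max_input_vars = 10000/' /etc/php/8.1/fpm/php.ini
systemctl restart php8.1-fpm
```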
"},{"location":"Support/FAQ/#where-do-i-update-my-database-credentials","title":"Where do I update my database credentials?","text":"
If you've changed your database credentials then you will need to update LibreNMS with those new details. Please edit .env
"},{"location":"Support/FAQ/#my-reverse-proxy-is-not-working","title":"My reverse proxy is not working","text":"
Make sure your proxy is passing the proper variables. At a minimum: X-Forwarded-For and X-Forwarded-Proto (X-Forwarded-Port if needed)
You also need to set the proxy or proxies as trusted
If you are using a subdirectory on the reverse proxy and not on the actual web server, you may need to set APP_URL and $config['base_url'].
"},{"location":"Support/FAQ/#my-alerts-arent-being-delivered-on-time","title":"My alerts aren't being delivered on time","text":"
If you're running MySQL/MariaDB on a separate machine or container make sure the timezone is set properly on both the LibreNMS and MySQL/MariaDB instance. Alerts will be delivered according to MySQL/MariaDB's time, so a mismatch between the two can cause alerts to be delivered late if LibreNMS is on a timezone later than MySQL/MariaDB.
You should probably have a look in the documentation concerning the new template syntax. Since version 1.42, syntax changed, and you basically need to convert your templates to this new syntax (including the titles).
"},{"location":"Support/FAQ/#how-do-i-use-trend-prediction-in-graphs","title":"How do I use trend prediction in graphs","text":"
As of version 1.55, a new feature has been added where you can view a simple linear prediction in port graphs.
It doesn't work on non-port graphs or consolidated graphs at the time this FAQ entry was written.
To view a prediction:
Click on any port graph of any network device
Select a From date to your liking (not earlier than the device was actually added to LNMS), and then select a future date in the To field.
Click update
You should now see a linear prediction line on the graph.
"},{"location":"Support/FAQ/#how-do-i-move-only-the-db-to-another-server","title":"How do I move only the DB to another server?","text":"
There is already a reference on how to move your whole LNMS installation to another server, but the following steps will help you split up an \"All-in-one\" installation into one LibreNMS installation with a separate database install. Note: This section assumes you have a MySQL/MariaDB instance
Stop the apache and mysql services in your LibreNMS installation.
Edit out all the cron entries in /etc/cron.d/librenms.
Dump your librenms database on your current install by issuing mysqldump librenms -u root -p > librenms.sql.
Stop and disable the MySQL server on your current install.
On your new server make sure you create a new database with the standard install command, no need to add a user for localhost though.
Copy this over to your new database server and import it with mysql -u root -p librenms < librenms.sql.
Enter mysql and add permissions with the following commands:
GRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'IP_OF_YOUR_LNMS_SERVER' IDENTIFIED BY 'PASSWORD' WITH GRANT OPTION;\nGRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'FQDN_OF_YOUR_LNMS_SERVER' IDENTIFIED BY 'PASSWORD' WITH GRANT OPTION;\nFLUSH PRIVILEGES;\nexit;\n
Enable and restart MySQL server.
Edit your config.php file to point the install to the new database server location.
Very important: On your LibreNMS server, inside your install directory, is a .env file; in it you need to edit the DBHOST parameter to point to your new server location.
After all this is done, enable all the cron entries again and start apache.
"},{"location":"Support/FAQ/#what-are-the-optional-requirements-message-when-i-add-snmpv3-devices","title":"What are the \"optional requirements message\" when I add SNMPv3 devices?","text":"
When you add a device via the WebUI you may see a little message stating \"Optional requirements are not met so some options are disabled\". Do not panic. This simply means your system does not contain openssl >= 1.1 and net-snmp >= 5.8, which are the minimum specifications needed to be able to use SHA-224|256|384|512 as auth algorithms. For crypto algorithms AES-192, AES-256 you need net-snmp compiled with --enable-blumenthal-aes.
"},{"location":"Support/FAQ/#developing","title":"Developing","text":""},{"location":"Support/FAQ/#how-do-i-add-support-for-a-new-os","title":"How do I add support for a new OS?","text":"
Please see Supporting a new OS if you are adding all the support yourself, i.e. writing all of the supporting code. If you are only able to supply supporting info, and would like the help of others to write up the code, please follow the below steps.
"},{"location":"Support/FAQ/#what-information-do-you-need-to-add-a-new-os","title":"What information do you need to add a new OS?","text":"
Please open a feature request in the community forum and provide the output of Discovery, Poller, and Snmpwalk as separate non-expiring https://p.libren.ms/ links :
Preferably use the command line to obtain the information, especially if snmpwalk results in a large amount of data. Replace the relevant information in these commands, such as HOSTNAME and COMMUNITY. Use snmpwalk instead of snmpbulkwalk for v1 devices.
These commands will automatically upload the data to LibreNMS servers.
You can use the links provided by these commands within the community post.
If possible, please also provide what the OS name should be if it doesn't already exist, as well as any useful links (MIBs from the vendor, logo, etc.)
"},{"location":"Support/FAQ/#what-can-i-do-to-help","title":"What can I do to help?","text":"
Thanks for asking, sometimes it's not quite so obvious and everyone can contribute something different. So here are some ways you can help LibreNMS improve.
Code. This is a big thing. We want this community to grow by the software developing and evolving to cater for users' needs. The biggest area where people can help make this happen is by providing code support. This doesn't necessarily mean contributing code for discovering a new device:
Web UI, a new look and feel has been adopted but we are not finished by any stretch of the imagination. Make suggestions, find and fix bugs, update the design / layout.
Poller / Discovery code. Improving it (we think a lot can be done to speed things up), adding new device support and updating old ones.
The LibreNMS main website; this is hosted on GitHub like the main repo and we accept user contributions here as well :)
Hardware. We don't physically need it but if we are to add device support, it's made a whole lot easier with access to the kit via SNMP.
If you've got MIBs, they are handy as well :)
If you know the vendor and can get permission to use logos that's also great.
Bugs. Found one? We want to know about it. Most bugs are fixed after being spotted and reported by someone, I'd love to say we are amazing developers and will fix all bugs before you spot them but that's just not true.
Feature requests. Can't code / won't code. No worries, chuck a feature request into our community forum with enough detail and someone will take a look. A lot of the time this might be what interests someone, they need the same feature or they just have time. Please be patient, everyone who contributes does so in their own time.
Documentation. Documentation can always be improved and every little bit helps. Not all features are currently documented or documented well, there's spelling mistakes etc. It's very easy to submit updates through the GitHub website, no git experience needed.
Be nice, this is the foundation of this project. We expect everyone to be nice. People will fall out, people will disagree but please do it so in a respectable way.
Ask questions. Sometimes just by asking questions you prompt deeper conversations that can lead us to somewhere amazing so please never be afraid to ask a question.
"},{"location":"Support/FAQ/#how-can-i-test-another-users-branch","title":"How can I test another users branch?","text":"
LibreNMS can be, and is, developed by anyone; this means someone may be working on a new feature or support for a device that you want. It can be helpful for others to test these new features, and Git makes this easy.
cd /opt/librenms\n
Firstly ensure that your current branch is in good state:
git status\n
If you see nothing to commit, working directory clean then let's go for it :)
Let's say that you want to test a user's (f0o) new development branch (issue-1337); then you can do the following:
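A sketch of the Git workflow (the fork URL is hypothetical — substitute the user's actual repository):

```shell
cd /opt/librenms
# add the user's fork as a remote (hypothetical URL) and fetch it
git remote add f0o https://github.com/f0o/librenms.git
git fetch f0o
# check out their branch locally for testing
git checkout -b issue-1337 f0o/issue-1337
# when done testing, switch back to the official branch
git checkout master
```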
With so many configuration possibilities, it's not uncommon for mistakes to be made when manually editing config.php. It's also impossible to validate user input in config.php when you're just using a text editor :)
So, to try and help with some of the general issues people come across we've put together a simple validation tool which at present will:
Validates config.php from a PHP perspective, including whitespace where it shouldn't be.
Connects to your MySQL server to verify credentials.
Checks if you are running the older alerting system.
Checks your rrd directory setup if not running rrdcached.
Checks disk space where /opt/librenms is installed.
Checks the location of fping.
Tests whether MySQL strict mode is enabled.
Tests for files not owned by the librenms user (if configured).
Optionally you can also pass -m and a module name for that to be tested. Current modules are:
mail - This will validate your mail transport configuration.
dist-poller - This will test your distributed poller configuration.
rrdcheck - This will test your rrd files to see if they are unreadable or corrupted (source of broken graphs).
You can run validate.php as root by executing ./validate.php within your install directory.
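For example, to run the base checks and then the optional mail module test from the list above:

```shell
cd /opt/librenms
./validate.php            # base checks
./validate.php -m mail    # additionally validate the mail transport config
```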
The output will provide you either a clean bill of health or a list of things you need to fix:
OK - This is a good thing, you can skip over these :)
WARN - You probably want to check this out.
FAIL - This is going to need your attention!
"},{"location":"Support/Install%20Validation/#validate-from-the-webui","title":"Validate from the WebUI","text":"
You can validate your LibreNMS install from the WebUI, using the nav bar and clicking on the little Gear Icon -> Validate Config.
After MySQL has been running for 24 hours, it's advisable to run MySQL Tuner, which will make suggestions on things you can change specific to your setup.
One recommendation we can make is that you set the following in my.cnf under a [mysqld] group:
innodb_flush_log_at_trx_commit = 0\n
You can also set this to 2. With either setting you could lose up to 1 second of MySQL data in the event that MySQL or your server crashes, but it provides an amazing difference in IO use.
Review the graph of poller module time taken under gear > pollers > performance to see which modules are consuming poller time. This data is shown per device under device > graphs > poller.
Disable polling (and discovery) modules that you do not need. You can do this globally in config.php like:
Disable OSPF polling
poller/poller_modules
lnms config:set poller_modules.ospf false\n
You can disable modules globally and then re-enable them per device, or the other way around. For a list of modules please see Poller modules
"},{"location":"Support/Performance/#snmp-max-repeaters","title":"SNMP Max Repeaters","text":"
We have support for SNMP max repeaters, which can be handy on devices where we poll a lot of ports or BGP sessions, for instance, and where snmpwalk or snmpbulkwalk is used. This needs to be enabled on a per-device basis under edit device -> snmp -> Max repeaters.
You can also set this globally with the config option $config['snmp']['max_repeaters'] = X;.
It's advisable to test the time taken to snmpwalk IF-MIB or something similar to work out what the best value is. To do this, run the following but replace -REPEATERS- with varying numbers from 10 up to around 50. You will also need to set the correct SNMP version, hostname and community string:
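A sketch of such a timing test, assuming SNMP v2c (the IF-MIB::ifXTable subtree is just a convenient example of a large walk):

```shell
# compare wall-clock time for different max-repeater values
time snmpbulkwalk -v2c -c COMMUNITY -Cr10 HOSTNAME IF-MIB::ifXTable > /dev/null
time snmpbulkwalk -v2c -c COMMUNITY -Cr50 HOSTNAME IF-MIB::ifXTable > /dev/null
```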
NOTE: Do not go blindly setting this value as you can impact polling negatively.
"},{"location":"Support/Performance/#snmp-max-oids","title":"SNMP Max OIDs","text":"
For sensor polling we now do bulk SNMP gets to speed things up. By default this is ten OIDs, but you can override this per device under edit device -> snmp -> Max OIDs.
You can also set this globally with the config option $config['snmp']['max_oid'] = X;.
NOTE: It is advisable to monitor sensor polling when you change this to ensure you don't set the value too high.
If your devices are slow to respond then you will need to increase the timeout value and potentially the interval value. However, if your network is stable, you can increase poller performance by dropping the count value to 1 and/or the timeout+millisec value to 200 or 300:
This means that we no longer delay each ICMP packet sent (we send 3 in total by default) by 0.5 seconds. With only 1 ICMP packet being sent, we will receive a response quicker. The defaults mean it will take at least 1 second for a response no matter how quickly the ICMP packet is returned.
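A sketch using lnms config:set (the fping_options.* key names are assumptions — verify with lnms config:get fping_options on your install):

```shell
lnms config:set fping_options.count 1      # send a single ICMP packet per poll
lnms config:set fping_options.timeout 300  # wait at most 300 ms for a reply
```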
poller-wrapper.py defaults to using 16 threads, which isn't necessarily optimal. A general rule of thumb is 2 threads per CPU core, but we suggest that you play around with lowering / increasing the number until you find the optimal value. Keep in mind that this doesn't always help; it depends on your system and CPU, so be careful. The thread count can be changed in the cron job for librenms, usually /etc/cron.d/librenms, by changing the \"16\".
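The cron entry typically looks something like the following (exact paths and schedule may differ on your install); the trailing argument is the thread count:

```shell
# /etc/cron.d/librenms — change 16 to your desired thread count
*/5 * * * * librenms /opt/librenms/poller-wrapper.py 16 >> /dev/null 2>&1
```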
Please also see Dispatcher Service"},{"location":"Support/Performance/#recursive-dns","title":"Recursive DNS","text":"
If your install uses hostnames for devices and you have quite a lot of them, it's advisable to set up a local recursive DNS instance on the LibreNMS server. Something like pdns-recursor can be used; then configure /etc/resolv.conf to use 127.0.0.1 for queries.
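A minimal sketch for a Debian-based server (note that /etc/resolv.conf may be managed by systemd-resolved or NetworkManager on your system, in which case adjust the resolver there instead):

```shell
apt install pdns-recursor              # listens on 127.0.0.1 by default
systemctl enable --now pdns-recursor
echo "nameserver 127.0.0.1" > /etc/resolv.conf
```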
"},{"location":"Support/Performance/#per-port-polling-experimental","title":"Per port polling - experimental","text":"
By default the ports polling module will walk ifXEntry plus some items from ifEntry regardless of the port's state. So even if a port is marked as deleted (because you don't want to see it) or is disabled, we still collect data for it. For the most part this is fine, as the walks are quite quick. However, for devices with a lot of ports where a good percentage of them are either deleted or disabled, this approach isn't optimal. To counter this you can enable 'selected port polling' per device within the edit device -> misc section, or by globally enabling it (not recommended): $config['polling']['selected_ports'] = true;. Globally enabling it is truly not recommended, as it has been proven to affect the CPU usage of your poller negatively. You can also set it for a specific OS: $config['os']['ios']['polling']['selected_ports'] = true;.
Running ./scripts/collect-port-polling.php will poll your devices with both full and selective polling, display a table with the differences, and optionally enable or disable selected ports polling for devices which would benefit from a change. Note that it doesn't continuously re-evaluate this; it is only updated when the script is run. There are a number of options:
-h <device id> | <device hostname wildcard> Poll single device or wildcard hostname\n-e <percentage> Enable/disable selected ports polling for devices which would benefit <percentage> from a change\n
If you want to run this script to have it set selected port polling on devices where a change of 10% or more is evaluated, run it with ./scripts/collect-port-polling.php -e 10. But note: it will not blindly use only the 10%. There is a second condition that the change has to be more than one second in polling time."},{"location":"Support/Performance/#web-interface","title":"Web interface","text":""},{"location":"Support/Performance/#http2","title":"HTTP/2","text":"
If you are running https then you should enable http/2 support in whatever web server you use:
For Nginx (1.9.5 and above) change listen 443 ssl; to listen 443 ssl http2; in the Virtualhost config.
For Apache (2.4.17 and above) set Protocols h2 http/1.1 in the Virtualhost config.
A lot of performance can be gained from setting up php-opcache correctly.
Note: Memory based caching with PHP cli will increase memory usage and slow things down. File based caching is not as fast as memory based and is more likely to have stale cache issues.
Some distributions allow separate cli, mod_php and php-fpm configurations, we can use this to set the optimal config.
"},{"location":"Support/Performance/#for-web-servers-using-mod_php-and-php-fpm","title":"For web servers using mod_php and php-fpm","text":"
Update your web PHP opcache.ini. Possible locations: /etc/php/8.1/fpm/conf.d/opcache.ini, /etc/php.d/opcache.ini, or /etc/php/conf.d/opcache.ini.
Create a cache directory that is writable by the librenms user first: sudo mkdir -p /tmp/cache && sudo chmod 775 /tmp/cache && sudo chown -R librenms /tmp/cache
Update your PHP opcache.ini. Possible locations: /etc/php/8.1/cli/conf.d/opcache.ini, /etc/php.d/opcache.ini, or /etc/php/conf.d/opcache.ini.
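A sketch of a file-cache CLI configuration (the directive values are suggestions, and the exact ini path depends on your distribution):

```shell
# append file-based opcache settings to the CLI opcache.ini
cat <<'EOF' >> /etc/php/8.1/cli/conf.d/opcache.ini
opcache.enable_cli=1
opcache.file_cache=/tmp/cache
opcache.file_cache_only=1
EOF
```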
If you are having caching issues, you can clear the file based opcache with rm -rf /tmp/cache.
Debian 12 users, be aware that the current stable PHP 8.2 version (8.2.7) creates segmentation faults when opcache uses the file cache. The issue should be this one: https://github.com/php/php-src/issues/10914. Using the Sury packages or disabling the file cache solves the issue.
Description:\n Poll data from device(s) as defined by discovery\n\nUsage:\n device:poll [options] [--] <device spec>\n\nArguments:\n device spec Device spec to poll: device_id, hostname, wildcard (*), odd, even, all\n\nOptions:\n -m, --modules=MODULES Specify single module to be run. Comma separate modules, submodules may be added with /\n -x, --no-data Do not update datastores (RRD, InfluxDB, etc)\n -h, --help Display help for the given command. When no command is given display help for the list command\n -q, --quiet Do not output any message\n -V, --version Display this application version\n --ansi|--no-ansi Force (or disable --no-ansi) ANSI output\n -n, --no-interaction Do not ask any interactive question\n --env[=ENV] The environment the command should run under\n -v|vv|vvv, --verbose Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug\n
These are the default poller config items. You can globally disable a module by setting it to 0. If you just want to disable it for one device then you can do this within the WebUI Device -> Edit -> Modules.
"},{"location":"Support/Poller%20Support/#os-based-poller-config","title":"OS based Poller config","text":"
You can enable or disable modules for a specific OS by adding the corresponding line in config.php. OS-based settings take preference over global ones; device-based settings take preference over all others.
Poller performance can be improved by deactivating all modules that are not supported by the specific OS.
E.g. to deactivate spanning tree but activate unix-agent module for linux OS
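A sketch of what that could look like with lnms config:set (the exact module key names are assumptions — check the Poller modules list for the canonical names):

```shell
lnms config:set os.linux.poller_modules.stp false         # disable spanning tree
lnms config:set os.linux.poller_modules.unix-agent true   # enable unix-agent
```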
To provide debugging output you will need to run the poller process with the -vv flag. You can do this either against all modules, single or multiple modules:
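Using the device:poll command whose usage is shown above, for example:

```shell
./lnms device:poll HOSTNAME -vv                      # all modules
./lnms device:poll HOSTNAME -vv -m ports             # a single module
./lnms device:poll HOSTNAME -vv -m ports,bgp-peers   # multiple modules
```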
Using -vv shouldn't output much sensitive information; -vvv will, so it is advisable to sanitise the output before pasting it somewhere, as the debug output will contain SNMP details amongst other items, including port descriptions.
The output will contain:
DB Updates
RRD Updates
SNMP Response
"},{"location":"Support/Remote-Monitoring-VPN/","title":"Remote monitoring using tinc VPN","text":"
This article describes how to use tinc to connect several remote sites and their subnets to your central monitoring server. This will let you connect to devices on remote private IP ranges through one gateway on each site, routing them securely back to your LibreNMS installation.
"},{"location":"Support/Remote-Monitoring-VPN/#configuring-the-monitoring-server","title":"Configuring the monitoring server","text":"
tinc should be available on nearly all Linux distributions via package management. If you are running something different, just take a look at tinc's homepage to find an appropriate version for your operating system: https://www.tinc-vpn.org/download/
I am going to describe the setup for Debian-based systems, but there are virtually no differences for e.g. CentOS or similar.
First make sure your firewall accepts connections on port 655 UDP and TCP.
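For example, if your firewall is ufw (a firewalld or iptables setup would need the equivalent rules):

```shell
ufw allow 655/tcp
ufw allow 655/udp
```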
Then install tinc via apt-get install tinc.
Create the following directory structure to hold all your configuration files: mkdir -p /etc/tinc/myvpn/hosts \"myvpn\" is your VPN network's name and can be chosen freely.
Create your main configuration file: vim /etc/tinc/myvpn/tinc.conf
Name = monitoring\nAddressFamily = ipv4\nDevice = /dev/net/tun\n
Next we need network up- and down scripts to define a few network settings for inside our VPN: vim /etc/tinc/myvpn/tinc-up
#!/bin/sh\nifconfig $INTERFACE 10.6.1.1 netmask 255.255.255.0\nip route add 10.6.1.1/24 dev $INTERFACE\nip route add 10.0.0.0/22 dev $INTERFACE\nip route add 10.100.0.0/22 dev $INTERFACE\nip route add 10.200.0.0/22 dev $INTERFACE\n
In this example we have 10.6.1.1 as the VPN IP address for the monitoring server on a /24 subnet. $INTERFACE will be automatically substituted with the name of the VPN, \"myvpn\" in this case. Then we have a route for the VPN subnet, so we can reach other sites via their VPN address. The last 3 lines designate the remote subnets. In the example I want to reach devices on three different remote private /22 subnets and be able to monitor devices on them from this server, so I set up routes for each of those remote sites in my tinc-up script.
The tinc-down script is relatively simple as it just removes the custom interface, which should get rid of the routes as well: vim /etc/tinc/myvpn/tinc-down
#!/bin/sh\nifconfig $INTERFACE down\n
Make sure your scripts can be executed: chmod +x /etc/tinc/myvpn/tinc-*
As a last step we need a host configuration file. This should be named the same as the \"Name\" you defined in tinc.conf: vim /etc/tinc/myvpn/hosts/monitoring
Subnet = 10.6.1.1/32\n
On the monitoring server we will just fill in the subnet and not define its external IP address to make sure it listens on all available external interfaces.
It's time to use tinc to create our key-pair: tincd -n myvpn -K
Now the file /etc/tinc/myvpn/hosts/monitoring should have an RSA public key appended to it and your private key should reside in /etc/tinc/myvpn/rsa_key.priv.
To make sure that the connection will be restored after each reboot, you can add your VPN name to /etc/tinc/nets.boot.
Now you can start tinc with tincd -n myvpn and it will listen for your remote sites to connect to it.
"},{"location":"Support/Remote-Monitoring-VPN/#remote-site-configuration","title":"Remote site configuration","text":"
Essentially the same steps as for your central monitoring server apply for all remote gateway devices. These can be routers, or just any computer or VM running on the remote subnet, able to reach the internet with the ability to forward IP packets externally.
Create main configuration: vim /etc/tinc/myvpn/tinc.conf
Name = remote1\nAddressFamily = ipv4\nDevice = /dev/net/tun\nConnectTo = monitoring\n
Create up script: vim /etc/tinc/myvpn/tinc-up
#!/bin/sh\nifconfig $INTERFACE 10.6.1.2 netmask 255.255.255.0\nip route add 10.6.1.2/32 dev $INTERFACE\n
Create down script: vim /etc/tinc/myvpn/tinc-down
#!/bin/sh\nifconfig $INTERFACE down\n
Make executable: chmod +x /etc/tinc/myvpn/tinc*
Create device configuration: vim /etc/tinc/myvpn/hosts/remote1
Address = 198.51.100.2\nSubnet = 10.0.0.0/22\n
This defines the device IP address outside of the VPN and the subnet it will expose.
Copy over the monitoring server's host configuration (including the embedded public key) and add its external IP address: vim /etc/tinc/myvpn/hosts/monitoring
Address = 203.0.113.6\nSubnet = 10.6.1.1/32\n\n-----BEGIN RSA PUBLIC KEY-----\nVeDyaqhKd4o2Fz...\n
Generate this device's keys: tincd -n myvpn -K
Copy over this device's host file, including the embedded public key, to your monitoring server.
Add the name for the VPN to /etc/tinc/nets.boot if you want to autostart the connection upon reboot.
Start tinc: tincd -n myvpn
These steps can basically be repeated for every remote site, just choosing different names and other internal IP addresses. In my case I connected 3 remote sites running behind Ubiquiti EdgeRouters. Since those devices let me install software through Debian's package management, it was very easy to set up. Just create the necessary configuration files and network scripts on each device and distribute the host configurations, including the public keys, to each device that will actively connect back.
Now you can add all devices you want to monitor in LibreNMS using their internal IP address on the remote subnets or using some form of name resolution. I opted to declare the most important devices in my /etc/hosts file on the monitoring server.
As an added bonus tinc is a mesh VPN, so in theory you could specify several \"ConnectTo\" on each device and they should hold connections even if one network path goes down.
# SNMPv2c\n\nsnmp-server community <YOUR-COMMUNITY> RO\nsnmp-server contact <YOUR-CONTACT>\nsnmp-server location <YOUR-LOCATION>\n\n# SNMPv3\n\nsnmp-server group <GROUP-NAME> v3 priv\nsnmp-server user <USER-NAME> <GROUP-NAME> v3 auth sha <AUTH-PASSWORD> priv aes 128 <PRIV-PASSWORD>\nsnmp-server contact <YOUR-CONTACT>\nsnmp-server location <YOUR-LOCATION>\n\n# Note: The following is also required if using SNMPv3 and you want to populate the FDB table, STP info and others.\n\nsnmp-server group <GROUP-NAME> v3 priv context vlan- match prefix\n
Note: If the device is unable to find the SNMP user, reboot the ASA. Once rebooted, continue the steps as normal.
Upgrade to the latest available manufacturer firmware which applies to your hardware revision. Refer to the release notes. For devices which can use the Lx releases, do install LD.
After rebooting the card (safe for connected load), configure Network, System and Access Control. Save config for each step.
Configure SNMP. The device defaults to both SNMP v1 and v3 enabled, with default credentials. Disable what you do not need. SNMP v3 works, but uses MD5/DES. You may have to add another section to your SNMP credentials table in LibreNMS. Save.
In some cases of advanced routing, one may need to explicitly set the source IP address from which the SNMP daemon will reply - /snmp set src-address=<SELF_IP_ADDRESS>
Note that you need to allow SNMP on the relevant interfaces. To do that, create a network \"Interface Mgmt\" profile for standard interfaces, and allow SNMP under \"Device > Management > Management Interface Settings\" for the out-of-band management interface.
One may also configure SNMP from the command line, which is useful when you need to configure more than one firewall for SNMP monitoring. Log into the firewall(s) via ssh, and perform these commands for basic SNMPv3 configuration:
username@devicename> configure\nusername@devicename# set deviceconfig system service disable-snmp no\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 views pa view iso oid 1.3.6.1\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 views pa view iso option include\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 views pa view iso mask 0xf0\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 users authpriv authpwd YOUR_AUTH_SECRET\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 users authpriv privpwd YOUR_PRIV_SECRET\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 users authpriv view pa\nusername@devicename# set deviceconfig system snmp-setting snmp-system location \"Yourcity, Yourcountry [60.4,5.31]\"\nusername@devicename# set deviceconfig system snmp-setting snmp-system contact noc@your.org\nusername@devicename# commit\nusername@devicename# exit\n
If you use the HTTP interface: 1. Access the legacy web admin page and log in 2. Go to System > Advanced Configuration 3. Go to the sub-tab \"SNMP\" > \"Community\" 4. Click \"Add Community Group\" 5. Enter your SNMP community, IP address and click submit 6. Go to System > Summary 7. Go to the sub-tab \"Description\" 8. Enter your System Name, System Location and System Contact 9. Click submit 10. Click \"Save Configuration\"
Log on to your ESX server by means of ssh. You may have to enable the ssh service in the GUI first. From the CLI, execute the following commands:
esxcli system snmp set --authentication SHA1\nesxcli system snmp set --privacy AES128\nesxcli system snmp hash --auth-hash YOUR_AUTH_SECRET --priv-hash YOUR_PRIV_SECRET --raw-secret\n
esxcli system snmp set --users <username>/f3d8982fc28e8d1346c26eee49eb2c4a5950c934/0596ab30b315576a4e9f7d7bde65bf49b749e335/priv\nesxcli system snmp set -L \"Yourcity, Yourcountry [60.4,5.3]\"\nesxcli system snmp set -C noc@your.org\nesxcli system snmp set --enable true\n
Note: In case of snmp timeouts, disable the firewall with esxcli network firewall set --enabled false If snmp timeouts still occur with firewall disabled, migrate VMs if needed and reboot ESXi host.
Replace your snmpd.conf file with the example below, substituting your preferred community string for \"RANDOMSTRINGGOESHERE\".
vi /etc/snmp/snmpd.conf\n
# Change RANDOMSTRINGGOESHERE to your preferred SNMP community string\ncom2sec readonly default RANDOMSTRINGGOESHERE\n\ngroup MyROGroup v2c readonly\nview all included .1 80\naccess MyROGroup \"\" any noauth exact all none none\n\nsyslocation Rack, Room, Building, City, Country [GPSX,Y]\nsyscontact Your Name <your@email.address>\n\n#Distro Detection\nextend distro /usr/bin/distro\n#Hardware Detection (uncomment to enable)\n#extend hardware '/bin/cat /sys/devices/virtual/dmi/id/product_name'\n#extend manufacturer '/bin/cat /sys/devices/virtual/dmi/id/sys_vendor'\n#extend serial '/bin/cat /sys/devices/virtual/dmi/id/product_serial'\n
NOTE: On some systems the snmpd is running as its own user, which means it can't read /sys/devices/virtual/dmi/id/product_serial which is mode 0400. One solution is to include @reboot chmod 444 /sys/devices/virtual/dmi/id/product_serial in the crontab for root or equivalent.
Non-x86 systems without SMBIOS, such as ARM-based Raspberry Pi units, should query device tree locations for this metadata, for example:
extend hardware '/bin/cat /sys/firmware/devicetree/base/model'\nextend serial '/bin/cat /sys/firmware/devicetree/base/serial-number'\n
The LibreNMS server includes a copy of this example here:
/opt/librenms/snmpd.conf.example\n
The binary /usr/bin/distro must be copied from the original source repository.
Make sure the agent listens to all interfaces by adding the following line inside snmpd.conf:
agentAddress udp:161,udp6:161\n
This line tells the agent to listen for connections on all IPv4 and IPv6 interfaces.
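If you would rather restrict the agent to a single management address instead of all interfaces, the same directive accepts a specific IP and port (the address below is a placeholder):

```
agentAddress udp:203.0.113.10:161
```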
Uncomment and change the following line to give read access to the username created above (rouser is what LibreNMS uses):
#rouser authPrivUser priv\n
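The SNMPv3 user itself is typically created with net-snmp-create-v3-user, or with a createUser line (usually placed in /var/lib/snmp/snmpd.conf). A minimal sketch of the pair, with hypothetical credentials, looks like:

```
# hypothetical credentials - replace with your own secrets
createUser authPrivUser SHA "auth_secret" AES "priv_secret"
rouser authPrivUser priv
```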
Change the following details inside snmpd.conf
syslocation Rack, Room, Building, City, Country [GPSX,Y]\nsyscontact Your Name <your@email.address>\n
Save and exit the file
"},{"location":"Support/SNMP-Configuration-Examples/#restart-the-snmpd-service","title":"Restart the snmpd service","text":""},{"location":"Support/SNMP-Configuration-Examples/#centos-6-red-hat-6","title":"CentOS 6 / Red hat 6","text":"
service snmpd restart\n
"},{"location":"Support/SNMP-Configuration-Examples/#centos-7-red-hat-7","title":"CentOS 7 / Red hat 7","text":"
"},{"location":"Support/SNMP-Configuration-Examples/#arch-linux-snmpd-v2","title":"Arch Linux (snmpd v2)","text":"
Install the SNMP package pacman -S net-snmp
Create the SNMP folder mkdir /etc/snmp/
Set the community echo rocommunity read_only_community_string >> /etc/snmp/snmpd.conf
Set the contact echo syscontact Firstname Lastname >> /etc/snmp/snmpd.conf
Set the location echo syslocation L69 4RX >> /etc/snmp/snmpd.conf
Enable startup systemctl enable snmpd.service
Start snmpd systemctl restart snmpd.service
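The echo commands above simply build up /etc/snmp/snmpd.conf line by line; the resulting file is equivalent to:

```
rocommunity read_only_community_string
syscontact Firstname Lastname
syslocation L69 4RX
```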
"},{"location":"Support/SNMP-Configuration-Examples/#windows-server-2008-r2","title":"Windows Server 2008 R2","text":"
Log in to your Windows Server 2008 R2
Start \"Server Manager\" under \"Administrative Tools\"
Click \"Features\" and then click \"Add Feature\"
Check (if not checked) \"SNMP Service\", click \"Next\" until \"Install\"
Start \"Services\" under \"Administrative Tools\"
Edit \"SNMP Service\" properties
Go to the security tab
In \"Accepted community name\" click \"Add\" to add your community string and permission
In \"Accept SNMP packets from these hosts\" click \"Add\" and add your LibreNMS server IP address
Validate change by clicking \"Apply\"
"},{"location":"Support/SNMP-Configuration-Examples/#windows-server-2012-r2-and-newer","title":"Windows Server 2012 R2 and newer","text":""},{"location":"Support/SNMP-Configuration-Examples/#gui","title":"GUI","text":"
Log in to your Windows Server 2012 R2 or newer
Start \"Server Manager\" under \"Administrative Tools\"
Click \"Manage\" and then \"Add Roles and Features\"
Continue by pressing \"Next\" to the \"Features\" menu
Install (if not installed) \"SNMP Service\"
Start \"Services\" under \"Administrative Tools\"
Edit \"SNMP Service\" properties
Go to the security tab
In \"Accepted community name\" click \"Add\" to add your community string and permission
In \"Accept SNMP packets from these hosts\" click \"Add\" and add your LibreNMS server IP address
#Allow read-access with the following SNMP Community String:\nrocommunity public\n\n# all other settings are optional but recommended.\n\n# Location of the device\nsyslocation data centre A\n\n# Human Contact for the device\nsyscontact SysAdmin\n\n# System Name of the device\nsysName SystemName\n\n# the system OID for this device. This is optional but recommended,\n# to identify this as a MAC OS system.\nsysobjectid 1.3.6.1.4.1.8072.3.2.16\n
To use Wireless Sensors on AsuswrtMerlin, an agent of sorts is required. The purpose of the agent is to execute on the client (AsuswrtMerlin) side, to ensure that the needed Wireless Sensor information is returned for SNMP queries (from LibreNMS).
Two items are required on the AsuswrtMerlin side - scripts to generate the necessary information (for SNMP replies), and an SNMP extend configuration update (to map the expected queries to those scripts).
1: Install the scripts:
Copy the scripts from librenms-agent/snmp/Openwrt - preferably inside /etc/librenms on AsuswrtMerlin (and add this directory to /etc/sysupgrade.conf, to survive firmware updates).
The only file that needs to be edited is wlInterfaces.txt, which is a mapping from the wireless interfaces, to the desired display name in LibreNMS. For example,
wlan0,wl-2.4G\nwlan1,wl-5.0G\n
2: Update the AsuswrtMerlin SNMP configuration, adding extend support for the Wireless Sensor queries:
vi /etc/config/snmpd, adding the following entries (assuming the scripts are installed in /etc/librenms and are executable), and update the network interface names as needed to match the hardware:
config extend\n option name interfaces\n option prog \"/bin/cat /etc/librenms/wlInterfaces.txt\"\nconfig extend\n option name clients-wlan0\n option prog \"/etc/librenms/wlClients.sh wlan0\"\nconfig extend\n option name clients-wlan1\n option prog \"/etc/librenms/wlClients.sh wlan1\"\nconfig extend\n option name clients-wlan\n option prog \"/etc/librenms/wlClients.sh\"\nconfig extend\n option name frequency-wlan0\n option prog \"/etc/librenms/wlFrequency.sh wlan0\"\nconfig extend\n option name frequency-wlan1\n option prog \"/etc/librenms/wlFrequency.sh wlan1\"\nconfig extend\n option name rate-tx-wlan0-min\n option prog \"/etc/librenms/wlRate.sh wlan0 tx min\"\nconfig extend\n option name rate-tx-wlan0-avg\n option prog \"/etc/librenms/wlRate.sh wlan0 tx avg\"\nconfig extend\n option name rate-tx-wlan0-max\n option prog \"/etc/librenms/wlRate.sh wlan0 tx max\"\nconfig extend\n option name rate-tx-wlan1-min\n option prog \"/etc/librenms/wlRate.sh wlan1 tx min\"\nconfig extend\n option name rate-tx-wlan1-avg\n option prog \"/etc/librenms/wlRate.sh wlan1 tx avg\"\nconfig extend\n option name rate-tx-wlan1-max\n option prog \"/etc/librenms/wlRate.sh wlan1 tx max\"\nconfig extend\n option name rate-rx-wlan0-min\n option prog \"/etc/librenms/wlRate.sh wlan0 rx min\"\nconfig extend\n option name rate-rx-wlan0-avg\n option prog \"/etc/librenms/wlRate.sh wlan0 rx avg\"\nconfig extend\n option name rate-rx-wlan0-max\n option prog \"/etc/librenms/wlRate.sh wlan0 rx max\"\nconfig extend\n option name rate-rx-wlan1-min\n option prog \"/etc/librenms/wlRate.sh wlan1 rx min\"\nconfig extend\n option name rate-rx-wlan1-avg\n option prog \"/etc/librenms/wlRate.sh wlan1 rx avg\"\nconfig extend\n option name rate-rx-wlan1-max\n option prog \"/etc/librenms/wlRate.sh wlan1 rx max\"\nconfig extend\n option name noise-floor-wlan0\n option prog \"/etc/librenms/wlNoiseFloor.sh wlan0\"\nconfig extend\n option name noise-floor-wlan1\n option prog \"/etc/librenms/wlNoiseFloor.sh 
wlan1\"\nconfig extend\n option name snr-wlan0-min\n option prog \"/etc/librenms/wlSNR.sh wlan0 min\"\nconfig extend\n option name snr-wlan0-avg\n option prog \"/etc/librenms/wlSNR.sh wlan0 avg\"\nconfig extend\n option name snr-wlan0-max\n option prog \"/etc/librenms/wlSNR.sh wlan0 max\"\nconfig extend\n option name snr-wlan1-min\n option prog \"/etc/librenms/wlSNR.sh wlan1 min\"\nconfig extend\n option name snr-wlan1-avg\n option prog \"/etc/librenms/wlSNR.sh wlan1 avg\"\nconfig extend\n option name snr-wlan1-max\n option prog \"/etc/librenms/wlSNR.sh wlan1 max\"\n
NOTE, any of the scripts above can be tested simply by running the corresponding command.
NOTE, to check the output data from any of these extensions, on the LibreNMS machine, run (for example),
snmpwalk -v 2c -c public -Osqnv <openwrt-host> 'NET-SNMP-EXTEND-MIB::nsExtendOutputFull.\"frequency-wlan0\"'
NOTE, on the LibreNMS machine, ensure that snmp-mibs-downloader is installed.
NOTE, on the AsuswrtMerlin machine, ensure that distro is installed (i.e. that the OS is correctly detected!).
3: Restart the snmp service on AsuswrtMerlin:
service snmpd restart
And then wait for discovery and polling on LibreNMS!
The pCOWeb card is used to interface the pCO system to networks that use Ethernet-based HVAC protocols such as SNMP. The problem with this card is that the implementation depends on the final manufacturer of the HVAC (Heating, Ventilation and Air Conditioning) equipment rather than on a standard given by Carel. So each pCOweb card has a different configuration that needs a different MIB depending on the manufacturer's implementation.
The main problem is that LibreNMS will by default discover this card as pCOweb and not as your real manufacturer, as it should. A workaround was found for this issue, but it is LibreNMS-independent and you need to first configure your pCOWeb through the admin interface.
"},{"location":"Support/Device-Notes/Carel-pCOweb-Devices/#accessing-the-pcoweb-card","title":"Accessing the pCOWeb card","text":"
Log on to the configuration page of the pCOWeb card. The pCOWeb interface is not always found when accessing the IP directly, but rather in a subdirectory. If you can't directly reach the configuration page, try <ip address>/config. The default username and password is admin/fadmin. Modern browsers may require you to enter this 2 or 3 times.
"},{"location":"Support/Device-Notes/Carel-pCOweb-Devices/#configuring-the-pcoweb-card-snmp-for-librenms","title":"Configuring the pCOweb card SNMP for LibreNMS","text":"
First you need to configure your SNMP card using the admin interface. An SNMP tab in the configuration menu lets you choose a System OID and an Enterprise OID. This is a little tricky, but based on this information we defined a \"standard\" for all implementations of Carel products with LibreNMS.
The base Carel OID is 1.3.6.1.4.1.9839. To this OID we append the final manufacturer's Enterprise OID. You can find all Enterprise OIDs in the IANA Private Enterprise Numbers registry. This allows us to create specific support for each device: LibreNMS uses this value to detect which HVAC device is connected to the pCOWeb card.
Example for the Rittal IT Chiller that uses a pCOweb card:
Base Carel OID : 1.3.6.1.4.1.9839
Rittal (the manufacturer) base enterprise OID : 2606
Adding value to identify this device in LibreNMS : 1
Complete System OID for a Rittal Chiller using a Carel pCOweb card: 1.3.6.1.4.1.9839.2606.1
Use 9839 as Enterprise OID
The way this works is that the pCOWeb card pretends to be another device. In reality the pCOWeb card just inserts the \"enterprise OID\" in place of the vendor id in the OID.
In the table below you can find the values needed for devices which are already supported.
LibreNMS is ready for the devices listed in this table. You only need to configure your pCOweb card with the accorded System OID and Enterprise OID:
Manufacturer Description System OID Enterprise OID Rittal IT Chiller 1.3.6.1.4.1.9839.2606.1 9839 Rittal LCP DX 3311 1.3.6.1.4.1.9839.2606.3311 9839.2606"},{"location":"Support/Device-Notes/Carel-pCOweb-Devices/#unsupported-devices","title":"Unsupported devices","text":"
After constructing the correct System OID for your SNMP card, you can start the LibreNMS new OS implementation and use this new OID as sysObjectID for the YAML definition file.
To gather Port IP info & routing info for Fortigates, disable the append-index feature. This feature appends VDOM to the index, breaking standard MIBs.
config system snmp sysinfo\n set append-index disable\nend\n
To use Wireless Sensors on Openwrt, an agent of sorts is required. The purpose of the agent is to execute on the client (Openwrt) side, to ensure that the needed Wireless Sensor information is returned for SNMP queries (from LibreNMS).
Two items are required on the Openwrt side - scripts to generate the necessary information (for SNMP replies), and an SNMP extend configuration update (to map the expected queries to those scripts).
1: Install the scripts:
Copy the scripts from librenms-agent repository - preferably inside /etc/librenms on Openwrt (and add this directory to /etc/sysupgrade.conf, to survive firmware updates):
The only file that needs to be edited is wlInterfaces.txt, which is a mapping from the wireless interfaces, to the desired display name in LibreNMS. For example,
wlan0,wl-2.4G\nwlan1,wl-5.0G\n
2: Update the Openwrt SNMP configuration, adding extend support for the OS detection and the Wireless Sensor queries:
vi /etc/config/snmpd, adding the following entries (assuming the scripts are installed in /etc/librenms and are executable), and update the network interface names as needed to match the hardware:
config extend\n option name distro\n option prog '/etc/librenms/distro'\nconfig extend\n option name hardware\n option prog '/bin/cat'\n option args '/sys/firmware/devicetree/base/model'\nconfig extend\n option name interfaces\n option prog \"/bin/cat /etc/librenms/wlInterfaces.txt\"\nconfig extend\n option name clients-wlan0\n option prog \"/etc/librenms/wlClients.sh wlan0\"\nconfig extend\n option name clients-wlan1\n option prog \"/etc/librenms/wlClients.sh wlan1\"\nconfig extend\n option name clients-wlan\n option prog \"/etc/librenms/wlClients.sh\"\nconfig extend\n option name frequency-wlan0\n option prog \"/etc/librenms/wlFrequency.sh wlan0\"\nconfig extend\n option name frequency-wlan1\n option prog \"/etc/librenms/wlFrequency.sh wlan1\"\nconfig extend\n option name rate-tx-wlan0-min\n option prog \"/etc/librenms/wlRate.sh wlan0 tx min\"\nconfig extend\n option name rate-tx-wlan0-avg\n option prog \"/etc/librenms/wlRate.sh wlan0 tx avg\"\nconfig extend\n option name rate-tx-wlan0-max\n option prog \"/etc/librenms/wlRate.sh wlan0 tx max\"\nconfig extend\n option name rate-tx-wlan1-min\n option prog \"/etc/librenms/wlRate.sh wlan1 tx min\"\nconfig extend\n option name rate-tx-wlan1-avg\n option prog \"/etc/librenms/wlRate.sh wlan1 tx avg\"\nconfig extend\n option name rate-tx-wlan1-max\n option prog \"/etc/librenms/wlRate.sh wlan1 tx max\"\nconfig extend\n option name rate-rx-wlan0-min\n option prog \"/etc/librenms/wlRate.sh wlan0 rx min\"\nconfig extend\n option name rate-rx-wlan0-avg\n option prog \"/etc/librenms/wlRate.sh wlan0 rx avg\"\nconfig extend\n option name rate-rx-wlan0-max\n option prog \"/etc/librenms/wlRate.sh wlan0 rx max\"\nconfig extend\n option name rate-rx-wlan1-min\n option prog \"/etc/librenms/wlRate.sh wlan1 rx min\"\nconfig extend\n option name rate-rx-wlan1-avg\n option prog \"/etc/librenms/wlRate.sh wlan1 rx avg\"\nconfig extend\n option name rate-rx-wlan1-max\n option prog \"/etc/librenms/wlRate.sh wlan1 rx max\"\nconfig extend\n 
option name noise-floor-wlan0\n option prog \"/etc/librenms/wlNoiseFloor.sh wlan0\"\nconfig extend\n option name noise-floor-wlan1\n option prog \"/etc/librenms/wlNoiseFloor.sh wlan1\"\nconfig extend\n option name snr-wlan0-min\n option prog \"/etc/librenms/wlSNR.sh wlan0 min\"\nconfig extend\n option name snr-wlan0-avg\n option prog \"/etc/librenms/wlSNR.sh wlan0 avg\"\nconfig extend\n option name snr-wlan0-max\n option prog \"/etc/librenms/wlSNR.sh wlan0 max\"\nconfig extend\n option name snr-wlan1-min\n option prog \"/etc/librenms/wlSNR.sh wlan1 min\"\nconfig extend\n option name snr-wlan1-avg\n option prog \"/etc/librenms/wlSNR.sh wlan1 avg\"\nconfig extend\n option name snr-wlan1-max\n option prog \"/etc/librenms/wlSNR.sh wlan1 max\"\n
NOTE, any of the scripts above can be tested simply by running the corresponding command.
NOTE, to check the output data from any of these extensions, on the LibreNMS machine, run (for example),
snmpwalk -v 2c -c public -Osqnv <openwrt-host> 'NET-SNMP-EXTEND-MIB::nsExtendOutputFull.\"frequency-wlan0\"'
NOTE, on the LibreNMS machine, ensure that snmp-mibs-downloader is installed.
NOTE, on the Openwrt machine, ensure that distro is installed (i.e. that the OS is correctly detected!).
3: Restart the snmp service on Openwrt:
service snmpd restart
And then wait for discovery and polling on LibreNMS!
This agent script will allow LibreNMS to run a script on a Mikrotik / RouterOS device to gather the vlan information from both /interface/vlan/ and /interface/bridge/vlan/
Go to https://github.com/librenms/librenms-agent/tree/master/snmp/Routeros
Copy and paste the contents of the LNMS_vlans.scr file into a script on the RouterOS device, and name the script LNMS_vlans. (This is NOT the same thing as creating a txt file and importing it into the Files section of the device.)
If you're unsure how to create the script: download the LNMS_vlans.scr file, rename it to remove the .scr extension, and copy this file onto all the Mikrotik devices you want to monitor.
Open a terminal / CLI on each Mikrotik and run this: { :global txtContent [/file get LNMS_vlans contents]; /system/script/add name=LNMS_vlans owner=admin policy=ftp,reboot,read,write,policy,test,password,sniff,sensitive,romon source=$txtContent ;} This will import the contents of that txt file into a script named LNMS_vlans.
Enable an SNMP community that has both READ and WRITE capabilities. This is important; otherwise, LibreNMS will not be able to run the above script. It is recommended to use SNMPv3 for this.
Discover / Force rediscover your Mikrotik devices. After discovery has been completed the vlans menu should appear within LibreNMS for the device.
"},{"location":"Support/Device-Notes/Routeros/#important-note","title":"*** IMPORTANT NOTE ***","text":"
It is strongly recommended that the SNMP service only be reachable from the very limited set of IP addresses that LibreNMS and related systems will connect from (usually a /32 address for each), because the write permission could allow an attack on the device (such as dropping all firewall filters or changing the admin credentials).
"},{"location":"Support/Device-Notes/Routeros/#theory-of-operation","title":"Theory of operation:","text":"
Mikrotik vlan discovery plugin using the ability of ROS to \"fire up\" a script through SNMP.
First, LibreNMS checks for the existence of the script; if it is present, LibreNMS will start the LNMS_vlans script.
The script will gather information from: - /interface/bridge/vlan for tagged ports inside bridge - /interface/bridge/vlan for currently untagged ports inside bridge - /interface/bridge/port for ports PVID (untagged) inside bridge - /interface/vlan for vlan interfaces
After the information is gathered, it is transmitted to LibreNMS over SNMP.
The protocol is: type,vlanId,ifName
e.g. T,254,ether1 is translated to Tagged vlan 254 on port ether1
U,100,wlan2 is translated to Untagged vlan 100 on port wlan2
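As an illustration only (this is not part of LibreNMS itself), the type,vlanId,ifName lines above can be decoded with a few lines of awk:

```shell
# Translate the agent's type,vlanId,ifName lines into readable form.
decode_vlans() {
  awk -F, '{
    type = ($1 == "T") ? "Tagged" : "Untagged"
    printf "%s vlan %s on port %s\n", type, $2, $3
  }'
}

printf 'T,254,ether1\nU,100,wlan2\n' | decode_vlans
# Tagged vlan 254 on port ether1
# Untagged vlan 100 on port wlan2
```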
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":"Installing Install LibreNMS Now Install Using Docker Setup Applications Auto Discovery Oxidized RRDCached Alerting Rules Templates Transports More... API Using the API API Endpoints Support FAQ Install validation Performance tweaks More... Developing Getting Started Support for a new OS"},{"location":"API/","title":"Using the API","text":""},{"location":"API/#versioning","title":"Versioning","text":"
Versioning an API is a minefield, and we looked at numerous options for how to do this.
We have currently settled on using versioning within the API endpoint itself: /api/v0. As the API is new and still in active development, we decided that v0 would be the best starting point to indicate it is in development.
To access any of the API endpoints you will be required to authenticate using a token. Tokens can be created directly from within the LibreNMS web interface by going to /api-access/.
Click on 'Create API access token'.
Select the user you would like to generate the token for.
Whilst this documentation will describe and show examples of the endpoints, we've designed the API so you should be able to traverse through it without knowing any of the available API routes.
Input to the API is done in three different ways, sometimes a combination of two or three of these.
Passing parameters via the API route. For example, when obtaining a device's details you will pass the hostname of the device in the route: /api/v0/devices/:hostname.
Passing parameters via the query string. For example you can list all devices on your install but limit the output to devices that are currently down: /api/v0/devices?type=down
Passing data in via JSON. This will mainly be used when adding or updating information via the API, for instance adding a new device:
curl -X POST -d '{\"hostname\":\"localhost.localdomain\",\"version\":\"v1\",\"community\":\"public\"}' -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/devices\n
devices: This is either an array of device ids or -1 for a global rule
builder: The rule, which should be in the format entity.condition value (i.e. devices.status != 0 for devices marked as down). It must be JSON encoded in the format rules are currently stored.
severity: The severity level the alert will be raised against, Ok, Warning, Critical.
disabled: Whether the rule will be disabled or not, 0 = enabled, 1 = disabled
count: The number of polling runs the condition must match before an alert triggers (this also sets the re-check frequency).
delay: Delay is when to start alerting and how frequently. The value is stored in seconds but you can specify minutes, hours or days by doing 5 m, 5 h, 5 d for each one.
interval: How often to re-issue notifications while this alert is active; 0 means notify once. The value is stored in seconds but you can specify minutes, hours or days by doing 5 m, 5 h, 5 d for each one.
mute: If mute is enabled then an alert will never be sent but will show up in the Web UI (true or false).
invert: This would invert the rules check.
name: This is the name of the rule and is mandatory.
notes: Some informal notes for this rule
Example:
curl -X POST -d '{\"devices\":[1,2,3], \"name\": \"testrule\", \"builder\":{\"condition\":\"AND\",\"rules\":[{\"id\":\"devices.hostname\",\"field\":\"devices.hostname\",\"type\":\"string\",\"input\":\"text\",\"operator\":\"equal\",\"value\":\"localhost\"}],\"valid\":true},\"severity\": \"critical\",\"count\":15,\"delay\":\"5 m\",\"interval\":\"5 m\",\"mute\":false,\"notes\":\"This a note from the API\"}' -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/rules\n
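The delay and interval shorthand described above is just a multiplier into the seconds LibreNMS stores; for example:

```shell
# "5 m" and "2 h" expressed as the seconds LibreNMS stores internally
five_minutes=$((5 * 60))
two_hours=$((2 * 60 * 60))
echo "$five_minutes $two_hours"   # 300 7200
```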
rule_id: You must specify the rule_id to edit an existing rule, if this is absent then a new rule will be created.
devices: This is either an array of device ids or -1 for a global rule
builder: The rule, which should be in the format entity.condition value (i.e. devices.status != 0 for devices marked as down). It must be JSON encoded in the format rules are currently stored.
severity: The severity level the alert will be raised against, Ok, Warning, Critical.
disabled: Whether the rule will be disabled or not, 0 = enabled, 1 = disabled
count: The number of polling runs the condition must match before an alert triggers (this also sets the re-check frequency).
delay: Delay is when to start alerting and how frequently. The value is stored in seconds but you can specify minutes, hours or days by doing 5 m, 5 h, 5 d for each one.
interval: How often to re-issue notifications while this alert is active; 0 means notify once. The value is stored in seconds but you can specify minutes, hours or days by doing 5 m, 5 h, 5 d for each one.
mute: If mute is enabled then an alert will never be sent but will show up in the Web UI (true or false).
invert: This would invert the rules check.
name: This is the name of the rule and is mandatory.
notes: Some informal notes for this rule
Example:
curl -X PUT -d '{\"rule_id\":1,\"device_id\":\"-1\", \"name\": \"testrule\", \"builder\":{\"condition\":\"AND\",\"rules\":[{\"id\":\"devices.hostname\",\"field\":\"devices.hostname\",\"type\":\"string\",\"input\":\"text\",\"operator\":\"equal\",\"value\":\"localhost\"}],\"valid\":true},\"severity\": \"critical\",\"count\":15,\"delay\":\"5 m\",\"interval\":\"5 m\",\"mute\":false,\"notes\":\"This a note from the API\"}' -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/rules\n
Retrieve the data used to draw a graph so it can be rendered in an external system
Route: /api/v0/bills/:id/graphdata/:graph_type
Input:
The reducefactor parameter is used to reduce the number of data points. Billing data has 5 minute granularity, so requesting a graph for a long time period will result in many data points. If not supplied, it will be automatically calculated. A reducefactor of 1 means return all items, 2 means half of the items, and so on.
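For example, one day of 5-minute billing samples yields 288 data points, so a reducefactor of 6 would bring a daily graph down to 48 points:

```shell
# Data points for a 1-day billing graph at 5-minute granularity
points=$((24 * 60 / 5))    # samples per day
reduced=$((points / 6))    # reducefactor=6 keeps every 6th point
echo "$points $reduced"    # 288 48
```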
If you send an existing bill_id the call replaces all values it receives. For example if you send 2 ports it will delete the existing ports and add the 2 new ports. So to add ports you have to get the current ports first and include them in your update call.
name is the name of the device group which can be obtained using get_devicegroups. Please ensure that the name is urlencoded if it needs to be (i.e. Linux Servers would need to be urlencoded).
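One way to urlencode a group name from the shell (python3 is used here purely as a convenient encoder; curl's --data-urlencode is another option):

```shell
# URL-encode a device group name for use in the API route
encoded=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))' 'Linux Servers')
echo "$encoded"   # Linux%20Servers
```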
Input (JSON):
name: optional - The name of the device group
type: optional - should be static or dynamic. Setting this to static requires that the devices input be provided
desc: optional - Description of the device group
rules: required if type == dynamic - A set of rules to determine which devices should be included in this device group
devices: required if type == static - A list of devices that should be included in this group. This is a static list of devices
name is the name of the device group which can be obtained using get_devicegroups. Please ensure that the name is urlencoded if it needs to be (i.e. Linux Servers would need to be urlencoded).
name is the name of the device group which can be obtained using get_devicegroups. Please ensure that the name is urlencoded if it needs to be (i.e. Linux Servers would need to be urlencoded).
Input (JSON):
full: set to any value to return all data for the devices in a given group
title: optional - Some title for the Maintenance. Will be replaced with the device group name if omitted
notes: optional - Some description for the Maintenance
start: optional - start time of Maintenance in full format Y-m-d H:i:00, e.g. 2022-08-01 22:45:00. Current system time now() will be used if omitted
duration: required - Duration of Maintenance in format H:i / Hrs:Mins, e.g. 02:00
Example with start time:
curl -H 'X-Auth-Token: YOURAPITOKENHERE' \\\n -X POST https://librenms.org/api/v0/devicegroups/Cisco%20switches/maintenance/ \\\n --data-raw '\n{\n \"title\":\"Device group Maintenance\",\n \"notes\":\"A 2 hour Maintenance triggered via API with start time\",\n \"start\":\"2022-08-01 08:00:00\",\n \"duration\":\"2:00\"\n}\n'\n
Output:
{\n \"status\": \"ok\",\n \"message\": \"Device group Cisco switches (2) will begin maintenance mode at 2022-08-01 22:45:00 for 2:00h\"\n}\n
Example with no start time:
curl -H 'X-Auth-Token: YOURAPITOKENHERE' \\\n -X POST https://librenms.org/api/v0/devicegroups/Cisco%20switches/maintenance/ \\\n --data-raw '\n{\n \"title\":\"Device group Maintenance\",\n \"notes\":\"A 2 hour Maintenance triggered via API with no start time\",\n \"duration\":\"2:00\"\n}\n'\n
Output:
{\n \"status\": \"ok\",\n \"message\": \"Device group Cisco switches (2) moved into maintenance mode for 2:00h\"\n}\n
"},{"location":"API/DeviceGroups/#add-devices-to-group","title":"Add devices to group","text":"
Add devices to a device group.
Route: /api/v0/devicegroups/:name/devices
name is the name of the device group which can be obtained using get_devicegroups. Please ensure that the name is urlencoded if it needs to be (i.e. Linux Servers would need to be urlencoded).
Input (JSON):
devices: required - A list of devices to be added to the group.
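As a sketch of the request (the host, token, group name and device ids below are placeholders, and POST is assumed as the method for adding devices):

```python
import json
from urllib.request import Request, urlopen

# Placeholder values; substitute your own host, token, group name and device ids
payload = json.dumps({"devices": [42, 99]}).encode()
req = Request(
    "https://librenms.org/api/v0/devicegroups/Linux%20Servers/devices",
    data=payload,
    headers={"X-Auth-Token": "YOURAPITOKENHERE", "Content-Type": "application/json"},
    method="POST",
)
# urlopen(req)  # uncomment to actually send the request
```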
"},{"location":"API/DeviceGroups/#remove-devices-from-group","title":"Remove devices from group","text":"
Removes devices from a device group.
Route: /api/v0/devicegroups/:name/devices
name is the name of the device group which can be obtained using get_devicegroups. Please ensure that the name is urlencoded if it needs to be (i.e. Linux Servers would need to be urlencoded).
Input (JSON):
devices: required - A list of devices to be removed from the group.
Get a particular health class graph for a device, if you provide a sensor_id as well then a single sensor graph will be provided. If no sensor_id value is provided then you will be sent a stacked sensor graph.
Get a particular wireless class graph for a device, if you provide a sensor_id as well then a single sensor graph will be provided. If no sensor_id value is provided then you will be sent a stacked wireless graph.
Get information about a particular port for a device.
Route: /api/v0/devices/:hostname/ports/:ifname
hostname can be either the device hostname or id
ifname can be any of the interface names for the device which can be obtained using get_port_graphs. Please ensure that the ifname is urlencoded if it needs to be (i.e. Gi0/1/0 would need to be urlencoded).
Input:
columns: Comma separated list of columns you want returned.
ifname can be any of the interface names for the device which can be obtained using get_port_graphs. Please ensure that the ifname is urlencoded if it needs to be (i.e. Gi0/1/0 would need to be urlencoded).
type is the port type you want the graph for, you can request a list of ports for a device with get_port_graphs.
Input:
from: This is the date you would like the graph to start - See http://oss.oetiker.ch/rrdtool/doc/rrdgraph.en.html for more information.
to: This is the date you would like the graph to end - See http://oss.oetiker.ch/rrdtool/doc/rrdgraph.en.html for more information.
width: The graph width, defaults to 1075.
height: The graph height, defaults to 300.
ifDescr: If this is set to true then we will use ifDescr to lookup the port instead of ifName. Pass the ifDescr value you want to search as you would ifName.
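Putting those inputs together, a hedged sketch of building a graph request URL (port_bits is used here as an example graph type, and Gi0/1/0 is urlencoded as noted above):

```python
from urllib.parse import quote, urlencode

# The ifname contains slashes, so it must be urlencoded in the route
ifname = quote("Gi0/1/0", safe="")  # -> Gi0%2F1%2F0
params = urlencode({"from": "-1d", "width": 1075, "height": 300})
url = (
    f"https://librenms.org/api/v0/devices/localhost/ports/{ifname}/port_bits"
    f"?{params}"
)
print(url)
```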
title: optional - Some title for the Maintenance. Will be replaced with the hostname if omitted
notes: optional - Some description for the Maintenance. Will also be added to device notes if the user pref \"Add schedule notes to devices notes\" is set
start: optional - start time of Maintenance in full format Y-m-d H:i:00, e.g. 2022-08-01 22:45:00. Current system time now() will be used if omitted
duration: required - Duration of Maintenance in format H:i / Hrs:Mins, e.g. 02:00
Example with start time:
curl -H 'X-Auth-Token: YOURAPITOKENHERE' \\\n -X POST https://librenms.org/api/v0/devices/localhost/maintenance/ \\\n --data-raw '\n{\n \"title\":\"Device Maintenance\",\n \"notes\":\"A 2 hour Maintenance triggered via API with start time\",\n \"start\":\"2022-08-01 08:00:00\",\n \"duration\":\"2:00\"\n}\n'\n
Output:
{\n \"status\": \"ok\",\n \"message\": \"Device localhost (1) will begin maintenance mode at 2022-08-01 22:45:00 for 2:00h\"\n}\n
Example with no start time:
curl -H 'X-Auth-Token: YOURAPITOKENHERE' \\\n -X POST https://librenms.org/api/v0/devices/localhost/maintenance/ \\\n --data-raw '\n \"title\":\"Device Maintenance\",\n \"notes\":\"A 2 hour Maintenance triggered via API with no start time\",\n \"duration\":\"2:00\"\n}\n'\n
Output:
{\n \"status\": \"ok\",\n \"message\": \"Device localhost (1) moved into maintenance mode for 2:00h\"\n}\n
Add a new device. Most fields are optional. You may omit snmp credentials to attempt each system credential in order. See snmp.version, snmp.community, and snmp.v3
To guarantee the device is added, use force_add. This will skip checks for duplicate devices and snmp reachability, but not duplicate hostnames.
Route: /api/v0/devices
Input (JSON):
Fields:
hostname (required): device hostname or IP
display: A string to display as the name of this device, defaults to hostname (or device_display_default setting). May be a simple template using replacements: {{ $hostname }}, {{ $sysName }}, {{ $sysName_fallback }}, {{ $ip }}
snmpver: SNMP version to use: v1, v2c or v3. During checks, the detection order is v2c, v3, v1
port: SNMP port (defaults to port defined in config).
transport: SNMP protocol (udp, tcp, udp6, tcp6). Defaults to the transport defined in config.
port_association_mode: method to identify ports: ifIndex (default), ifName, ifDescr, ifAlias
poller_group: This is the poller_group id used for distributed poller setup. Defaults to 0.
location or location_id: set the location by text or location id
Options:
force_add: Skip all checks and attempts to detect credentials. Add the device as given directly to the database.
ping_fallback: if snmp checks fail, add the device as ping only instead of failing
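A minimal sketch of an add-device payload (the hostname and community are placeholders, and `community` is assumed as the field name for the v1/v2c credential):

```python
import json

# Placeholder device definition. Omitting the SNMP credentials entirely would
# instead make LibreNMS try each configured system credential in order.
device = {
    "hostname": "192.0.2.10",   # required
    "snmpver": "v2c",
    "community": "public",      # assumed field name for the v1/v2c community
    "ping_fallback": True,      # add as ping-only if SNMP checks fail
}
body = json.dumps(device)
print(body)
```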
Update a device port notes field in the devices_attrs database.
Route: /api/v0/devices/:hostname/port/:portid
hostname can be either the device hostname or id
portid needs to be the port unique id (int).
Input (JSON): - notes: The string data to populate on the port notes field.
Examples:
curl -X PATCH -d '{\"notes\": \"This port is in a scheduled maintenance with the provider.\"}' -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/devices/localhost/port/5\n
Output:
[\n {\n \"status\": \"ok\",\n \"message\": \"Port notes field has been updated\"\n }\n]\n
curl -X PATCH -d '{\"field\": [\"notes\",\"purpose\"], \"data\": [\"This server should be kept online\", \"For serving web traffic\"]}' -H 'X-Auth-Token: YOURAPITOKENHERE' https://librenms.org/api/v0/devices/localhost\n
Output:
[\n {\n \"status\": \"ok\",\n \"message\": \"Device fields have been updated\"\n }\n]\n
Retrieve the inventory for a device. If you call this without any parameters then you will only get part of the inventory. This is because a lot of devices nest each component; for instance you may initially have the chassis, within this the ports (one being an SFP cage), then the SFP itself. The way this API call is designed is to enable a recursive lookup. The first call will retrieve the root entry; included within this response will be entPhysicalIndex. You can then call with entPhysicalContainedIn set to that index, which will return the next layer of results. To retrieve all items together, see get_inventory_for_device.
Route: /api/v0/inventory/:hostname
hostname can be either the device hostname or the device id
Input:
entPhysicalClass: This is used to restrict the class of the inventory, for example you can specify chassis to only return items in the inventory that are labelled as chassis.
entPhysicalContainedIn: This is used to retrieve items within the inventory assigned to a previous component, for example specifying the chassis (entPhysicalIndex) will retrieve all items where the chassis is the parent.
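The recursive lookup described above can be sketched as follows; `fetch` and the sample tree stand in for real HTTP calls to /api/v0/inventory/:hostname, and the component names are invented:

```python
# Invented sample data: parent entPhysicalIndex -> child components.
# None represents the root call (no entPhysicalContainedIn parameter).
TREE = {
    None: [{"entPhysicalIndex": 1, "entPhysicalClass": "chassis"}],
    1: [{"entPhysicalIndex": 2, "entPhysicalClass": "port"}],
    2: [{"entPhysicalIndex": 3, "entPhysicalClass": "sfp"}],
}

def fetch(parent):
    # Stand-in for GET /api/v0/inventory/:hostname?entPhysicalContainedIn=...
    return TREE.get(parent, [])

def walk(parent=None):
    # Yield each item, then recurse into its children by entPhysicalIndex
    for item in fetch(parent):
        yield item
        yield from walk(item["entPhysicalIndex"])

print([i["entPhysicalClass"] for i in walk()])  # ['chassis', 'port', 'sfp']
```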
Retrieve the flattened inventory for a device. This retrieves all inventory items for a device regardless of their structure, and may be more useful for devices with nested components.
Route: /api/v0/inventory/:hostname/all
hostname can be either the device hostname or the device id
Accepts any JSON messages and passes them on for further syslog processing. Single messages or an array of multiple messages are accepted. See Syslog for more details and logstash integration
name is the name of the port group which can be obtained using get_port_groups. Please ensure that the name is urlencoded if it needs to be (i.e. Linux Servers would need to be urlencoded).
Params:
full: set to any value to return all data for the devices in a given group
To get started, you first need some alert rules which will react to changes with your devices before raising an alert.
Creating alert rules
After that you also need to tell LibreNMS how to notify you when an alert is raised, this is done using Alert Transports.
Configuring alert transports
The next step is not strictly required but most people find it useful. Creating custom alert templates will help you get the most benefit out of the alert system in general. Whilst we include a default template, it is limited in the data that you will receive in the alerts.
This column provides you visibility on the status of the alert:
This alert is currently active and sending alerts. Click this icon to acknowledge the alert.
This alert is currently acknowledged until the alert clears. Click this icon to un-acknowledge the alert.
This alert is currently acknowledged until the alert worsens or gets better, at which stage it will be automatically unacknowledged and alerts will resume. Click this icon to un-acknowledge the alert.
This column will allow you access to the acknowledge/unacknowledge notes for this alert.
"},{"location":"Alerting/Creating-Transport/","title":"Creating a new Transport","text":""},{"location":"Alerting/Creating-Transport/#file-location","title":"File location","text":"
All transports are located in LibreNMS\Alert\Transport and the files are named after the Transport name, i.e. Discord.php for Discord.
The following functions are required for a new transport to pass the unit tests:
deliverAlert() - This is the function called within alerts to invoke the transport. Here you should do any post processing of the transport config to get it ready for use.
contact$Transport() - This is named after the transport so for Discord it would be contactDiscord(). This is what actually interacts with the 3rd party API, invokes the mail command or whatever you want your alert to do.
configTemplate() - This is used to define the form that will accept the transport config in the webui and then what data should be validated and how. Validation is done using Laravel validation
The following function is not required for new Transports and exists for legacy reasons only: deliverAlertOld().
Please don't forget to update the Transport file to include details of your new transport.
A table should be provided to indicate the form values that we ask for, with examples, i.e.:
Config Example Discord URL https://discordapp.com/api/webhooks/4515489001665127664/82-sf4385ysuhfn34u2fhfsdePGLrg8K7cP9wl553Fg6OlZuuxJGaa1d54fe Options username=myname"},{"location":"Alerting/Device-Dependencies/","title":"Device Dependencies","text":"
It is possible to set one or more parents for a device. The aim for that is, if all parent devices are down, alert contacts will not receive redundant alerts for dependent devices. This is very useful when you have an outage, say in a branch office, where normally you'd receive hundreds of alerts, but when this is properly configured, you'd only receive an alert for the parent hosts.
There are three ways to configure this feature. The first is from the general settings of a device. The other two can be done from the 'Device Dependencies' item under the 'Devices' menu. On this page, you can see all devices with their parents. Clicking on the 'bin' icon will clear the dependency setting. Clicking on the 'pen' icon will let you edit or change the current setting for the chosen device. There's also a 'Manage Device Dependencies' button on the top. This will let you set parents for multiple devices at once.
For an intro on getting started with Device Dependencies, take a look at our Youtube video
Entities as described earlier are based on the table and column names within the database; if you are unsure of what entity you want, have a browse around inside MySQL using show tables and desc <tablename>.
Below are some common entities that you can use within the alerting system. This list is not exhaustive and you should look at the MySQL database schema for the full list.
"},{"location":"Alerting/Entities/#devices","title":"Devices","text":"Entity Description devices.hostname The device hostname devices.sysName The device sysName devices.sysDescr The device sysDescr devices.hardware The device hardware devices.version The device os version devices.location The device location devices.status The status of the device, 1 devices.status_reason The reason the device was detected as down (icmp or snmp) devices.ignore If the device is ignored this will be set to 1 devices.disabled If the device is disabled this will be set to 1 devices.last_polled The last polled datetime (yyyy-mm-dd hh:mm:ss) devices.type The device type such as network, server, firewall, etc."},{"location":"Alerting/Entities/#bgp-peers","title":"BGP Peers","text":"Entity Description bgpPeers.astext This is the description of the BGP Peer bgpPeers.bgpPeerIdentifier The IP address of the BGP Peer bgpPeers.bgpPeerRemoteAs The AS number of the BGP Peer bgpPeers.bgpPeerState The operational state of the BGP session bgpPeers.bgpPeerAdminStatus The administrative state of the BGP session bgpPeers.bgpLocalAddr The local address of the BGP session."},{"location":"Alerting/Entities/#ipsec-tunnels","title":"IPSec Tunnels","text":"Entity Description ipsec_tunnels.peer_addr The remote VPN peer address ipsec_tunnels.local_addr The local VPN address ipsec_tunnels.tunnel_status The VPN tunnels operational status."},{"location":"Alerting/Entities/#memory-pools","title":"Memory pools","text":"
Entity Description mempools.mempool_type The memory pool type such as hrstorage, cmp and cemp mempools.mempool_descr The description of the pool such as Physical memory, Virtual memory and System memory mempools.mempool_perc The used percentage of the memory pool.
"},{"location":"Alerting/Entities/#ports","title":"Ports","text":"Entity Description ports.ifDescr The interface description ports.ifName The interface name ports.ifSpeed The port speed in bps ports.ifHighSpeed The port speed in mbps ports.ifOperStatus The operational status of the port (up or down) ports.ifAdminStatus The administrative status of the port (up or down) ports.ifDuplex Duplex setting of the port ports.ifMtu The MTU setting of the port."},{"location":"Alerting/Entities/#processors","title":"Processors","text":"Entity Description processors.processor_usage The usage of the processor as a percentage processors.processor_descr The description of the processor."},{"location":"Alerting/Entities/#storage","title":"Storage","text":"Entity Description storage.storage_descr The description of the storage storage.storage_perc The usage of the storage as a percentage."},{"location":"Alerting/Entities/#health-sensors","title":"Health / Sensors","text":"Entity Description sensors.sensor_desc The sensors description. sensors.sensor_current The current sensors value. sensors.sensor_prev The previous sensor value. sensors.lastupdate The sensors last updated datetime stamp."},{"location":"Alerting/Macros/","title":"Macros","text":"
Macros are shorthands for either portions of rules or pure SQL enhanced with placeholders.
You can define your own macros in your config.php.
"},{"location":"Alerting/Macros/#ports-now-down-boolean","title":"Ports now down (Boolean)","text":"
Entity: ports.ifOperStatus != ports.ifOperStatus_prev AND ports.ifOperStatus_prev = \"up\" AND ports.ifAdminStatus = \"up\"
Description: Ports that were previously up and have now gone down.
Example: macros.port_now_down = 1
"},{"location":"Alerting/Macros/#port-has-xdp-neighbour-boolean","title":"Port has xDP neighbour (Boolean)","text":"
Entity: %macros.port AND %links.local_port_id = %ports.port_id
Description: Ports that have an xDP (lldp, cdp, etc) neighbour.
Example: macros.port_has_xdp_neighbours = 1
"},{"location":"Alerting/Macros/#port-has-xdp-neighbour-already-known-in-librenms-boolean","title":"Port has xDP neighbour already known in LibreNMS (Boolean)","text":"
Entity: %macros.port_has_neighbours AND (%links.remote_port_id IS NOT NULL)
Description: Ports that have an xDP (lldp, cdp, etc) neighbour that is already known in LibreNMS.
Rules must consist of at least 3 elements: An Entity, a Condition and a Value. Rules can contain braces and Glues. Entities are provided from Table and Field from the database. For Example: ports.ifOperStatus.
Conditions can be any of:
Equals =
Not Equals !=
In IN
Not In NOT IN
Begins with LIKE ('...%')
Doesn't begin with NOT LIKE ('...%')
Contains LIKE ('%...%')
Doesn't Contain NOT LIKE ('%...%')
Ends with LIKE ('%...')
Doesn't end with NOT LIKE ('%...')
Between BETWEEN
Not Between NOT BETWEEN
Is Empty = ''
Is Not Empty != ''
Is Null IS NULL
Is Not Null IS NOT NULL
Greater >
Greater or Equal >=
Less <
Less or Equal <=
Regex REGEXP
Values can be an entity or any data. If using a macro as the value, you must enclose the macro name in backticks, i.e. `macros.past_60m`
On the Advanced tab, you can specify some additional options for the alert rule:
Override SQL: Enable this if you are using a custom query
Query: The query to be used for the alert.
An example of this would be an average rule for all CPUs over 10%
SELECT devices.device_id, devices.status, devices.disabled, devices.ignore, \nAVG(processors.processor_usage) AS cpu_avg FROM \ndevices INNER JOIN processors ON devices.device_id \n= processors.device_id WHERE devices.device_id \n= ? AND devices.status = 1 AND devices.disabled = \n0 AND devices.ignore = 0 GROUP BY devices.device_id, \ndevices.status, devices.disabled, devices.ignore \nHAVING AVG(processors.processor_usage) \n> 10\n
The 10 here is the average CPU usage threshold; you can change this value to be whatever you like.
You will need to copy and paste this into the Query box under the Advanced tab of the Alert Rule, then enable the Override SQL switch.
You can associate a rule to a procedure by giving the URL of the procedure when creating the rule. Only links like \"http://\" are supported, otherwise an error will be returned. Once configured, the procedure can be opened from the Alert widget through the \"Open\" button, which can be shown/hidden from the widget configuration box.
Root-directory gets too full: storage.storage_descr = '/' AND storage.storage_perc >= '75'
Any storage gets fuller than the 'warning': storage.storage_perc >= storage_perc_warn
If device is a server and the used storage is above the warning level, but ignore /boot partitions: storage.storage_perc > storage.storage_perc_warn AND devices.type = \"server\" AND storage.storage_descr != \"/boot\"
VMware LAG is not using \"Source ip address hash\" load balancing: devices.os = \"vmware\" AND ports.ifType = \"ieee8023adLag\" AND ports.ifDescr REGEXP \"Link Aggregation .*, load balancing algorithm: Source ip address hash\"
Syslog, authentication failure during the last 5m: syslog.timestamp >= macros.past_5m AND syslog.msg REGEXP \".*authentication failure.*\"
High memory usage: macros.device_up = 1 AND mempools.mempool_perc >= 90 AND mempools.mempool_descr REGEXP \"Virtual.*\"
High CPU usage(per core usage, not overall): macros.device_up = 1 AND processors.processor_usage >= 90
High port usage, where description is not client & ifType is not softwareLoopback: macros.port_usage_perc >= 80 AND ports.port_descr_type != \"client\" AND ports.ifType != \"softwareLoopback\"
Alert when a mac address is located on your network: ipv4_mac.mac_address = \"2c233a756912\"
You can also select Alert Rules from the Alerts Collection. These Alert Rules are submitted by users in the community :) If you would like to submit your alert rules to the collection, please submit them here Alert Rules Collection
This page is for installs running version 1.42 or later. You can find the older docs here
Templates can be assigned to a single rule or a group of rules and can contain any kind of text. There is also a default template which is used for any rule that isn't associated with a template. This template can be found under the Alert Templates page and can be edited. It also has an option to revert it back to its default content.
To attach a template to a rule just open the Alert Templates settings page, choose the template to assign and click the yellow button in the Actions column. In the popup box that appears, select the rule(s) you want the template to be assigned to and click the Attach button. You can hold down the CTRL key to select multiple rules at once.
The templating engine in use is Laravel Blade. We will cover some of the basics here, however the official Laravel docs will have more information here
Placeholders are special variables that if used within the template will be replaced with the relevant data, I.e:
The device {{ $alert->hostname }} has been up for {{ $alert->uptime }} seconds would result in the following The device localhost has been up for 30344 seconds.
When using placeholders to echo data, you need to wrap the placeholder in {{ }}. I.e {{ $alert->hostname }}.
Device ID: $alert->device_id
Hostname of the Device: $alert->hostname
sysName of the Device: $alert->sysName
sysDescr of the Device: $alert->sysDescr
display name of the Device: $alert->display
sysContact of the Device: $alert->sysContact
OS of the Device: $alert->os
Type of Device: $alert->type
IP of the Device: $alert->ip
Hardware of the Device: $alert->hardware
Software version of the Device: $alert->version
Features of the Device: $alert->features
Serial number of the Device: $alert->serial
Location of the Device: $alert->location
uptime of the Device (in seconds): $alert->uptime
Short uptime of the Device (28d 22h 30m 7s): $alert->uptime_short
Long uptime of the Device (28 days, 22h 30m 7s): $alert->uptime_long
Description (purpose db field) of the Device: $alert->description
Notes of the Device: $alert->notes
Notes of the alert (ack notes): $alert->alert_notes
Time Elapsed, Only available on recovery ($alert->state == 0): $alert->elapsed
Rule Builder (the actual rule) (use {!! $alert->builder !!}): $alert->builder
Alert-ID: $alert->id
Unique-ID: $alert->uid
Faults, Only available on alert ($alert->state != 0), must be iterated in a foreach (@foreach ($alert->faults as $key => $value) @endforeach). Holds all available information about the Fault, accessible in the format $value['Column'], for example: $value['ifDescr']. Special field $value['string'] has most Identification-information (IDs, Names, Descrs) as single string, this is the equivalent of the default used and must be encased in {{ }}
State: $alert->state
Severity: $alert->severity
Rule: $alert->rule
Rule-Name: $alert->name
Procedure URL: $alert->proc
Timestamp: $alert->timestamp
Transport type: $alert->transport
Transport name: $alert->transport_name
Contacts, must be iterated in a foreach, $key holds email and $value holds name: $alert->contacts
Placeholders can be used within the subjects for templates as well although $faults is most likely going to be worthless.
The Default Template is a 'one-size-fits-all'. We highly recommend defining your own templates for your rules to include more specific information.
You can use plain text or html as per Alert templates and this will form the basis of your common template, feel free to make as many templates in the directory as needed.
There are two helpers for graphs that will use a signed url to allow secure external access. Anyone using the signed url will be able to view the graph.
Your LibreNMS web interface must be accessible from the location where the graph is viewed. Some alert transports require publicly accessible urls.
APP_URL must be set in .env to use signed graphs.
Changing APP_KEY will invalidate all previously issued signed urls.
You may specify the graph in one of two ways: a php array of parameters, or a direct url to a graph.
Note that to and from can be specified either as timestamps with time() or as relative time -3d or -36h. When using relative time, the graph will show based on when the user views the graph, not when the event happened. Sharing a graph image with a relative time will always give the recipient access to current data, where a specific timestamp will only allow access to that timeframe.
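A small Python sketch of the difference (the 3-day offset and 30-minute window are arbitrary examples):

```python
import time

# Relative form: resolved whenever the graph is viewed, so the recipient
# always sees current data
relative = {"from": "-3d"}

# Absolute form: pinned to when the event happened (a hypothetical event
# three days ago), so only that timeframe is accessible
event = int(time.time()) - 3 * 86400
absolute = {"from": event - 1800, "to": event + 1800}  # 30 min either side
```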
This will insert a specially formatted html img tag linking to the graph. Some transports may search the template for this tag to attach images properly for that transport.
"},{"location":"Alerting/Templates/#using-models-for-optional-data","title":"Using models for optional data","text":"
If some value does not exist within the $faults[]-array, you may query fields from the database using Laravel models. You may use models to query additional values and use them in the template by placing the model and the value to search for within the braces. For example, ISIS-alerts do have a port_id value associated with the alert but ifName is not directly accessible from the $faults[]-array. If the name of the port was needed, its value could be queried using a template such as:
We include a few templates for you to use, these are specific to the type of alert rules you are creating. For example if you create a rule that would alert on BGP sessions then you can assign the BGP template to this rule to provide more information.
The included templates apart from the default template are:
BGP Sessions
Ports
Temperature
"},{"location":"Alerting/Templates/#other-examples","title":"Other Examples","text":""},{"location":"Alerting/Templates/#microsoft-teams-markdown","title":"Microsoft Teams - Markdown","text":"
The simplest way of testing if an alert rule will match a device is by going to the device, clicking edit (the cog), select Capture. From this new screen choose Alerts and click run.
The output will cycle through all alerts applicable to this device and show you the Rule name, rule, MySQL query and if the rule matches.
It's possible to test your new template before assigning it to a rule. To do so you can run ./scripts/test-template.php. The script will provide the help info when run without any parameters.
As an example, if you wanted to test template ID 10 against localhost running rule ID 2 then you would run:
If the rule is currently alerting for localhost then you will get the full template as expected to see on email, if it's not then you will just see the template without any fault information.
Transports are located within LibreNMS/Alert/Transport/ and can be configured within the WebUI under Alerts -> Alert Transports.
Contacts will be gathered automatically and passed to the configured transports. By default the Contacts will be only gathered when the alert triggers and will ignore future changes in contacts for the incident. If you want contacts to be re-gathered before each dispatch, please set 'Updates to contact email addresses not honored' to Off in the WebUI.
The contacts will always include the SysContact defined in the Device's SNMP configuration and also every LibreNMS user that has at least read-permissions on the entity that is to be alerted.
At the moment LibreNMS only supports Port or Device permissions.
You can exclude the SysContact by toggling 'Issue alerts to sysContact'.
To include users that have Global-Read, Administrator or Normal-User permissions it is required to toggle the options:
Issue alerts to admins.
Issue alerts to read only users.
Issue alerts to normal users.
"},{"location":"Alerting/Transports/#using-a-proxy","title":"Using a Proxy","text":"
Proxy Configuration
"},{"location":"Alerting/Transports/#using-a-amqp-based-transport","title":"Using an AMQP based Transport","text":"
You need to install an additional php module : bcmath
The alerta monitoring system is a tool used to consolidate and de-duplicate alerts from multiple sources for quick \u2018at-a-glance\u2019 visualisation. With just one system you can monitor alerts from many other monitoring tools on a single screen.
Example:
Config Example API Endpoint http://alerta.example.com/api/alert Environment Production API key api key with write permission Alert state critical Recover state cleared"},{"location":"Alerting/Transports/#alertops","title":"AlertOps","text":"
Using AlertOps integration with LibreNMS, you can seamlessly forward alerts to AlertOps with detailed information. AlertOps acts as a dispatcher for LibreNMS alerts, allowing you to determine the right individuals or teams to notify based on on-call schedules. Notifications can be sent via various channels including email, text messages (SMS), phone calls, and mobile push notifications for iOS & Android devices. Additionally, AlertOps provides escalation policies to ensure alerts are appropriately managed until they are assigned or closed. You can also filter out/aggregate alerts based on different values.
To set up the integration:
Create a LibreNMS Integration: Sign up for an AlertOps account and create a LibreNMS integration from the integrations page. This will generate an Inbound Integration Endpoint URL that you'll need to copy to LibreNMS.
Configure LibreNMS Integration: In LibreNMS, navigate to the integration settings and paste the inbound integration URL obtained from AlertOps.
Example:
Config Example WebHook URL https://url/path/to/webhook"},{"location":"Alerting/Transports/#alertmanager","title":"Alertmanager","text":"
Alertmanager is an alert handling software, initially developed for alert processing sent by Prometheus.
It has built-in functionality for deduplicating, grouping and routing alerts based on configurable criteria.
LibreNMS uses alert grouping by alert rule, which can produce an array of alerts of similar content for an array of hosts, whereas Alertmanager can group them by alert meta, ideally producing one single notice in case an issue occurs.
It is possible to configure as many label values as required in Alertmanager Options section. Every label and its value should be entered as a new line.
Labels can be a fixed string or a dynamic variable from the alert. To set a dynamic variable your label must start with extra_ and then be completed with the name of your label (only letters, digits and underscores are allowed here). The value must be the name of the variable you want to get (you can see all the variables in Alerts->Notifications by clicking on the Details icon of your alert when it is pending). If the variable's name does not match an existing value, the label's value will be the string you provided, just as if it were a fixed string.
Multiple Alertmanager URLs (comma separated) are supported. Each URL will be tried and the search will stop at the first success.
Basic HTTP authentication with a username and a password is supported. If you leave those values blank, no authentication will be used.
The API transport allows you to reach any service provider using POST, PUT or GET URLs (like SMS providers, etc). It can be used in multiple ways:
The same text built from the Alert template is available in the variable
$msg, which can then be sent as an option to the API. Be careful that HTTP GET requests are usually limited in length.
The API-Option fields can be directly built from the variables defined in Template-Syntax but without the 'alert->' prefix. For instance, $alert->uptime is available as $uptime in the API transport
The API-Headers field allows you to add the headers that the API endpoint requires.
The API-Body field allows sending data in the format required by the API endpoint.
A few commonly used variables:
Variable Description {{ $hostname\u00a0}} Hostname {{ $sysName\u00a0}} SysName {{ $sysDescr\u00a0}} SysDescr {{ $os\u00a0}} OS of device (librenms defined) {{ $type\u00a0}} Type of device (librenms defined) {{ $ip\u00a0}} IP Address {{ $hardware\u00a0}} Hardware {{ $version\u00a0}} Version {{ $uptime\u00a0}} Uptime in seconds {{ $uptime_short\u00a0}} Uptime in human-readable format {{ $timestamp\u00a0}} Timestamp of alert {{ $description\u00a0}} Description of device {{ $title\u00a0}} Title (as built from the Alert Template) {{ $msg\u00a0}} Body text (as built from the Alert Template)
Example:
The example below will use the API named sms-api of my.example.com and send the title of the alert to the provided number using the provided service key. Refer to your service documentation to configure it properly.
Config Example API Method GET API URL http://my.example.com/sms-api API Options rcpt=0123456789 key=0987654321abcdef msg=(LNMS) {{ $title }} API Username myUsername API Password myPassword
The example below will use the API named wall-display of my.example.com and send the title and text of the alert to a screen in the Network Operation Center.
Config Example API Method POST API URL http://my.example.com/wall-display API Options title={{ $title }} msg={{ $msg }}
The example below will use the API named component of my.example.com with id 1, sending a JSON status value in the body along with the token authentication and content type headers required by the endpoint.
Config Example API Method PUT API URL http://my.example.com/component/1 API Headers X-Token=HASH Content-Type=application/json API Body { \"status\": 2 }"},{"location":"Alerting/Transports/#aspsms","title":"aspSMS","text":"
aspSMS is an SMS provider that can be configured using the generic API Transport. You need a token, which you can find in your personal space.
aspSMS docs
Example:
Config Example Transport type Api API Method POST API URL https://soap.aspsms.com/aspsmsx.asmx/SimpleTextSMS Options UserKey=USERKEY Password=APIPASSWORD Recipient=RECIPIENT Originator=ORIGINATOR MessageText={{ $msg }}"},{"location":"Alerting/Transports/#browser-push","title":"Browser Push","text":"
Browser push notifications can send a notification to the user's device even when the browser is not open. This requires HTTPS, the PHP GMP extension, Push API support, and permissions on each device to send alerts.
Simply configure an alert transport and allow notification permission on the device(s) you wish to receive alerts on. You may disable alerts on a browser on the user preferences page.
Canopsis is a hypervision tool. LibreNMS can send alerts to Canopsis, which are then converted to Canopsis events.
Canopsis Docs
Example:
Config Example Hostname www.xxx.yyy.zzz Port Number 5672 User admin Password my_password Vhost canopsis"},{"location":"Alerting/Transports/#cisco-spark-aka-webex-teams","title":"Cisco Spark (aka Webex Teams)","text":"
Cisco Spark (now known as Webex Teams). LibreNMS can send alerts to a Cisco Spark room. To make this possible you need to have a RoomID and a token. You can also choose to send alerts using Markdown syntax. Enabling this option provides for more richly formatted alerts, but be sure to adjust your alert template to account for the Markdown syntax.
For more information about the Cisco Spark RoomID and token, take a look here:
Getting started
Rooms
Example:
Config Example API Token ASd23r23edewda RoomID 34243243251 Use Markdown? x"},{"location":"Alerting/Transports/#clickatell","title":"Clickatell","text":"
Clickatell provides a REST-API requiring an Authorization-Token and at least one Cellphone number.
Clickatell Docs
Here is an example using 3 numbers; any number of recipients is supported:
Example:
Config Example Token dsaWd3rewdwea Mobile Numbers +1234567890,+1234567891,+1234567892"},{"location":"Alerting/Transports/#discord","title":"Discord","text":"
The Discord transport will POST the alert message to your Discord Incoming WebHook. Simple html tags are stripped from the message.
The only required value is url; without it, no call to Discord will be made. The Options field supports the JSON/Form params listed in the Discord docs below.
Discord Docs
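A minimal sketch of the JSON body such a webhook POST carries (the field names `content` and `username` come from Discord's webhook documentation; the helper itself is illustrative, not the transport's actual code):

```python
# Build the JSON body for a Discord incoming-webhook POST.
# "content" is the message text; extra Options such as username are
# merged in. Illustrative helper, not LibreNMS source code.
import json

def build_discord_payload(message: str, options: dict) -> str:
    payload = {"content": message}
    payload.update(options)  # e.g. username=myname from the Options field
    return json.dumps(payload)

print(build_discord_payload("Device Down: core-sw1", {"username": "librenms"}))
```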
Example:
Config Example Discord URL https://discordapp.com/api/webhooks/4515489001665127664/82-sf4385ysuhfn34u2fhfsdePGLrg8K7cP9wl553Fg6OlZuuxJGaa1d54fe Options username=myname"},{"location":"Alerting/Transports/#elasticsearch","title":"Elasticsearch","text":"
You can have LibreNMS send alerts to an elasticsearch database. Each fault will be sent as a separate document.
Example:
Config Example Host 127.0.0.1 Port 9200 Index Pattern \\l\\i\\b\\r\\e\\n\\m\\s-Y.m.d"},{"location":"Alerting/Transports/#gitlab","title":"GitLab","text":"
LibreNMS will create issues for warning and critical level alerts; however, only the title and description are set. It uses personal access tokens to authenticate with GitLab and will store the token in cleartext.
Example:
Config Example Host http://gitlab.host.tld Project ID 1 Personal Access Token AbCdEf12345"},{"location":"Alerting/Transports/#grafana-oncall","title":"Grafana Oncall","text":"
Send alerts to Grafana Oncall using a Formatted Webhook
Example:
Config Example Webhook URL https://a-prod-us-central-0.grafana.net/integrations/v1/formatted_webhook/m12xmIjOcgwH74UF8CN4dk0Dh/"},{"location":"Alerting/Transports/#hipchat","title":"HipChat","text":"
See the HipChat API Documentation for rooms/message for details on acceptable values.
You may notice that the link points at the \"deprecated\" v1 API. This is because the v2 API is still in beta.
Example:
Config Example API URL https://api.hipchat.com/v1/rooms/message?auth_token=109jawregoaihj Room ID 7654321 From Name LibreNMS Options color=red
At present the following options are supported: color.
Note: The default message format for HipChat messages is HTML. It is recommended that you specify the text message format to prevent unexpected results, such as HipChat attempting to interpret angled brackets (< and >).
The IRC transport only works together with the LibreNMS IRC-Bot. Configuration of the LibreNMS IRC-Bot is described here.
Example:
Config Example IRC enabled"},{"location":"Alerting/Transports/#jira","title":"JIRA","text":"
You can have LibreNMS create issues on a Jira instance for critical and warning alerts using either the Jira REST API or webhooks. Custom fields allow you to add any required fields beyond the summary and description fields, in case mandatory fields are required by your Jira project/issue type configuration. Custom fields are defined in JSON format. Currently HTTP authentication is used to access Jira, and the Jira username and password will be stored as cleartext in the LibreNMS database.
The config fields that need to be set for webhooks are: Jira Open URL, Jira Close URL, Jira username, Jira password and webhook ID.
Note: Webhooks allow more control over how alerts are handled in Jira. With webhooks, recovery messages can be sent to a different URL than alerts. Additionally, custom conditional logic can be built using the webhook payload and ID to automatically close an open ticket if predefined conditions are met.
Jira Issue Types Jira Webhooks
Example:
Config Example Project Key JIRAPROJECTKEY Issue Type Myissuetype Open URL https://myjira.mysite.com / https://webhook-open-url Close URL https://webhook-close-url Jira Username myjirauser Jira Password myjirapass Enable webhook ON/OFF Webhook ID alert_id Custom Fields {\"components\":[{\"id\":\"00001\"}], \"source\": \"LibreNMS\"}"},{"location":"Alerting/Transports/#jira-service-management","title":"Jira Service Management","text":"
Using the Jira Service Management LibreNMS integration, LibreNMS forwards alerts to Jira Service Management with detailed information. Jira Service Management acts as a dispatcher for LibreNMS alerts, determines the right people to notify based on on-call schedules, and notifies them via email, text messages (SMS), phone calls, and iOS & Android push notifications. It then escalates alerts until the alert is acknowledged or closed.
:warning: If the feature isn\u2019t available on your site, keep checking Jira Service Management for updates.
Example:
Config Example WebHook URL https://url/path/to/webhook"},{"location":"Alerting/Transports/#line-messaging-api","title":"LINE Messaging API","text":"
LINE Messaging API Docs
Here are the steps to set up a LINE bot and use it in LibreNMS.
Use your real LINE account to register in the developer portal.
Add a new channel, choose Messaging API and fill in the forms; note that the Channel name cannot be edited later.
Go to the \"Messaging API\" tab of your channel, which lists some important values.
Bot basic ID and QR code are your LINE bot's ID and QR code.
Channel access token (long-lived) will be used in LibreNMS; keep it safe.
Use your real LINE account to add your LINE bot as a friend.
The Recipient ID can be a groupID, userID or roomID; it will be used in LibreNMS to send messages to a group or a user. Use the following NodeJS program and ngrok as a temporary HTTPS webhook to listen for it.
LINE-bot-RecipientFetcher
Run the program and use ngrok to expose the port publicly
$ node index.js\n$ ngrok http 3000\n
Go to the \"Messaging API\" tab of your channel and set the Webhook URL to https://<your ngrok domain>/webhook
If you want to let the LINE bot send messages to yourself, use your real account to send a message to your LINE bot. The program will print out the userID in the console.
Config Example Access token fhJ9vH2fsxxxxxxxxxxxxxxxxxxxxlFU= Recipient (groupID, userID or roomID) Ce51xxxxxxxxxxxxxxxxxxxxxxxxxx6ef"},{"location":"Alerting/Transports/#line-notify","title":"LINE Notify","text":"
LINE Notify
LINE Notify API Document
Example:
Config Example Token AbCdEf12345"},{"location":"Alerting/Transports/#mail","title":"Mail","text":"
The E-Mail transport uses the same email configuration as the rest of LibreNMS. As a small reminder, here are its configuration directives, including defaults:
Emails will attach all graphs included with the @signedGraphTag directive. If the email format is set to html, they will be embedded. To disable attaching images, set email_attach_graphs to false.
Config Example Email me@example.com"},{"location":"Alerting/Transports/#matrix","title":"Matrix","text":"
For the Matrix transport, you have to create a room on the Matrix server. The provided Auth_token belongs to a user who is a member of this room. The message sent to the Matrix room can be built from the variables defined in Template-Syntax, but without the 'alert->' prefix. See API-Transport. The variable $msg contains the result of the alert template. The Matrix-Server URL is cut off before the beginning of the _matrix/client/r0/... API part.
LibreNMS can send text messages through Messagebird Rest API transport.
Config Example Api Key API REST key given in the Messagebird dashboard Originator E.164 formatted originator Recipient E.164 formatted recipient; for multiple recipients, comma separated Character limit Range 1..480 (max 3 split messages)"},{"location":"Alerting/Transports/#messagebird-voice","title":"Messagebird Voice","text":"
LibreNMS can send messages through Messagebird voice Rest API transport (text to speech).
Config Example Api Key API REST key given in the Messagebird dashboard Originator E.164 formatted originator Recipient E.164 formatted recipient; for multiple recipients, comma separated Language Select box for options Spoken voice Female or Male Repeat X times the message is repeated"},{"location":"Alerting/Transports/#microsoft-teams","title":"Microsoft Teams","text":"
LibreNMS can send alerts to Microsoft Teams Incoming Webhooks which are then posted to a specific channel. Microsoft recommends using markdown formatting for connector cards. Administrators can opt to compose the MessageCard themselves using JSON to get the full functionality.
Example:
Config Example WebHook URL https://outlook.office365.com/webhook/123456789 Use JSON? x"},{"location":"Alerting/Transports/#nagios-compatible","title":"Nagios Compatible","text":"
The Nagios transport will feed a FIFO at the defined location with the same format that Nagios would. This allows you to use other alerting systems with LibreNMS, for example Flapjack.
Example:
Config Example Nagios FIFO /path/to/my.fifo"},{"location":"Alerting/Transports/#opsgenie","title":"OpsGenie","text":"
Using the OpsGenie LibreNMS integration, LibreNMS forwards alerts to OpsGenie with detailed information. OpsGenie acts as a dispatcher for LibreNMS alerts, determines the right people to notify based on on-call schedules, and notifies them via email, text messages (SMS), phone calls, and iOS & Android push notifications. It then escalates alerts until the alert is acknowledged or closed.
Create a LibreNMS Integration from the integrations page once you sign up. Then copy the API key from OpsGenie to LibreNMS.
If you want to automatically ack and close alerts, leverage Marid integration. More detail with screenshots is available in OpsGenie LibreNMS Integration page.
Example:
Config Example WebHook URL https://url/path/to/webhook"},{"location":"Alerting/Transports/#osticket","title":"osTicket","text":"
LibreNMS can send alerts to osTicket API which are then converted to osTicket tickets.
Example:
Config Example API URL http://osticket.example.com/api/http.php/tickets.json API Token 123456789"},{"location":"Alerting/Transports/#pagerduty","title":"PagerDuty","text":"
LibreNMS can make use of PagerDuty; this is done by utilizing an API key and an Integration Key.
API Keys can be found under 'API Access' in the PagerDuty portal.
Integration Keys can be found under 'Integration' for the particular Service you have created in the PagerDuty portal.
Example:
Config Example API Key randomsample Integration Key somerandomstring"},{"location":"Alerting/Transports/#philips-hue","title":"Philips Hue","text":"
Want to spice up your NOC life? LibreNMS will flash all lights connected to your Philips Hue Bridge whenever an alert is triggered.
To set up, go to http://your-bridge-ip/debug/clip.html
Update the \"URL:\" field to /api
Paste this in the \"Message Body\" {\"devicetype\":\"librenms\"}
Press the round button on your Philips Hue Bridge
Click on POST
In the Command Response you should see output with your username. Copy this without the quotes.
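The steps above amount to a single username-creation call against the bridge's API; a sketch (the send itself is commented out because it needs a reachable bridge with the link button pressed, and your-bridge-ip is a placeholder):

```python
# Sketch of the bridge call performed via the clip.html debug page:
# POST {"devicetype":"librenms"} to /api to create an API username.
# "your-bridge-ip" is a placeholder; replace with your bridge's address.
import json
import urllib.request

bridge = "http://your-bridge-ip"
req = urllib.request.Request(
    f"{bridge}/api",
    data=json.dumps({"devicetype": "librenms"}).encode(),
    method="POST",
)
# resp = json.load(urllib.request.urlopen(req))
# the response contains the generated username to copy into LibreNMS
print(req.full_url, req.get_method())
```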
More Info: Philips Hue Documentation
Example:
Config Example Host http://your-bridge-ip Hue User username Duration 1 Second"},{"location":"Alerting/Transports/#playsms","title":"PlaySMS","text":"
PlaySMS is an open source SMS-Gateway that can be used via their HTTP API using a Username and WebService Token. Please consult PlaySMS's documentation regarding number formatting.
PlaySMS Docs
Here is an example using 3 numbers; any number of recipients is supported:
Example:
Config Example PlaySMS https://localhost/index.php User user1 Token MYFANCYACCESSTOKEN From My Name Mobiles +1234567892,+1234567890,+1234567891"},{"location":"Alerting/Transports/#pushbullet","title":"Pushbullet","text":"
Get your Access Token from your Pushbullet's settings page and set it in your transport:
Example:
Config Example Access Token MYFANCYACCESSTOKEN"},{"location":"Alerting/Transports/#pushover","title":"Pushover","text":"
If you want to change the default notification sound for all notifications then you can add the following in Pushover Options:
sound=falling
You can also change the sound per severity: sound_critical=falling sound_warning=siren sound_ok=magic
Enabling Pushover support is fairly easy, there are only two required parameters.
Firstly you need to create a new Application (called LibreNMS, for example) in your account on the Pushover website (https://pushover.net/apps).
Now copy your API Key and obtain your User Key from the newly created Application and setup the transport.
Pushover Docs
Example:
Config Example Api Key APPLICATIONAPIKEYGOESHERE User Key USERKEYGOESHERE Pushover Options sound_critical=falling sound_warning=siren sound_ok=magic"},{"location":"Alerting/Transports/#rocketchat","title":"Rocket.chat","text":"
The Rocket.chat transport will POST the alert message to your Rocket.chat Incoming WebHook using the attachments option. Simple HTML tags are stripped from the message. All options are optional; the only required value is url, and without it no call to Rocket.chat will be made.
The Sensu transport will POST an Event to the Agent API upon an alert being generated.
It will be categorised (ok, warning or critical), and if you configure the alert to send recovery notifications, Sensu will also clear the alert automatically. No configuration is required - as long as you are running the Sensu Agent on your poller with the HTTP socket enabled on tcp/3031, LibreNMS will start generating Sensu events as soon as you create the transport.
Acknowledging alerts within LibreNMS is not directly supported, but an annotation (acknowledged) is set, so a mutator, silence, or handler could be written to look for it. There is also an annotation (generated-by) set, to allow you to treat LibreNMS events differently from agent events.
The 'shortname' option is a simple way to reduce the length of device names in configs. It replaces the last 3 domain components with single letters (e.g. websrv08.dc4.eu.corp.example.net gets shortened to websrv08.dc4.eu.cen).
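The shortening rule can be sketched as follows (an approximation of the behaviour described above, not the Transport's actual code):

```python
def shorten(hostname: str) -> str:
    """Replace the last 3 domain components with their first letters."""
    parts = hostname.split(".")
    if len(parts) <= 3:
        return hostname  # too short to shorten
    head, tail = parts[:-3], parts[-3:]
    # corp.example.net -> "cen", appended as one final component
    return ".".join(head + ["".join(p[0] for p in tail)])

print(shorten("websrv08.dc4.eu.corp.example.net"))  # -> websrv08.dc4.eu.cen
```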
Sensu will reject rules with special characters - the Transport will attempt to fix up rule names, but it's best to stick to letters, numbers and spaces
The transport only deals in absolutes - it ignores the got worse/got better states
The agent will buffer alerts, but LibreNMS will not - if your agent is offline, alerts will be dropped
There is no backchannel between Sensu and LibreNMS - if you make changes in Sensu to LibreNMS alerts, they'll be lost on the next event (silences will work)
Example:
Config Example Sensu Endpoint http://localhost:3031 Sensu Namespace eu-west Check Prefix lnms Source Key hostname"},{"location":"Alerting/Transports/#signl4","title":"SIGNL4","text":"
SIGNL4 offers critical alerting, incident response and service dispatching for operating critical infrastructure. It alerts you persistently via app push, SMS text, voice calls, and email including tracking, escalation, on-call duty scheduling and collaboration.
Integrate SIGNL4 with LibreNMS to forward critical alerts with detailed information to the responsible people or on-call teams. The integration supports triggering as well as closing alerts.
In the configuration for your SIGNL4 alert transport you just need to enter your SIGNL4 webhook URL including team or integration secret.
Example:
Config Example Webhook URL https://connect.signl4.com/webhook/{team-secret}
You can find more information about the integration here.
The Slack transport will POST the alert message to your Slack Incoming WebHook using the attachments option; you are able to specify multiple webhooks along with the relevant options to go with them. Simple HTML tags are stripped from the message. All options are optional; the only required value is url, and without it no call to Slack will be made.
We currently support the following attachment options:
author_name
We currently support the following global message options:
channel_name : Slack channel name (without the leading '#') to which the alert will go
icon_emoji : Emoji name in colon format to use as the author icon
Slack docs
The alert template can make use of Slack markdown. In the Slack markdown dialect, custom links are denoted with HTML angled brackets, but LibreNMS strips these out. To support embedding custom links in alerts, use the bracket/parentheses markdown syntax for links. For example if you would typically use this for a Slack link:
<https://www.example.com|My Link>
Use this in your alert template:
[My Link](https://www.example.com)
Example:
Config Example Webhook URL https://slack.com/url/somehook Channel network-alerts Author Name LibreNMS Bot Icon :scream:"},{"location":"Alerting/Transports/#smseagle","title":"SMSEagle","text":"
SMSEagle is a hardware SMS Gateway that can be used via their HTTP API using a Username and password.
Destination numbers are one per line, with no spaces. They can be in either local or international dialling format.
SMSEagle Docs
Example:
Config Example SMSEagle Host ip.add.re.ss User smseagle_user Password smseagle_user_password Mobiles +3534567890 0834567891"},{"location":"Alerting/Transports/#smsmode","title":"SMSmode","text":"
SMSmode is an SMS provider that can be configured using the generic API Transport. You need a token, which you can find in your personal space.
SMSmode docs
Example:
Config Example Transport type Api API Method POST API URL http://api.smsmode.com/http/1.6/sendSMS.do Options accessToken=PUT_HERE_YOUR_TOKEN numero=PUT_HERE_DESTS_NUMBER_COMMA_SEPARATED message={{ $msg }}"},{"location":"Alerting/Transports/#splunk","title":"Splunk","text":"
LibreNMS can send alerts to a Splunk instance and provide all device and alert details.
Config Example Host 127.0.0.1 UDP Port 514"},{"location":"Alerting/Transports/#syslog","title":"Syslog","text":"
You can have LibreNMS emit alerts as syslogs complying with RFC 3164.
More information on RFC 3164 can be found here: https://tools.ietf.org/html/rfc3164
Example output: <26> Mar 22 00:59:03 librenms.host.net librenms[233]: [Critical] network.device.net: Port Down - port_id => 98939; ifDescr => xe-1/1/0;
Each fault will be sent as a separate syslog.
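The framing in the example above can be reproduced with a short sketch (illustrative only; note that strftime's zero-padded day is an approximation of RFC 3164's space-padded day):

```python
# Sketch: emit an RFC 3164-style syslog line over UDP.
# PRI = facility * 8 + severity; facility 3 (daemon) with severity 2
# (critical) gives the <26> shown in the example above.
import socket
import time

def send_syslog(host, port, facility, severity, tag, msg):
    pri = facility * 8 + severity
    timestamp = time.strftime("%b %d %H:%M:%S")
    line = f"<{pri}> {timestamp} {socket.gethostname()} {tag}: {msg}"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(line.encode(), (host, port))
    sock.close()
    return line

send_syslog("127.0.0.1", 514, 3, 2, "librenms[233]", "Port Down")
```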
Example:
Config Example Host 127.0.0.1 Port 514 Facility 3"},{"location":"Alerting/Transports/#telegram","title":"Telegram","text":"
Thank you to snis for these instructions.
First you must create a Telegram account and add BotFather to your contact list. To do this, click on the following url: https://telegram.me/botfather
Generate a new bot with the command \"/newbot\". BotFather will then ask for a username and a normal name. After that your bot is created and you get an HTTP token. (For more options for your bot, type \"/help\".)
Add your bot to telegram with the following url: http://telegram.me/<botname> to use app or https://web.telegram.org/<botname> to use in web, and send some text to the bot.
The BotFather should have responded with a token; copy your token code and go to the following page in your browser: https://api.telegram.org/bot<tokencode>/getUpdates (this could take a while, so continue to refresh until you see something similar to below)
You will see JSON containing the message you sent to the bot. Copy the Chat ID; in this example it is \u201c-9787468\u201d, found in: \"message\":{\"message_id\":7,\"from\":{\"id\":656556,\"first_name\":\"Joo\",\"last_name\":\"Doo\",\"username\":\"JohnDoo\"},\"chat\":{\"id\":-9787468,\"title\":\"Telegram Group\"},\"date\":1435216924,\"text\":\"Hi\"}}]}.
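The chat id can also be pulled out of the getUpdates response programmatically; a sketch with a minimal made-up sample shaped like the real response:

```python
# Extract the chat id from a Telegram getUpdates response.
# The sample JSON below is a minimal illustrative fabrication, not
# captured output from a real bot.
import json

sample = ('{"ok": true, "result": [{"message": {"message_id": 7, '
          '"chat": {"id": -9787468, "title": "Telegram Group"}, '
          '"text": "Hi"}}]}')

updates = json.loads(sample)
chat_id = updates["result"][0]["message"]["chat"]["id"]
print(chat_id)  # -> -9787468
```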
Now create a new \"Telegram transport\" in LibreNMS (Global Settings -> Alerting Settings -> Telegram transport). Click on 'Add Telegram config' and put your chat id and token into the relevant box.
If you want to use a group to receive alerts, you need to pick the Chat ID of the group chat, and not of the bot itself.
Telegram Docs
Example:
Config Example Chat ID 34243432 Token 3ed32wwf235234 Format HTML or MARKDOWN"},{"location":"Alerting/Transports/#twilio-sms","title":"Twilio SMS","text":"
Twilio will send your alert via SMS. From your Twilio account you will need your account SID, account token and your Twilio SMS phone number that you would like to send the alerts from. Twilio's APIs are located at: https://www.twilio.com/docs/api?filter-product=sms
Example:
Config Example SID ACxxxxxxxxxxxxxxxxxxxxxxxxxxxx Token 7xxxx573acxxxbc2xxx308d6xxx652d32 Twilio SMS Number 8888778660"},{"location":"Alerting/Transports/#ukfast-pss","title":"UKFast PSS","text":"
UKFast PSS tickets can be raised from alerts using the UKFastPSS transport. This requires an API key with PSS write permissions.
Example:
Config Example API Key ABCDefgfg12 Author 5423 Priority Critical Secure true"},{"location":"Alerting/Transports/#victorops","title":"VictorOps","text":"
VictorOps provides a webhook URL to make integration extremely simple. To get the URL, log in to your VictorOps account and go to:
The URL provided will have $routing_key at the end; you need to change this to something unique to the system sending the alerts, such as librenms. I.e.:
Config Example Post URL https://alert.victorops.com/integrations/generic/20132414/alert/2f974ce1-08fc-4dg8-a4f4-9aee6cf35c98/librenms"},{"location":"Alerting/Transports/#kayako-classic","title":"Kayako Classic","text":"
LibreNMS can send alerts to the Kayako Classic API, which are then converted to tickets. To use this module, you need the REST API feature enabled in Kayako Classic and a configured email account in LibreNMS. To enable the API:
AdminCP -> REST API -> Settings -> Enable API (Yes)
Also, you need to know the department ID, to file tickets to the appropriate department, and a user email, which is used as the ticket author. To get the department ID, navigate to the appropriate department name in the departments list page in the Admin CP and note the number at the end of the URL. Example: http://servicedesk.example.com/admin/Base/Department/Edit/17. The Department ID is 17.
As a requirement, you have to know the API URL, API Key and API Secret to connect to the service desk.
Kayako REST API Docs
Example:
Config Example Kayako URL http://servicedesk.example.com/api/ Kayako API Key 8cc02f38-7465-4a0c-8730-bb3af122167b Kayako API Secret Y2NhZDIxNDMtNjVkMi0wYzE0LWExYTUtZGUwMjJiZDI0ZWEzMmRhOGNiYWMtNTU2YS0yODk0LTA1MTEtN2VhN2YzYzgzZjk5 Kayako Department 1"},{"location":"Alerting/Transports/#signal-cli","title":"Signal CLI","text":"
Use Signal Messenger for alerts. Run the Signal CLI with the D-Bus option.
GitHub Project
Example:
Config Example Path /opt/signal-cli/bin/signal-cli Recipient type Group Recipient dfgjsdkgljior4345=="},{"location":"Alerting/Transports/#smsfeedback","title":"SMSFeedback","text":"
SMSFeedback is a SaaS service which can be used to deliver alerts via its API, using an API URL, username and password.
Destination numbers must be in international dialling format only.
SMSFeedback Api Docs
Example:
Config Example User smsfeedback_user Password smsfeedback_password Mobiles 71234567890 Sender name CIA"},{"location":"Alerting/Transports/#zenduty","title":"Zenduty","text":"
Leveraging LibreNMS<>Zenduty Integration, users can send new LibreNMS alerts to the right team and notify them based on on-call schedules via email, SMS, Phone Calls, Slack, Microsoft Teams and mobile push notifications. Zenduty provides engineers with detailed context around the LibreNMS alert along with playbooks and a complete incident command framework to triage, remediate and resolve incidents with speed.
Create a LibreNMS Integration from inside Zenduty, then copy the Webhook URL from Zenduty to LibreNMS.
For a detailed guide with screenshots, refer to the LibreNMS documentation at Zenduty.
Example:
Config Example WebHook URL https://www.zenduty.com/api/integration/librenms/integration-key/"},{"location":"Developing/Application-Notes/","title":"Notes On Application Development","text":""},{"location":"Developing/Application-Notes/#librenms-json-snmp-extends","title":"LibreNMS JSON SNMP Extends","text":"
The polling function json_app_get makes it easy to poll complex data using SNMP extends and JSON.
The following exceptions are provided by it.
It takes three parameters, in order in the list below.
Integer :: Device ID to fetch it for.
String :: The extend name. For example, if 'zfs' is passed it will be converted to 'nsExtendOutputFull.3.122.102.115'.
Integer :: Minimum expected version of the JSON return.
The required keys for the returned JSON are as below.
version :: The version of the snmp extend script. Should be numeric and at least 1.
error :: Error code from the snmp extend script. Should be > 0 (0 will be ignored and negatives are reserved)
errorString :: Text to describe the error.
data :: A key containing an array of the data to be used.
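Putting the four required keys together, a minimal valid return might look like this (the data payload below is made up for illustration):

```python
# Validate a minimal extend return containing the four required keys.
# The "data" contents are an illustrative example, not a real payload.
import json

ret = json.loads('''
{
  "version": 1,
  "error": 0,
  "errorString": "",
  "data": {"pools": ["tank"], "arc_size": 1073741824}
}
''')

# all four required keys must be present
assert {"version", "error", "errorString", "data"} <= ret.keys()
print(ret["data"]["pools"])  # -> ['tank']
```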
The supported exceptions are as below.
JsonAppPollingFailedException :: Empty return from SNMP.
JsonAppParsingFailedException :: Could not parse the JSON
JsonAppWrongVersionException :: Older version than supported.
JsonAppExtendErroredException :: Polling and parsing was good, but the returned data has an error set. This may be checked via $e->getParsedJson() and then checking the keys error and errorString.
The error value can be accessed via $e->getCode(). The output can be accessed via $e->getOutput(), which is only available on JsonAppParsingFailedException. The parsed JSON can be accessed via $e->getParsedJson().
An example below from includes/polling/applications/zfs.inc.php...
try {\n $zfs = json_app_get($device, $name, 1)['data'];\n} catch (JsonAppMissingKeysException $e) {\n //old version with out the data key\n $zfs = $e->getParsedJson();\n} catch (JsonAppException $e) {\n echo PHP_EOL . $name . ':' . $e->getCode() . ':' . $e->getMessage() . PHP_EOL;\n update_application($app, $e->getCode() . ':' . $e->getMessage(), []);\n\n return;\n}\n
Also worth noting that json_app_get supports compressed data via base64-encoded gzip. If base64 encoding is detected on the SNMP return, it will be gunzipped and then parsed.
https://github.com/librenms/librenms-agent/blob/master/utils/librenms_return_optimizer may be used to optimize JSON returns.
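The round trip can be sketched as follows: how an extend script might compress its output, and what detection-and-decompression on the poller side amounts to (a sketch, not json_app_get's actual implementation):

```python
# Compress a JSON extend return as base64-encoded gzip (script side),
# then recover it (poller side). Payload is a minimal made-up return.
import base64
import gzip
import json

payload = json.dumps({"version": 1, "error": 0, "errorString": "", "data": {}})
wire = base64.b64encode(gzip.compress(payload.encode())).decode()

# poller side: base64-decode, gunzip, then parse as usual
decoded = gzip.decompress(base64.b64decode(wire)).decode()
assert decoded == payload
print(len(payload), "->", len(wire))
```

Compression mainly pays off for large returns; for tiny payloads like this one, the gzip and base64 overhead can make the wire form longer than the original.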
"},{"location":"Developing/Application-Notes/#application-data-storage","title":"Application Data Storage","text":"
The $app model is supplied for each application poller and graph. You may access and update the $app->data field to store arrays of data in the Application model.
When you call update_application() the $app model will be saved along with any changes to the data field.
// set the data attribute to an array\n$app->data = [\n 'item_A' => 123,\n 'item_B' => 4.5,\n 'type' => 'foo',\n 'other_items' => [ 'a', 'b', 'c' ],\n];\n\n// save the change\n$app->save();\n\n// var_dump the contents of the variable\nvar_dump($app->data);\n
This document will try to provide a good overview of how the code is structured within LibreNMS. We will go through the main directories and provide information on how and when they are used. LibreNMS now uses Laravel for much of its frontend (web UI) and database code. Much of the Laravel documentation applies: https://laravel.com/docs/structure
Directories from the (filtered) structure tree below are some of the directories that will be most interesting during development:
Classes that don't belong to the Laravel application belong in this directory, with a directory structure that matches the namespace. One class per file. See PSR-0 for details.
This is the main file which all links within LibreNMS are parsed through. It loads the majority of the relevant includes needed for the control panel to function. CSS and JS files are also loaded here.
This directory is quite big and contains all the files that make the CLI and polling/discovery work. This code is not currently accessible from Laravel code (intentionally).
All the discovery and polling code. The format is usually quite similar between discovery and polling. Both are made up of modules, and the files within the relevant directories will match that module. For instance, if you want to update the OS detection for a device, you would look in includes/discovery/os/ for a file named after the operating system, such as linux: includes/discovery/os/linux.inc.php. Within here you would update or add support for newer OSes. This is the same for polling as well.
This is where the majority of the website core files are located. These tend to be files that contain functions or often used code segments that can be included where needed rather than duplicating code.
In here is a list of files that generate PDF reports available to the user. These are dynamically called from html/pdf.php based on the report the user requests.
This directory contains all of the ajax calls when generating the table of data. Most have been converted over so if you are planning to add a new table of data then you will do so here for all of the back end data calls.
This directory contains the URL structure when browsing the Web UI. So for example /devices/ is actually a call to includes/html/pages/devices.inc.php, /device/tab=ports/ is includes/html/pages/device/ports.inc.php.
Here is where all of the mibs are located. Generally standard mibs should be in the root directory and specific vendor mibs should be in their own subdirectory.
One of the goals of the LibreNMS project is to enable users to get all of the help they need from our documentation.
The documentation uses the markdown markup language and is generated with mkdocs. To edit or create markdown you only need a text editor, but it is recommended to build your docs before submitting in order to check them visually; this page has instructions for that step.
When you are adding a new feature or extension, we need to have full documentation to go along with it. It's quite simple to do this:
Find the relevant directory to store your new document in, General, Support and Extensions are the most likely choices.
Think of a descriptive name that's not too long; it should match what users may be looking for or describe the feature.
Add the new document into the nav section of mkdocs.yml if it needs to appear in the table of contents
Ensure the first line contains: source: path/to/file.md - don't include the initial doc/.
In the body of the document, be descriptive but keep things simple. Some tips:
If the document could cover different distros like CentOS and Ubuntu please try and include the information for them all. If that's not possible then at least put a placeholder in asking for contributions.
Ensure you use the correct formatting for commands and code blocks by wrapping one liners in backticks or blocks in ```.
Put content into sub-headings where possible to organise the content.
If you rename a file, please add a redirect for the old file in mkdocs.yml like so:
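For example, assuming the mkdocs-redirects plugin is what handles redirects here (the plugin name and file paths below are illustrative):

```yaml
# Illustrative example: redirect a renamed document to its new location.
# Assumes the mkdocs-redirects plugin is enabled in mkdocs.yml.
plugins:
  - redirects:
      redirect_maps:
        'Extensions/Old-Name.md': 'Extensions/New-Name.md'
```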
Please ensure you add the document to the relevant section within pages of mkdocs.yml so that it's in the correct menu and is built. Forgetting this step will result in your document never seeing the light of day :)
Our docs are based on Markdown using mkdocs, which adheres to the Markdown spec and nothing more; because of that we also import a couple of extra extensions:
pymdownx.tasklist
pymdownx.tilde
This means you can use:
~~strikethrough~~ to perform strikethrough
- [X] List items
URLs can be made [like this](https://www.librenms.org)
Code can be placed in `` for single line or ``` for multiline.
# Can be used for main headings which translates to a <h1> tag, increasing the #'s will increase the hX tags.
### Can be used for sub-headings which will appear in the TOC to the left.
Settings should be prefixed with !!! setting "<webui setting path>"
If you encounter permissions issues, these might be resolved by using the user option with whatever user you are building as, e.g. -u librenms
A configuration file for building LibreNMS docs is already included in the distribution: /opt/librenms/mkdocs.yml. The various configuration directives are documented here.
Build from the librenms base directory: cd /opt/librenms.
Building is simple:
mkdocs build\n
This will output all the documentation in html format to /opt/librenms/out (this folder will be ignored from any commits).
mkdocs includes its own lightweight web server for this purpose.
Viewing is as simple as running the following command:
$ mkdocs serve\nINFO - Building documentation...\n<..>\nINFO - Documentation built in 12.54 seconds\n<..>\nINFO - Serving on http://127.0.0.1:8000\n<..>\nINFO - Start watching changes\n
Now you will find the complete set of LibreNMS documentation by opening your browser to localhost:8000.
Note it is not necessary to build before viewing as the serve command will do this for you. Also the server will update the documents it is serving whenever changes to the markdown are made, such as in another terminal.
"},{"location":"Developing/Creating-Documentation/#viewing-docs-from-another-machine","title":"Viewing docs from another machine","text":"
By default the server will only listen for connections from the local machine. If you are building on a different machine you can use the following directive to listen on all interfaces:
mkdocs serve --dev-addr=0.0.0.0:8000\n
WARNING: this is not a secure webserver, do this at your own risk, with appropriate host security and do not leave the server running.
"},{"location":"Developing/Creating-Release/","title":"Creating a release","text":""},{"location":"Developing/Creating-Release/#github","title":"GitHub","text":"
You can create a new release on GitHub.
Enter the tag version for that month, e.g. for September 2016 you would enter 201609.
Enter a title; we usually use something like September 2016 Release
Enter a placeholder for the body, we will edit this later.
For this, we assume you are using the master branch to create the release against.
We now generate the changelog using the GitHub API itself so it shouldn't matter what state your local branch is in so long as it has the code to generate the changelog itself.
Using the GitHub API means we can use the labels associated with merged pull requests to categorise the changelog. We also then record who made the pull request to thank them in the changelog itself.
You will be asked for a GitHub personal access token. You can generate this here. No permissions should be needed so just give it a name and click Generate Token. You can then export the token as an environment variable GH_TOKEN or place it in your .env file.
The basic command is run using artisan; you pass the new tag (1.41) and the previous tag (1.40). For further help run php artisan release:tag --help. This will generate a changelog up to the latest master branch; if you want it to be generated against something else, pass the latest pull request number with --pr $PR_NUMBER.
php artisan release:tag 1.41 1.40\n
Now commit and push the change that has been made to doc/General/Changelog.md.
Once the pull request has been merged in for the Changelog, you can create a new release on GitHub.
Create two threads on the community site:
A changelog thread example
An info thread example
Tweet it
Facebook it
Google Plus it
LinkedIn it
"},{"location":"Developing/Dynamic-Config/","title":"Adding new config settings","text":"
Adding support for users to update a new config option via the WebUI is now a lot easier for general options. This document shows you how to add a new config option and even section to the WebUI.
Config settings are defined in misc/config_definitions.json
You should give a little thought to the name of your config setting. For example, a good setting name for the SNMP community would be snmp.community. The dot notation is a path; when the config is hydrated, it is converted to a nested array. If the user is overriding the option in config.php, they would use the format $config['snmp']['community']
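As a rough sketch of what an entry in misc/config_definitions.json could look like for this example setting (the field set shown is an assumption; check existing entries in that file for the authoritative schema):

```json
{
    "snmp.community": {
        "default": "public",
        "type": "text"
    }
}
```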
The config definition system inherently supports translation. You must add the English names in the resources/lang/en/settings.php file (and other languages if you can).
You may set the type field to a custom type and define a Vue.js component to display it to the user.
The Vue.js component should be named as "SettingType" where type is the custom type entered with the first letter capitalized. Vue.js components exist in the resources/js/components directory.
Here is an empty component named SettingType (make sure to rename it). It pulls in BaseSetting mixin for basic setting code to reuse. You should review the BaseSetting component.
Using Vue.js is beyond the scope of this document. Documentation can be found at vuejs.org.
"},{"location":"Developing/Getting-Started/","title":"Get ready to contribute to LibreNMS","text":"
This document is intended to help you get your local environment set up to contribute code to the LibreNMS project.
"},{"location":"Developing/Getting-Started/#setting-up-a-development-environment","title":"Setting up a development environment","text":"
When starting to develop, it may be tempting to just make changes on your production server, but that will make things harder for you. Taking a little time to set up somewhere to work on code changes can really help.
Possible options:
A Linux computer, VM, or container
Another directory on your LibreNMS server
Windows Subsystem for Linux
"},{"location":"Developing/Getting-Started/#set-up-your-development-git-clone","title":"Set up your development git clone","text":"
Follow the documentation on using git
Install development dependencies ./scripts/composer_wrapper.php install
Set variables in .env, including database settings, which could point at a local or remote MySQL server, including your production DB.
LibreNMS uses continuous integration to test code changes to help reduce bugs. This also helps guarantee the changes you contribute won't be broken in the future. You can find out more in our Validating Code Documentation
The default database connection for automated testing is testing.
To override the database parameters for unit tests, configure your .env file accordingly. The defaults can be found in config/database.php.
Sometimes you want to find out what a variable contains (such as the data return from an snmpwalk). You can dump one or more variables and halt execution with the dd() function.
dd($variable1, $variable2);\n
"},{"location":"Developing/Getting-Started/#inspecting-web-pages","title":"Inspecting web pages","text":"
Installing the development dependencies and setting APP_DEBUG enables the Laravel Debugbar. This allows you to inspect page generation and errors right in your web browser.
"},{"location":"Developing/Getting-Started/#better-code-completion-in-ides-and-editors","title":"Better code completion in IDEs and editors","text":"
You can generate some files to improve code completion. (These files are not updated automatically, so you may need to re-run the generation commands periodically.)
You can capture and emulate devices using Snmpsim. LibreNMS has a set of scripts to make it easier to work with snmprec files. LibreNMS Snmpsim helpers
You must have a working snmptrapd. See SNMP TRAP HANDLER
Make sure the MIB is loaded from the trap you are adding. Edit /etc/systemd/system/snmptrapd.service.d/mibs.conf to add it then restart snmptrapd.
The MIBDIRS option is not recursive, so you need to specify each directory individually.
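A sketch of such a drop-in file (the vendor subdirectory and the use of Net-SNMP's MIBDIRS/MIBS environment variables here are illustrative assumptions; adapt to your setup):

```ini
# /etc/systemd/system/snmptrapd.service.d/mibs.conf (illustrative)
# MIBDIRS is not recursive: list each directory explicitly.
[Service]
Environment=MIBDIRS=/opt/librenms/mibs:/opt/librenms/mibs/cisco
Environment=MIBS=ALL
```

After editing, run systemctl daemon-reload and restart snmptrapd for the change to take effect.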
Create a new class in LibreNMS\\Snmptrap\\Handlers that implements the LibreNMS\\Interfaces\\SnmptrapHandler interface. For example:
<?php\n/**\n * ColdBoot.php\n *\n * Handles the SNMPv2-MIB::coldStart trap\n *\n * This program is free software: you can redistribute it and/or modify\n * it under the terms of the GNU General Public License as published by\n * the Free Software Foundation, either version 3 of the License, or\n * (at your option) any later version.\n *\n * This program is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.See the\n * GNU General Public License for more details.\n *\n * You should have received a copy of the GNU General Public License\n * along with this program. If not, see <https://www.gnu.org/licenses/>.\n *\n * @package LibreNMS\n * @link https://www.librenms.org\n */\n\nnamespace LibreNMS\\Snmptrap\\Handlers;\n\nuse App\\Models\\Device;\nuse LibreNMS\\Enum\\Severity;\nuse LibreNMS\\Interfaces\\SnmptrapHandler;\nuse LibreNMS\\Snmptrap\\Trap;\n\nclass ColdBoot implements SnmptrapHandler\n{\n /**\n * Handle snmptrap.\n * Data is pre-parsed and delivered as a Trap.\n *\n * @param Device $device\n * @param Trap $trap\n * @return void\n */\n public function handle(Device $device, Trap $trap)\n {\n $trap->log('SNMP Trap: Device ' . $device->displayName() . ' cold booted', $device->device_id, 'reboot', Severity::Warning);\n }\n}\n
The severity value on the end determines the color of the event log entry.
The handle function inside your new class will receive a LibreNMS/Snmptrap/Trap object containing the parsed trap. It is common to update the database and create event log entries within the handle function.
"},{"location":"Developing/SNMP-Traps/#getting-information-from-the-trap","title":"Getting information from the Trap","text":""},{"location":"Developing/SNMP-Traps/#source-information","title":"Source information","text":"
$trap->getDevice(); // gets Device model for the device associated with this trap\n$trap->ip; // gets source IP of this trap\n$trap->getTrapOid(); // returns the string you registered your class with\n
"},{"location":"Developing/SNMP-Traps/#retrieving-data-from-the-trap","title":"Retrieving data from the Trap","text":"
$trap->getOidData('IF-MIB::ifDescr.114');\n
getOidData() requires the full name including any additional index. You can use these functions to search the OID keys.
$trap->findOid('ifDescr'); // returns the first oid key that contains the string\n$trap->findOids('ifDescr'); // returns all oid keys containing the string\n
Submitting new traps requires them to be fully tested. You can find many examples in the tests/Feature/SnmpTraps/ directory.
Here is a basic example of a test where the trap handler only creates a log message. If your trap modifies the database, you should also test that it does so.
<?php\n\nnamespace LibreNMS\Tests\Feature\SnmpTraps;\n\nclass ColdStartTest extends SnmpTrapTestCase\n{\n    public function testColdStart(): void\n    {\n        $this->assertTrapLogsMessage(rawTrap: <<<'TRAP'\n{{ hostname }}\nUDP: [{{ ip }}]:44298->[192.168.5.5]:162\nDISMAN-EVENT-MIB::sysUpTimeInstance 0:0:1:12.7\nSNMPv2-MIB::snmpTrapOID.0 SNMPv2-MIB::coldStart\nTRAP,\n            log: 'SNMP Trap: Device {{ hostname }} cold booted', // The log message sent\n            failureMessage: 'Failed to handle SNMPv2-MIB::coldStart', // an informative message to let user know what failed\n            args: [4, 'reboot'], // the additional arguments to the log method\n        );\n    }\n}\n
"},{"location":"Developing/Sensor-State-Support/","title":"Sensor State Support","text":""},{"location":"Developing/Sensor-State-Support/#introduction","title":"Introduction","text":"
In this section we briefly walk through what it takes to write sensor state support, and cover the concepts behind the current sensor state monitoring.
Each time a sensor needs to be polled, the system needs to know which sensor it is polling, at what OID the sensor is located, what class the sensor is, and so on. This information is fetched from the sensors table.
This is where we map the possible returned state sensor values to a generic LibreNMS value, in order to make displaying and alerting more generic. We also map these values to the actual state sensor (state_index) from which they are returned.
The LibreNMS generic states are derived from Nagios:
This example will be based on a Cisco power supply sensor and is all it takes to have sensor state support for Cisco power supplies in Cisco switches. The file should be located in /includes/discovery/sensors/state/cisco.inc.php.
This document is broken down into the relevant sections depending on what support you are adding. During all of these examples we will be using the OS of pulse as the example OS we will add.
Adding the initial detection.
Adding Memory and CPU information.
Adding Health / Sensor information.
Adding Wireless Sensor information.
Adding custom graphs.
Adding Unit tests (required).
Optional Settings
We currently have a script in pre-beta that can help speed up the process of adding a new OS. It supports adding sensors in a basic form (except state sensors).
In this example, we will add a new OS called test-os using the device ID 101 that has already been added. It will be of the type network and belongs to the vendor, Cisco:
The script will then step you through some more questions. Please be warned, this is currently pre-beta and may cause some issues. Please let us know of any on Discord.
"},{"location":"Developing/Using-Git/#clone-the-repo","title":"Clone the repo","text":"
Ok so now that you have forked the repo, you now need to clone it to your local install where you can then make the changes you need and submit them back.
cd /opt/\ngit clone git@github.com:yourusername/librenms.git\n
As you become more familiar you may find a better workflow that fits your needs, until then this should be a safe workflow for you to follow.
Before you start work on a new branch / feature. Make sure you are up to date.
cd /opt/librenms\ngit checkout master\ngit pull upstream master\ngit push origin master\n
At this stage it's worth pointing out that we have some standard checks that are performed when you submit a pull request, you can run these checks yourself to be sure no issues are present in your pull request.
Now, create a new branch to do your work on. It's important that you do this, as you are then able to work on more than one feature at a time and submit them as pull requests individually. If you did all your work in the master branch then it gets a bit messy!
You need to give your branch a name. If an issue is open (or closed on GitHub) then you can use that, in this example if the issue number is 123 then we will use issue-123. If a post exists on the community forum then you can use the post id like community-123. You're also welcome to use any arbitrary name for your branch but try and make it relevant to what the branch is.
git checkout -b issue-123\n
Now, code away. Make the changes you need, test, change and test again :) When you are ready to submit the updates as a pull request then commit away.
git add path/to/new/files/or/folders\ngit commit -a -m 'Added feature to do X, Y and Z'\ngit push origin issue-123\n
If you need to rebase against master then you can do this with:
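The commands themselves are just a fetch of upstream followed by a rebase onto upstream/master. The following self-contained sketch simulates that workflow with throwaway repositories (all names and paths are illustrative) so it can be run safely anywhere; in a real checkout you would only run the final two git commands:

```shell
set -e
work=$(mktemp -d); cd "$work"

# A stand-in for the upstream LibreNMS repository with one commit on master
git init -q upstream
git -C upstream symbolic-ref HEAD refs/heads/master
git -C upstream -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m 'initial'

# Your local clone, with a feature branch holding your work
git clone -q upstream local
cd local
git config user.name dev
git config user.email dev@example.com
git remote add upstream ../upstream
git checkout -q -b issue-123
git commit -q --allow-empty -m 'feature work'

# Meanwhile, upstream master gains another commit
git -C ../upstream -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m 'unrelated upstream change'

# The actual rebase steps from this guide:
git fetch -q upstream
git rebase -q upstream/master
```

After the rebase, the feature commit sits on top of the new upstream history; any merge conflicts would surface during the git rebase step.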
If after doing this you get merge conflicts, you need to resolve them before carrying on.
Please try to squash all commits into one. This isn't essential, as we can do it when we merge, but it is helpful to do before you submit your pull request.
Now you will be ready to submit a pull request from within GitHub. To do this, go to your GitHub page for the LibreNMS repo. Now select the branch you have just been working on (issue-123) from the drop down to the left and then click 'Pull Request'. Fill in the details to describe the work you have done and click 'Create pull request'.
Thanks for your first pull request :)
Ok, that should get you started on the contributing path. If you have any other questions then stop by our Discord Server
"},{"location":"Developing/Using-Git/#hints-and-tips","title":"Hints and tips","text":"
As part of the pull request process with GitHub we run some automated build tests to ensure that the code is error free, standards compliant and our test suite builds successfully.
Rather than submit a pull request and wait for the results, you can run these checks yourself to ensure a more seamless merge.
All of these commands should be run from within the librenms directory and can be run as the librenms user unless otherwise noted.
Install composer (you can skip this if composer is already installed).
curl -sS https://getcomposer.org/installer | php
Composer will now be installed into /opt/librenms/composer.phar.
Now install the dependencies we require:
./composer.phar install
Once composer is installed you can now run the code validation script:
./lnms dev:check
If you see Tests ok, submit away :) then all is well. If you see other output then it should contain what you need to resolve the issues and re-test.
Git has a hook system which you can use to trigger checks at various stages. Utilising ./lnms dev:check, you can make these checks part of your commit process.
Add ./lnms dev:check to your .git/hooks/pre-commit:
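For example, a minimal hook (the exec form is one reasonable choice; run this from the root of your clone):

```shell
# Create an executable pre-commit hook that runs the LibreNMS checks.
# mkdir -p is a no-op in a real clone, where .git/hooks already exists;
# it only makes this snippet safe to run standalone.
mkdir -p .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
exec ./lnms dev:check
EOF
chmod +x .git/hooks/pre-commit
```

A non-zero exit status from ./lnms dev:check will abort the commit.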
First we define our graphs in includes/definitions.inc.php to share our work and contribute in the development of LibreNMS. :-) (or place in config.php if you don't plan to contribute)
OS polling is not necessarily where custom polling should be done, please speak to one of the core devs in Discord for guidance.
Let's update our example file to add additional polling:
includes/polling/os/pulse.inc.php\n
We declare two specific graphs for user and session counts. These two graphs will be displayed in the firewall section of the graphs tab, as specified in the definition include file.
This document will guide you through adding health / sensor information for your new device.
Currently, we have support for the following health metrics along with the values we expect to see the data in:
| Class | Measurement |
| ----- | ----------- |
| airflow | cfm |
| ber | ratio |
| charge | % |
| chromatic_dispersion | ps/nm |
| cooling | W |
| count | # |
| current | A |
| dbm | dBm |
| delay | s |
| eer | eer |
| fanspeed | rpm |
| frequency | Hz |
| humidity | % |
| load | % |
| loss | % |
| power | W |
| power_consumed | kWh |
| power_factor | ratio |
| pressure | kPa |
| quality_factor | dB |
| runtime | Min |
| signal | dBm |
| snr | SNR |
| state | # |
| temperature | C |
| tv_signal | dBmV |
| bitrate | bps |
| voltage | V |
| waterflow | l/m |
| percent | % |
"},{"location":"Developing/os/Health-Information/#simple-health-discovery","title":"Simple health discovery","text":"
We have support for defining health / sensor discovery using YAML files so that you don't need to know how to write PHP.
Please note that DISPLAY-HINTS are disabled so ensure you use the correct divisor / multiplier if applicable.
All yaml files are located in includes/definitions/discovery/$os.yaml. Defining the information here is not always possible and is heavily reliant on vendors being sensible with the MIBs they generate. Only snmp walks are supported, and you must provide a sane table that can be traversed and contains all the data you need. We will use netbotz as an example here.
At the top you can define one or more mibs to be used in the lookup of data:
mib: NETBOTZV2-MIB For use of multiple MIB files separate them with a colon: mib: NETBOTZV2-MIB:SECOND-MIB
For data: you have the following options:
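As an illustration using the netbotz example, a minimal airflow definition could look like the sketch below (the value, num_oid and index fields are plausible NETBOTZV2-MIB names chosen for illustration, not verified against the MIB):

```yaml
modules:
    sensors:
        airflow:
            data:
                -
                    oid: airFlowSensorTable
                    value: airFlowSensorValue
                    num_oid: '.1.3.6.1.4.1.5528.100.4.1.5.1.2.{{ $index }}'
                    descr: airFlowSensorLabel
                    index: 'airFlowSensorValue.{{ $index }}'
```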
The only sensor we have defined here is airflow. The available options are as follows:
oid (required): This is the name of the table you want to snmp walk for data.
value (optional): This is the key within the table that contains the value. If not provided will use oid
num_oid (required for pull requests): If not provided, this parameter is computed automatically by the discovery process, but it is still required to submit a pull request. This is the numerical OID that contains value. This should usually include {{ $index }}. In case the index is a string, {{ $str_index_as_numeric }} can be used instead and will convert the string to the equivalent OID representation.
divisor (optional): This is the divisor to use against the returned value.
multiplier (optional): This is the multiplier to use against the returned value.
low_limit (optional): This is the critical low threshold that value should be (used in alerting). If an OID is specified then divisor / multiplier are used.
low_warn_limit (optional): This is the warning low threshold that value should be (used in alerting). If an OID is specified then divisor / multiplier are used.
warn_limit (optional): This is the warning high threshold that value should be (used in alerting). If an OID is specified then divisor / multiplier are used.
high_limit (optional): This is the critical high threshold that value should be (used in alerting). If an OID is specified then divisor / multiplier are used.
descr (required): The visible label for this sensor. It can be a key within the table or a static string, optionally using {{ $index }}.
group (optional): Groups sensors together in the WebUI, displaying this text. Not specifying this will put the sensors in the default group.
index (optional): This is the index value we use to uniquely identify this sensor. {{ $index }} will be replaced by the index from the snmp walk.
skip_values (optional): This is an array of values we should skip over (see note below).
skip_value_lt (optional): If sensor value is less than this, skip the discovery.
skip_value_gt (optional): If sensor value is greater than this, skip the discovery.
entPhysicalIndex and entPhysicalIndex_measured (optional) : If the sensor belongs to a physical entity then you can link them here. The currently supported variants are :
entPhysicalIndex contains the entPhysicalIndex from entPhysical table, and entPhysicalIndex_measured is NULL
entPhysicalIndex contains the "ifIndex" value of the linked port, and entPhysicalIndex_measured contains "ports"
user_func (optional): You can provide a function name for the sensor's value to be processed through (e.g. to convert Fahrenheit to Celsius, use fahrenheit_to_celsius)
snmp_flags (optional): this sets the flags to be sent to snmpwalk, it overrides flags set on the sensor type and os. The default is '-OQUb'. A common issue is dealing with string indexes, setting '-OQUsbe' will change them to numeric oids. Setting ['-OQUsbe', '-Pu'] will also allow _ in oid names. You can find more in the Man Page
rrd_type (optional): You can change the type of the RRD file that will be created to store the data. By default, type GAUGE is used. More details can be found here: https://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html
For options: you have the following available:
divisor: This is the divisor to use against the returned value.
multiplier: This is the multiplier to use against the returned value.
skip_values: This is an array of values we should skip over (see note below).
skip_value_lt: If sensor value is less than this, skip the discovery.
skip_value_gt: If sensor value is greater than this, skip the discovery.
Multiple variables can be used in the sensor's definition. The syntax is {{ $variable }}. Any oid in the current table can be used, as well as pre_cached data. The index ($index) and the sub_indexes (in case the oid is indexed multiple times) are also available: if $index="1.20", then $subindex0="1" and $subindex1="20".
When referencing an oid in another table the full index will be used to match the other table. If this is undesirable, you may use a single sub index by appending the sub index after a colon to the variable name. Example {{ $ifName:2 }}
skip_values can also compare items within the OID table against values. The index of the sensor is used to retrieve the value from the OID, unless a target index is appended to the OID. Additionally, you may check fields from the device. Comparisons behave on a logical OR basis when chained, so only one of them needs to be matched for that particular sensor to be skipped during discovery. An example of this is below:
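A sketch of what that can look like (the OID names and values here are illustrative):

```yaml
skip_values:
  # Skip when another column in the walked table says the sensor is absent
  - oid: sensorType
    op: '='
    value: 'other'
  # Or skip based on a field from the device itself
  - device: hardware
    op: 'regex'
    value: '/Example/'
```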
If you aren't able to use yaml to perform the sensor discovery, you will most likely need to use Advanced health discovery.
"},{"location":"Developing/os/Health-Information/#advanced-health-discovery","title":"Advanced health discovery","text":"
If you can't use the yaml files as above, then you will need to create the discovery code in php. If it is possible to create via yaml, php discovery will likely be rejected due to the much higher chance of later problems, so it is highly suggested to use yaml.
The directory structure for sensor information is includes/discovery/sensors/$class/$os.inc.php. The format of all the sensors follows the same code format, which is to collect sensor information via SNMP and then call the discover_sensor() function; except state sensors, which require additional code. Sensor information is commonly found in an ENTITY MIB supplied by the device's vendor in the form of a table. Other MIB tables may be used as well. Sensor information is first collected by includes/discovery/sensors/pre_cache/$os.inc.php. This code pulls data from MIB tables into a $pre_cache array that can then be used in includes/discovery/sensors/$class/$os.inc.php to extract specific values, which are then passed to discover_sensor().
discover_sensor() Accepts the following arguments:
&$valid = This is always null. This is unused.
$class = Required. This is the sensor class from the table above (i.e humidity).
$device = Required. This is the $device array.
$oid = Required. This must be the numerical OID for where the data can be found, i.e .1.2.3.4.5.6.7.0
$index = Required. This must be unique for this sensor class, device and type. Typically it's the index from the table being walked, or it could be the name of the OID if it's a single value.
$type = Required. This should be the OS name, i.e. pulse.
$descr = Required. This is a descriptive value for the sensor. Some devices will provide names to use.
$divisor = Defaults to 1. This is used to divide the returned value.
$multiplier = Defaults to 1. This is used to multiply the returned value.
$low_limit = Defaults to null. Sets the low threshold limit for the sensor, used in alerting to report out range sensors.
$low_warn_limit = Defaults to null. Sets the low warning limit for the sensor, used in alerting to report near out of range sensors.
$warn_limit = Defaults to null. Sets the high warning limit for the sensor, used in alerting to report near out of range sensors.
$high_limit = Defaults to null. Sets the high limit for the sensor, used in alerting to report out range sensors.
$current = Defaults to null. Can be used to set the current value on discovery. Poller will update this on the next poll cycle anyway.
$poller_type = Defaults to snmp. Things like the unix-agent can set different values but for the most part this should be left as snmp.
$entPhysicalIndex = Defaults to null. Sets the entPhysicalIndex to be used to look up further hardware if available.
$entPhysicalIndex_measured = Defaults to null. Sets the type of entPhysicalIndex used, i.e ports.
$user_func = Defaults to null. You can provide a function name for the sensor's value to be processed through (e.g. to convert Fahrenheit to Celsius, use fahrenheit_to_celsius)
$group = Defaults to null. Groups sensors together in the WebUI, displaying this text.
$rrd_type = Default to 'GAUGE'. Allows to change the type of the RRD file created for this sensor. More details can be found here in the RRD documentation: https://oss.oetiker.ch/rrdtool/doc/rrdcreate.en.html
For the majority of devices, this is all that's required to add support for a sensor. Polling is done based on the data gathered using discover_sensor(). If custom polling is needed then the file format is similar to discovery: includes/polling/sensors/$class/$os.inc.php. Whilst it's possible to perform additional snmp queries within polling this should be avoided where possible. The value for the OID is already available as $sensor_value.
Graphing is performed automatically for sensors, no custom graphing is required or supported.
"},{"location":"Developing/os/Health-Information/#adding-a-new-sensor-class","title":"Adding a new sensor class","text":"
You will need to add code for your new sensor class in the following existing files:
app/Models/Sensor.php: add a free icon from Font Awesome in the $icons array.
doc/Developing/os/Health-Information.md: documentation for every sensor class is mandatory.
includes/discovery/sensors.inc.php: add the sensor class to the $run_sensors array.
includes/discovery/functions.inc.php: optional - if sensible low_limit and high_limit values are guessable when a SNMP-retrievable threshold is not available, add a case for the sensor class to the sensor_limit() and/or sensor_low_limit() functions.
LibreNMS/Util/ObjectCache.php: optional - choose menu grouping for the sensor class.
includes/html/pages/device/health.inc.php: add a dbFetchCell(), $datas[], and $type_text[] entry for the sensor class.
includes/html/pages/device/overview.inc.php: add require 'overview/sensors/$class.inc.php' in the desired order for the device overview page.
includes/html/pages/health.inc.php: add a $type_text[] entry for the sensor class.
lang/en/sensors.php: add human-readable names and units for the sensor class in English, feel free to do so for other languages as well.
Create and populate new files for the sensor class in the following places:
includes/discovery/sensors/$class/: create the folder where advanced php-based discovery files are stored. Not used for yaml discovery.
includes/html/graphs/device/$class.inc.php: define unit names used in RRDtool graphs.
includes/html/graphs/sensor/$class.inc.php: define various parameters for RRDtool graphs.
"},{"location":"Developing/os/Health-Information/#advanced-health-sensor-example","title":"Advanced health sensor example","text":"
This example shows how to build sensors using the advanced method. In this example we will be collecting optical power level (dBm) from Adva FSP150CC family MetroE devices. This example will assume an understanding of SNMP and MIBs.
First we set up includes/discovery/sensors/pre_cache/adva_fsp150.inc as shown below. The first line walks the cmEntityObject table to get information about the chassis and line cards. From this information we extract the model type, which identifies which tables in the CM-Facility-Mib the ports are populated in. The program then reads the appropriate table into the $pre_cache array adva_fsp150_ports. This array will have OID indices for each port, which we will use later to identify our sensor OIDs.
Next we are going to build our sensor discovery code. These are optical readings, so the file will be created as the dBm sensor type in includes/discovery/sensors/dbm/adva_fsp150.inc.php. Below is a snippet of the code:
First the program will loop through each port's index value. In the case of Advas, the ports are named Ethernet 1-1-1-1, 1-1-1-2, etc., and they are indexed as oid.1.1.1.1, oid.1.1.1.2, etc. in the MIB.
Next the program checks which table the port exists in and that the connector type is 'fiber'. There are other port tables in the full code that were omitted from the example for brevity. Copper media won't have optical readings, so if the media type isn't fiber we skip discovery for that port.
The next two lines build the OIDs for getting the optical receive and transmit values using the $index for the port. Using the OIDs, the program gets the current receive and transmit values ($currentRx and $currentTx respectively) to verify the values are not 0. Not all SFPs collect digital optical monitoring (DOM) data; in the case of Adva, the value of both transmit and receive will be 0 if DOM is not available. While 0 is a valid value for optical power, it's extremely unlikely that both will be 0 if DOM is present. If DOM is not available, the program stops discovery for that port. Note that while this is the case with Adva, other vendors may differ in how they handle optics that do not supply DOM. Please check your vendor's MIBs.
Next the program assigns the values of $entPhysicalIndex and $entPhysicalIndex_measured. In this case $entPhysicalIndex is set to the value of the cmEthernetTrafficPortIfIndex so that it is associated with the port. This will also allow the sensor graphs to show up on the associated port's page in the GUI in addition to the Health page.
Following that the program uses a database call to get the description of the port which will be used as the title for the graph in the GUI.
Lastly the program calls discover_sensor() and passes the information collected in the previous steps. The null values are for low, low warning, high, and high warning values, which are not collected in the Adva's MIB.
You can manually run discovery to verify the code works by running ./discovery.php -h $device_id -m sensors. You can use -v to see what calls are being used during discovery and -d to see debug output. In the output under #### Load disco module sensors #### you can see a list of sensor types. A + means a sensor was added, a - means one was deleted, and a . means no change. If there is nothing next to the sensor type then the sensor was not discovered. There is also information about changes to the database and RRD files at the bottom.
OS discovery is how LibreNMS detects which OS should be used for a device. Generally detection should use sysObjectID or sysDescr, but you can also snmpget an oid and check for a value. snmpget is discouraged because it slows down all os detections, not just the added os.
To begin, create the new OS file which should be called includes/definitions/pulse.yaml. Here is a working example:
mib_dir: You can use this to specify an additional directory to look in for MIBs. An array is not accepted, only one directory may be specified.
mib_dir: juniper\n
poller_modules: This is a list of poller modules to either enable (1) or disable (0). Check misc/config_definitions.json to see which modules are enabled/disabled by default.
discovery_modules: This is the list of discovery modules to either enable (1) or disable (0). Check misc/config_definitions.json to see which modules are enabled/disabled by default.
OS discovery collects additional standardized data about the OS. These are specified in the discovery yaml includes/definitions/discovery/<os>.yaml or LibreNMS/OS/<os>.php if more complex collection is required.
version The version of the OS running on the device.
hardware The hardware version for the device. For example: 'WS-C3560X-24T-S'
features Features for the device, for example a list of enabled software features.
serial The main serial number of the device.
"},{"location":"Developing/os/Initial-Detection/#yaml-based-os-discovery","title":"Yaml based OS discovery","text":"
sysDescr_regex apply a regex or list of regexes to the sysDescr to extract named groups, this data has the lowest precedence
<field> specify an oid or list of oids to attempt to pull the data from, the first non-empty response will be used
<field>_regex parse the value out of the returned oid data, must use a named group
<field>_template combine multiple oid results together to create a final string value. The result is trimmed.
<field>_replace An array of replacements ['search regex', 'replace'] or regex to remove
hardware_mib MIB used to translate sysObjectID to get hardware. hardware_regex can process the result.
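As a sketch of how these fields combine (the MIB names and regex below are invented for illustration, not a real definition), an includes/definitions/discovery/<os>.yaml might contain:

```yaml
modules:
    os:
        # lowest precedence: named groups extracted from sysDescr
        sysDescr_regex: '/^ExampleOS v(?<version>[\d.]+) on (?<hardware>\S+)/'
        # explicit oids override sysDescr_regex; the first non-empty response is used
        version: EXAMPLE-MIB::exampleSwVersion.0
        serial:
            - EXAMPLE-MIB::exampleSerialNumber.0
        features: EXAMPLE-MIB::exampleFeatureSet.0
```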
If the device has MIBs available and you use them in the detection then you can add these in. It is highly recommended that you add MIBs to a vendor-specific directory. For instance, HP MIBs are in mibs/hp. Please ensure that these directories are specified in the yaml detection file, see mib_dir above.
"},{"location":"Developing/os/Initial-Detection/#icon-and-logo","title":"Icon and Logo","text":"
It is highly recommended to use SVG images where possible, these scale and provide a nice visual image for users with HiDPI screens. If you can't find SVG images then please use png.
Create an SVG image of the icon and logo. Legacy PNG bitmaps are also supported but look bad on HiDPI.
A vector image should not contain padding.
The file should not be larger than 20 Kb. Simplify paths to reduce large files.
Use plain SVG without gzip compression.
The SVG root element must not contain length and width attributes, only viewBox.
Use Path -> Simplify to simplify paths of large files.
Use File -> Document Properties\u2026 -> Resize page to content\u2026 to remove padding.
Use File -> Clean up document to remove unused gradients, patterns, or markers.
Use File -> Save As -> Plain SVG to save the final image.
By optimizing the SVG you can in some cases shrink the file size to less than 20% of the original. SVG Optimizer does a great job. There is also an online version.
"},{"location":"Developing/os/Initial-Detection/#the-final-check","title":"The final check","text":"
Discovery
./discovery.php -d -h HOSTNAME\n
Polling
lnms device:poll HOSTNAME\n
At this step we should see all the values retrieved in LibreNMS.
Note: If you have made a number of changes to the OS's discovery files, it's possible earlier edits have been cached. As such, if you do not get the expected behaviour when completing the final check above, try removing the cache file first:
LibreNMS will attempt to detect memory statistics using the standard HOST-RESOURCES-MIB and UCD-SNMP-MIB MIBs. To detect non-standard MIBs, they can be defined via Yaml.
In order to successfully detect memory amount and usage, two of the four keys below are required. Some OS only provide a usage percentage, which will work, but a total RAM amount will not be displayed.
The code can also interpret table based OIDs and supports many of the same features as Health Sensors including {{ }} parsing, skip_values, and precache.
Valid data entry keys:
oid oid to walk to collect memory data
total oid or integer total memory size in bytes (or precision)
used oid memory used in bytes (or precision)
free oid memory free in bytes (or precision)
percent_used oid of percentage of used memory
descr A visible description of the memory measurement, defaults to \"Memory\"
warn_percent Usage percentage to use for alert purposes
precision precision for all byte values, typically a power of 2 (1024 for example)
class used to generate rrd filename, defaults to system. If system, buffers, and cached exist they will be combined to calculate available memory.
type used to generate rrd filename, defaults to the os name
index used to generate rrd filename, defaults to the oid index
skip_values skip values see Health Sensors for specification
snmp_flags additional net-snmp flags
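Putting those keys together, a hypothetical yaml mempool definition (the MIB and oid names here are invented for illustration) could look like:

```yaml
modules:
    mempools:
        data:
            -
                total: EXAMPLE-MIB::exampleMemTotal.0
                used: EXAMPLE-MIB::exampleMemUsed.0
                precision: 1024
                descr: 'Memory'
                warn_percent: 90
```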
"},{"location":"Developing/os/Mem-CPU-Information/#custom-processor-discovery-and-polling","title":"Custom Processor Discovery and Polling","text":"
If you need to implement custom discovery or polling you can implement the MempoolsDiscovery interface and the MempoolsPolling interface in the OS class. MempoolsPolling is optional; if it is not implemented, standard polling will be used based on OIDs stored in the database.
OS Class files reside under LibreNMS\\OS
<?php\n\nnamespace LibreNMS\\OS;\n\nuse LibreNMS\\Interfaces\\Discovery\\MempoolsDiscovery;\nuse LibreNMS\\Interfaces\\Polling\\MempoolsPolling;\n\nclass Example extends \\LibreNMS\\OS implements MempoolsDiscovery, MempoolsPolling\n{\n /**\n * Discover a Collection of Mempool models.\n * Will be keyed by mempool_type and mempool_index\n *\n * @return \\Illuminate\\Support\\Collection \\App\\Models\\Mempool\n */\n public function discoverMempools()\n {\n // TODO: Implement discoverMempools() method.\n }\n\n /**\n * @param \\Illuminate\\Support\\Collection $mempools \\App\\Models\\Mempool\n * @return \\Illuminate\\Support\\Collection \\App\\Models\\Mempool\n */\n public function pollMempools($mempools)\n {\n // TODO: Implement pollMempools() method.\n }\n}\n
Valid data entry keys (key, default, description):
- oid (required): The string based oid to fetch data, could be a table or a single value
- num_oid (optional): The numerical oid to fetch data from when polling, usually should be appended by {{ $index }}. Computed by the discovery process if not provided.
- value (optional): Oid to retrieve data from, primarily used for tables
- precision (default: 1): The multiplier to multiply the data by. If this is negative, the data will be multiplied then subtracted from 100.
- descr (default: Processor): Description of this processor, may be an oid or plain string. Helpful values: {{ $index }} and {{ $count }}
- type: Name of this sensor. This is used with the index to generate a unique id for this sensor.
- index (default: {{ $index }}): The index of this sensor, defaults to the index of the oid.
- skip_values (optional): Do not detect this sensor if the value matches
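A hypothetical processor definition using those keys (the oid names and enterprise number are invented for illustration) might look like:

```yaml
modules:
    processors:
        data:
            -
                oid: exampleCpuTable
                value: exampleCpuUsage
                num_oid: '.1.3.6.1.4.1.99999.2.1.1.4.{{ $index }}'
                descr: 'Processor {{ $index }}'
                skip_values: -1
```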
Accessing values within yaml:
{{ $index }} The index after the given oid
{{ $count }} The count of entries (starting with 1)
{{ $oid }} Any oid in the table or pre-fetched"},{"location":"Developing/os/Mem-CPU-Information/#custom-processor-discovery-and-polling_1","title":"Custom Processor Discovery and Polling","text":"
If you need to implement custom discovery or polling you can implement the ProcessorDiscovery interface and the ProcessorPolling interface in the OS class.
OS Class files reside under LibreNMS\\OS
<?php\nnamespace LibreNMS\\OS;\n\nuse LibreNMS\\Device\\Processor;\nuse LibreNMS\\Interfaces\\Discovery\\ProcessorDiscovery;\nuse LibreNMS\\Interfaces\\Polling\\ProcessorPolling;\nuse LibreNMS\\OS;\n\nclass ExampleOS extends OS implements ProcessorDiscovery, ProcessorPolling\n{\n /**\n * Discover processors.\n * Returns an array of LibreNMS\\Device\\Processor objects that have been discovered\n *\n * @return array Processors\n */\n public function discoverProcessors()\n {\n // discovery code here\n }\n\n /**\n * Poll processor data. This can be implemented if custom polling is needed.\n *\n * @param array $processors Array of processor entries from the database that need to be polled\n * @return array of polled data\n */\n public function pollProcessors(array $processors)\n {\n // polling code here\n }\n}\n
"},{"location":"Developing/os/Settings/","title":"Optional OS Settings","text":"
This page documents settings that can be set in the os yaml files or in config.php. All settings listed here are optional. If they are not set, the global default will be used.
"},{"location":"Developing/os/Settings/#user-override-in-configphp","title":"User override in config.php","text":"
Users can override these settings in their config.php.
By default we use ifDescr to label ports/interfaces. Setting either ifname or ifalias will override that. Only set one of these. ifAlias is user supplied. ifindex will append the ifindex to the port label.
ifname: true\nifalias: true\n\nifindex: true\n
"},{"location":"Developing/os/Settings/#poller-and-discovery-modules","title":"Poller and Discovery Modules","text":"
The various discovery and poller modules can be enabled or disabled per OS. The defaults are usually reasonable, so likely you won't want to change more than a few. These modules can be enabled or disabled per-device in the webui and per os or globally in config.php. Usually, a poller module will not work if its corresponding discovery module is not enabled.
You should avoid setting these to false in the OS definitions unless it has a significant negative impact on polling. Setting modules in the definition reduces user control of modules.
Some devices have buggy snmp implementations and don't respond well to the more efficient snmpbulkwalk. To disable snmpbulkwalk and only use snmpwalk for an OS set the following.
snmp_bulk: false\n
If only some specific OIDs fail with snmpbulkwalk, you can disable just those OIDs. This needs to match exactly the OID being walked by LibreNMS. MIB::oid is preferred to prevent name collisions.
oids:\n no_bulk:\n - UCD-SNMP-MIB::laLoadInt\n
"},{"location":"Developing/os/Settings/#limit-the-oids-per-snmpget","title":"Limit the oids per snmpget","text":"
Tests ensure LibreNMS works as expected, now and in the future. New OS should provide as much test data as needed, and additional test data for existing OS is welcome.
Saved snmp data can be found in tests/snmpsim/*.snmprec and saved database data can be found in tests/data/*.json. Please review this for any sensitive data before submitting. When replacing data, make sure it is modified in a consistent manner.
We utilise snmpsim to do unit testing. For OS discovery, we can mock snmpsim, but for other tests you will need it installed and functioning. We run snmpsim during our integration tests, but not by default when running lnms dev:check. You can install snmpsim with the command pip3 install snmpsim.
"},{"location":"Developing/os/Test-Units/#capturing-test-data","title":"Capturing test data","text":"If test data already exists
If test data already exists, but is for a different device/configuration with the same OS, then you should use the --variant (-v) option to specify a different variant of the OS; this will be tested completely separately from other variants. If there is only one variant, please do not specify one.
./scripts/collect-snmp-data.php is provided to make it easy to collect data for tests. Running collect-snmp-data.php with the --hostname (-h) allows you to capture all data used to discover and poll a device already added to LibreNMS. Make sure to re-run the script if you add additional support. Check the command-line help for more options.
"},{"location":"Developing/os/Test-Units/#2-save-test-data","title":"2. Save test data","text":"
After you have collected snmp data, run ./scripts/save-test-data.php with the --os (-o) option to dump the post discovery and post poll database entries to json files. This step requires snmpsim, if you are having issues, the maintainers may help you generate it from the snmprec you created in the previous step.
Generally, you will only need to collect data once. After you have the data you need in the snmprec file, you can just use save-test-data.php to update the database dump (json) after that.
Note: To run tests, ensure you have executed ./scripts/composer_wrapper.php install from your LibreNMS root directory. This will read composer.json and install any dependencies required.
After you have saved your test data, you should run lnms dev:check to verify the tests pass.
To run the full suite of tests enable database and snmpsim reliant tests: lnms dev:check unit --db --snmpsim
Snmprec files are simple files that store the snmp data. The data format is simple with three columns: numeric oid, type code, and data. Here is an example snippet.
During testing LibreNMS will use any info in the snmprec file for snmp calls. This one provides sysDescr (.1.3.6.1.2.1.1.1.0, 4 = Octet String) and sysObjectID (.1.3.6.1.2.1.1.2.0, 6 = Object Identifier), which is the minimum that should be provided for new snmprec files.
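A minimal snmprec snippet providing exactly those two values (the sysDescr string and the enterprise OID below are invented) could be:

```
1.3.6.1.2.1.1.1.0|4|ExampleOS v1.2.3 router
1.3.6.1.2.1.1.2.0|6|1.3.6.1.4.1.99999.1.1
```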
To look up the numeric OID and type of a string OID with snmptranslate:
If the base os (.snmprec) already contains test data for the module you are testing or that data conflicts with your new data, you must use a variant to store your test data (-v)."},{"location":"Developing/os/Test-Units/#add-initial-detection","title":"Add initial detection","text":"
Add the device to LibreNMS. In this example it is generic and device_id = 42
Run ./scripts/collect-snmp-data.php -h 42, initial snmprec will be created
Add initial detection for example-os
Run discovery to make sure it detects properly ./discovery.php -h 42
Add any additional os items like version, hardware, features, or serial.
If there is additional snmp data required, run ./scripts/collect-snmp-data.php -h 42
Run ./scripts/save-test-data.php -o example-os to update the dumped database data.
Review data. If you modified the snmprec or code (don't modify json manually) run ./scripts/save-test-data.php -o example-os -m os
Run lnms dev:check unit --db --snmpsim
If the tests succeed submit a pull request
"},{"location":"Developing/os/Test-Units/#additional-module-support-or-test-data","title":"Additional module support or test data","text":"
Add code to support module or support already exists.
./scripts/collect-snmp-data.php -h 42 -m <module>, this will add more data to the snmprec file
Review data. If you modified the snmprec (don't modify json manually) run ./scripts/save-test-data.php -o example-os -m <module>
Run lnms dev:check unit --db --snmpsim
If the tests succeed submit a pull request
"},{"location":"Developing/os/Test-Units/#json-application-test-writing-using-scriptsjson-app-toolphp","title":"JSON Application Test Writing Using ./scripts/json-app-tool.php","text":"
First you will need a good example JSON output produced via SNMP extend in question.
Read the help via ./scripts/json-app-tool.php -h.
Generate the SNMPrec data via ./scripts/json-app-tool.php -a appName -s > ./tests/snmpsim/linux_appName-v1.snmprec. If the SNMP extend name OID is different from the application name, then you will need to pass the -S flag to override that.
Generate the test JSON data via ./scripts/json-app-tool.php -a appName -t > ./tests/data/linux_appName-v1.json.
Update the generated './tests/data/linux_appName-v1.json' making sure that all the expected metrics are present. This assumes that everything under .data in the JSON will be collapsed and used.
During test runs, if it does not appear to be detecting the app and the app name and SNMP extend name OID differ, make sure that -S is set properly and that 'includes/discovery/applications.inc.php' has been updated.
This document will guide you through adding wireless sensors for your new wireless device.
Currently we have support for the following wireless metrics along with the values we expect to see the data in:
Type (unit), discovery interface: description
- ap-count (%), WirelessApCountDiscovery: The number of APs attached to this controller
- capacity (%), WirelessCapacityDiscovery: The % of operating rate vs theoretical max
- ccq (%), WirelessCcqDiscovery: The Client Connection Quality
- channel (count), WirelessChannelDiscovery: The channel, use of frequency is preferred
- cell (count), WirelessCellDiscovery: The cell in a multicell technology
- clients (count), WirelessClientsDiscovery: The number of clients connected to/managed by this device
- distance (km), WirelessDistanceDiscovery: The distance of a radio link in kilometers
- error-rate (bps), WirelessErrorRateDiscovery: The rate of errored packets or bits, etc.
- error-ratio (%), WirelessErrorRatioDiscovery: The percent of errored packets or bits, etc.
- errors (count), WirelessErrorsDiscovery: The total number of errored packets or bits, etc.
- frequency (MHz), WirelessFrequencyDiscovery: The frequency of the radio in MHz, channels can be converted
- mse (dB), WirelessMseDiscovery: The Mean Square Error
- noise-floor (dBm), WirelessNoiseFloorDiscovery: The amount of noise received by the radio
- power (dBm), WirelessPowerDiscovery: The power of transmit or receive, including signal level
- quality (%), WirelessQualityDiscovery: The % of quality of the link, 100% = perfect link
- rate (bps), WirelessRateDiscovery: The negotiated rate of the connection (not data transfer)
- rssi (dBm), WirelessRssiDiscovery: The Received Signal Strength Indicator
- snr (dB), WirelessSnrDiscovery: The Signal to Noise ratio, which is signal - noise floor
- sinr (dB), WirelessSinrDiscovery: The Signal-to-Interference-plus-Noise Ratio
- rsrq (dB), WirelessRsrqDiscovery: The Reference Signal Received Quality
- rsrp (dBm), WirelessRsrpDiscovery: The Reference Signals Received Power
- xpi (dBm), WirelessXpiDiscovery: The Cross Polar Interference values
- ssr (dB), WirelessSsrDiscovery: The Signal strength ratio, the ratio (or difference) of vertical rx power to horizontal rx power
- utilization (%), WirelessUtilizationDiscovery: The % of utilization compared to the current rate
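The table notes that frequency is preferred over channel and that "channels can be converted". As an illustration of that conversion (standard 802.11 2.4 GHz band arithmetic, not LibreNMS code):

```python
# Illustration only: 2.4 GHz Wi-Fi channels 1-13 are spaced 5 MHz apart
# starting at 2412 MHz, so the centre frequency is 2407 + 5 * channel.

def channel_to_mhz(channel: int) -> int:
    """Return the centre frequency in MHz for a 2.4 GHz Wi-Fi channel."""
    if channel == 14:
        return 2484  # channel 14 (Japan) does not follow the 5 MHz spacing
    if 1 <= channel <= 13:
        return 2407 + 5 * channel
    raise ValueError(f"unsupported 2.4 GHz channel: {channel}")

print(channel_to_mhz(1))   # 2412
print(channel_to_mhz(6))   # 2437
```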
You will need to create a new OS class for your os if one doesn't exist under LibreNMS/OS. The name of this file should be the os name in camel case for example airos -> Airos, ios-wlc -> IosWlc.
Your new OS class should extend LibreNMS\\OS and implement the interfaces for the sensors your os supports.
namespace LibreNMS\\OS;\n\nuse LibreNMS\\Device\\WirelessSensor;\nuse LibreNMS\\Interfaces\\Discovery\\Sensors\\WirelessClientsDiscovery;\nuse LibreNMS\\OS;\n\nclass Airos extends OS implements WirelessClientsDiscovery\n{\n public function discoverWirelessClients()\n {\n $oid = '.1.3.6.1.4.1.41112.1.4.5.1.15.1'; //UBNT-AirMAX-MIB::ubntWlStatStaCount.1\n return array(\n new WirelessSensor('clients', $this->getDeviceId(), $oid, 'airos', 1, 'Clients')\n );\n }\n}\n
All discovery interfaces will require you to return an array of WirelessSensor objects.
new WirelessSensor() Accepts the following arguments:
$type = Required. This is the sensor class from the table above (e.g. clients).
$device_id = Required. You can get this value with $this->getDeviceId()
$oids = Required. This must be the numerical OID for where the data can be found, i.e .1.2.3.4.5.6.7.0. If this is an array of oids, you should probably specify an $aggregator.
$subtype = Required. This should be the OS name, i.e airos.
$index = Required. This must be unique for this sensor type, device and subtype. Typically it's the index from the table being walked or it could be the name of the OID if it's a single value.
$description = Required. This is a descriptive value for the sensor. Shown to the user, if this is a per-ssid statistic, using SSID: $ssid here is appropriate
$current = Defaults to null. Can be used to set the current value on discovery. If this is null the values will be polled right away and if they do not return valid value(s), the sensor will not be discovered. Supplying a value here implies you have already verified this sensor is valid.
$multiplier = Defaults to 1. This is used to multiply the returned value.
$divisor = Defaults to 1. This is used to divide the returned value.
$aggregator = Defaults to sum. Valid values: sum, avg. This will combine multiple values from multiple oids into one.
$access_point_id = Defaults to null. If this is a wireless controller, you can link sensors to entries in the access_points table.
$high_limit = Defaults to null. Sets the high limit for the sensor, used in alerting to report out range sensors.
$low_limit = Defaults to null. Sets the low threshold limit for the sensor, used in alerting to report out range sensors.
$high_warn = Defaults to null. Sets the high warning limit for the sensor, used in alerting to report near out of range sensors.
$low_warn = Defaults to null. Sets the low warning limit for the sensor, used in alerting to report near out of range sensors.
$entPhysicalIndex = Defaults to null. Sets the entPhysicalIndex to be used to look up further hardware if available.
$entPhysicalIndexMeasured = Defaults to null. Sets the type of entPhysicalIndex used, i.e ports.
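As a rough sketch of the arithmetic those arguments describe (the helper names below are invented, not LibreNMS internals): each raw SNMP value is scaled by $multiplier and $divisor, and values from multiple oids are combined by the $aggregator (sum or avg).

```python
# Hypothetical sketch of how multiplier, divisor and aggregator combine
# raw SNMP values into a single sensor reading.

def scale(raw, multiplier=1, divisor=1):
    # value is multiplied first, then divided
    return raw * multiplier / divisor

def aggregate(values, aggregator="sum"):
    if aggregator == "sum":
        return sum(values)
    if aggregator == "avg":
        return sum(values) / len(values)
    raise ValueError(f"unknown aggregator: {aggregator}")

# e.g. client counts from two radios, combined with the default "sum":
readings = [scale(12), scale(30)]
print(aggregate(readings))         # 42.0
print(aggregate(readings, "avg"))  # 21.0
```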
Polling is done automatically based on the discovered data. If for some reason you need to override polling, you can implement the required polling interface in LibreNMS/Interfaces/Polling/Sensors. Using the polling interfaces should be avoided if possible.
Graphing is performed automatically for wireless sensors, no custom graphing is required or supported.
The agent can be used to gather data from remote systems; it allows LibreNMS to be used in combination with check_mk (found here). The agent can be extended to include data about applications on the remote system.
5: Copy each of the scripts from agent-local/ that you require to be graphed into /usr/lib/check_mk_agent/local. You can find detailed setup instructions for specific applications above.
6: Make each one executable that you want to use with chmod +x /usr/lib/check_mk_agent/local/$script
8: Login to the LibreNMS web interface and edit the device you want to monitor. Under the modules section, ensure that unix-agent is enabled.
9: Then under Applications, enable the apps that you plan to monitor.
10: Wait for around 10 minutes and you should start seeing data in your graphs under Apps for the device.
"},{"location":"Extensions/Agent-Setup/#restrict-the-devices-on-which-the-agent-listens-linux-systemd","title":"Restrict the devices on which the agent listens: Linux systemd","text":"
If you want to restrict which network adapter the agent listens on, do the following:
1: Edit /etc/systemd/system/check_mk.socket
2: Under the [Socket] section, add a new line BindToDevice= and the name of your network adapter.
3: If the script has already been enabled in systemd, you may need to issue a systemctl daemon-reload and then systemctl restart check_mk.socket
Grab version 1.2.6b5 of the check_mk agent from the check_mk github repo (exe/msi or compile it yourself depending on your usage): https://github.com/tribe29/checkmk/tree/v1.2.6b5/agents/windows
Run the msi / exe
Make sure your LibreNMS instance can reach TCP port 6556 on your target.
When using the snmp extend method, the application discovery module will pick up which applications you have set up for monitoring automatically, even if the device is already in LibreNMS. The application discovery module is enabled by default for most *nix operating systems, but in some cases you will need to manually enable the application discovery module.
One major thing to keep in mind when using SNMP extend is that these scripts run as the snmpd user, which may be an unprivileged user. In these situations you need to use sudo.
To test if you need sudo, first check the user snmpd is running as. Then test if you can run the extend script as that user without issue. For example, if snmpd is running as 'Debian-snmp' and we want to run the extend for proxmox, we check that the following runs without error:
sudo -u Debian-snmp /usr/local/bin/proxmox\n
If it doesn't work, then you will need to use sudo with the extend command. For the example above, that would mean adding the line below to the sudoers file:
Debian-snmp ALL = NOPASSWD: /usr/local/bin/proxmox\n
Finally we would need to add sudo to the extend command, which would look like this for proxmox:
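Following the extend syntax used in the snmpd.conf examples elsewhere on this page, and the paths from the proxmox example above, the resulting line would be along these lines:

```
extend proxmox /usr/bin/sudo /usr/local/bin/proxmox
```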
"},{"location":"Extensions/Applications/#json-return-optimization-using-librenms_return_optimizer","title":"JSON Return Optimization Using librenms_return_optimizer","text":"
While json_app_get does allow more complex and larger data to be easily returned by an extend and then worked with, this can also sometimes result in large returns that occasionally don't play nice with SNMP on some networks.
librenms_return_optimizer fixes this by taking the extend output piped to it, gzipping it, and then converting it to base64. The latter is needed as net-snmp does not play nice with binary data, converting most non-printable characters to .. Base64 adds a bit of overhead to the gzipped data, but the result is still usually around a third of the size of the original JSON return.
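A minimal sketch of the transformation described above (this is an illustration in Python, not the librenms_return_optimizer tool itself):

```python
# Illustration: gzip the JSON output of an extend, then base64-encode it so
# only printable characters ever pass through net-snmp.
import base64
import gzip
import json

# a stand-in for some repetitive extend output
extend_output = json.dumps(
    {"version": 1, "data": {"ports": [{"name": f"eth{i}", "state": "up"} for i in range(50)]}}
)

optimized = base64.b64encode(gzip.compress(extend_output.encode())).decode()

# Repetitive JSON compresses well even after the ~33% base64 overhead:
print(len(optimized) < len(extend_output))  # True

# The receiving side simply reverses the two steps:
restored = gzip.decompress(base64.b64decode(optimized)).decode()
print(restored == extend_output)  # True
```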
The change required is fairly simple. So for the portactivity example below...
The following apps have extends that have native support for this, if configured to do so.
suricata
"},{"location":"Extensions/Applications/#enable-the-application-discovery-module","title":"Enable the application discovery module","text":"
Edit the device for which you want to add this support
Click on the Modules tab and enable the applications module.
This will be automatically saved, and you should get a green confirmation pop-up message.
After you have enabled the application module, it would be wise to then also enable which applications you want to monitor, in the rare case where LibreNMS does not automatically detect it.
Note: Only do this if an application was not auto-discovered by LibreNMS during discovery and polling.
"},{"location":"Extensions/Applications/#enable-the-applications-to-be-discovered","title":"Enable the application(s) to be discovered","text":"
Go to the device you have just enabled the application module for.
Click on the Applications tab and select the applications you want to monitor.
This will also be automatically saved, and you should get a green confirmation pop-up message.
The unix-agent does not have a discovery module, only a poller module, and that poller module is disabled by default; it needs to be manually enabled if using the agent. Some applications will be automatically enabled by the unix-agent poller module, but it is better to ensure that your application is enabled for monitoring. You can check by following the steps under the SNMP Extend heading.
Create the cache directory, '/var/cache/librenms/' and make sure that it is owned by the user running the SNMP daemon.
mkdir -p /var/cache/librenms/\n
Verify it is working by running /etc/snmp/apache-stats.py. The urllib3 package for Python 3 needs to be installed; on Debian-based systems, for example, you can achieve this by issuing:
apt-get install python3-urllib3\n
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend apache /etc/snmp/apache-stats.py\n
Restart snmpd on your host
Test by running
snmpwalk <various options depending on your setup> localhost NET-SNMP-EXTEND-MIB::nsExtendOutput2Table\n
Install the agent on this device if it isn't already and copy the apache script to /usr/lib/check_mk_agent/local/
Verify it is working by running /usr/lib/check_mk_agent/local/apache (if you get an error like \"Can't locate LWP/Simple.pm\", libwww-perl needs to be installed: apt-get install libwww-perl)
Create the cache directory, '/var/cache/librenms/' and make sure that it is owned by the user running the SNMP daemon.
mkdir -p /var/cache/librenms/\n
On the device page in LibreNMS, edit your host and check Apache under the Applications tab.
Verify it is working by running /etc/snmp/asterisk
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend asterisk /etc/snmp/asterisk\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Restart your bind9/named after changing the configuration.
Verify that everything works by executing rndc stats && cat /var/cache/bind/stats. In case you get a Permission Denied error, make sure you changed the ownership correctly.
Also be aware that this file is appended to each time rndc stats is called. Given this it is suggested you setup file rotation for it. Alternatively you can also set zero_stats to 1 in the config.
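The stats file location itself is set in named.conf. A minimal sketch, assuming the default path used by the extend's stats_file setting (adjust to match your own config):

```
options {
    // path must match the stats_file setting used by the extend
    statistics-file "/var/cache/bind/stats";
};
```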
The script for this also requires the Perl module File::ReadBackwards.
If it is not available, it can be installed by cpan -i File::ReadBackwards.
You may possibly need to configure the agent/extend script as well.
The config file's path defaults to the same path as the script, but with .config appended. So if the script is located at /etc/snmp/bind, the config file will be /etc/snmp/bind.config. Alternatively you can also specify a config via -c $file.
Anything starting with a # is comment. The format for variables are $variable=$value. Empty lines are ignored. Spaces and tabs at either the start or end of a line are ignored.
Content of an example /etc/snmp/bind.config . Please edit with your own settings.
rndc = The path to rndc. Default: /usr/bin/env rndc\ncall_rndc = A 0/1 boolean on whether or not to call rndc stats.\n Suggest to set to 0 if using netdata. Default: 1\nstats_file = The path to the named stats file. Default: /var/cache/bind/stats\nagent = A 0/1 boolean for if this is being used as a LibreNMS\n agent or not. Default: 0\nzero_stats = A 0/1 boolean for if the stats file should be zeroed\n first. Default: 0 (1 if guessed)\n
If you want to guess at the configuration, call the script with -g and it will print out what it thinks it should be.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Install the agent on this device if it isn't already and copy the script to /usr/lib/check_mk_agent/local/bind via wget https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/bind -O /usr/lib/check_mk_agent/local/bind
Due to the lack of SNMP support in the BIRD daemon, this application extracts all configured BGP protocols and parses it into LibreNMS. This application supports both IPv4 and IPv6 Peer processing.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend bird2 '/usr/bin/sudo /usr/sbin/birdc -r show protocols all'\n
Edit your sudo users (usually visudo) and add at the bottom:
Debian-snmp ALL=(ALL) NOPASSWD: /usr/sbin/birdc\n
If your SNMP daemon is running as a user that isn't Debian-snmp, make sure that user has the correct permissions to execute birdc.
Verify that the time format for bird2 is defined. Otherwise the default value, iso short ms (hh:mm:ss), will be used, which is not compatible with the datetime parsing logic used to parse the output of the bird show command. timeformat protocol is the important one to be defined for the bird2 app parsing logic to work.
Example starting point using Bird2 shorthand iso long (YYYY-MM-DD hh:mm:ss):
timeformat base iso long;\ntimeformat log iso long;\ntimeformat protocol iso long;\ntimeformat route iso long;\n
Timezone can be manually specified, for example \"%F %T %z\" (YYYY-MM-DD hh:mm:ss +11:45). See the Bird 2 docs for more information.
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
a. (Required): Key 'domains' contains a list of domains to check. b. (Optional): You can define a port. By default it checks on port 443. c. (Optional): You may define a certificate location for self-signed certificates."},{"location":"Extensions/Applications/#snmp-extend_6","title":"SNMP Extend","text":"
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend certificate /etc/snmp/certificate.py\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The config file is a ini file and handled by Config::Tiny.
- mode :: single or multi, for if this is a single repo or for\n multiple repos.\n - Default :: single\n\n- repo :: Directory for the borg backup repo.\n - Default :: undef\n\n- passphrase :: Passphrase for the borg backup repo.\n - Default :: undef\n\n- passcommand :: Passcommand for the borg backup repo.\n - Default :: undef\n
For single repos all those variables are in the root section of the config. So let's say the repo is at '/backup/borg' with a passphrase of '1234abc':
repo=/backup/borg\npassphrase=1234abc\n
For multi, each section outside of the root represents a repo. So if there is '/backup/borg1' with a passphrase of 'foobar' and '/backup/derp' with a passcommand of 'pass show backup' it would be like below.
mode=multi\n\n[borg1]\nrepo=/backup/borg1\npassphrase=foobar\n\n[derp]\nrepo=/backup/derp\npasscommand=pass show backup\n
If 'passphrase' and 'passcommand' are both specified, then passcommand is used.
The metrics are all from .data.totals in the extend return.
Value Type Description errored repos Total number of repos that info could not be fetched for. locked repos Total number of locked repos locked_for seconds Longest time any repo has been locked. time_since_last_modified seconds Largest time - mtime for the repo nonce total_chunks chunks Total number of chunks total_csize bytes Total compressed size of all archives in all repos. total_size bytes Total uncompressed size of all archives in all repos. total_unique_chunks chunks Total number of unique chunks in all repos. unique_csize bytes Total deduplicated size of all archives in all repos. unique_size bytes Total deduplicated uncompressed size of all archives in all repos."},{"location":"Extensions/Applications/#capev2","title":"CAPEv2","text":"
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend power-stat /etc/snmp/power-stat.sh\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Copy the shell script to the desired host. By default, it will only show the status for containers that are running. To include all containers modify the constant in the script at the top of the file and change it to ONLY_RUNNING_CONTAINERS = False
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend entropy /etc/snmp/entropy.sh\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
If not specified, \"/usr/bin/env fail2ban-client\" is used.
Restart snmpd on your host
If you wish to use caching, add the following to /etc/crontab.
*/3 * * * * root /etc/snmp/fail2ban -u\n
Restart or reload cron on your system.
If you have more than a few jails configured, you may need to use caching, as each jail needs to be polled and fail2ban-client can't do so in a timely manner for more than a few. This can result in other SNMP information failing to be polled.
For additional details of the switches, please see the POD at the top of the script itself.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The FreeRADIUS application extension requires that status_server be enabled in your FreeRADIUS config. For more information see: https://wiki.freeradius.org/config/Status
You should note that status requests increment the FreeRADIUS request stats. So LibreNMS polls will ultimately be reflected in your stats/charts.
Go to your FreeRADIUS configuration directory (usually /etc/raddb or /etc/freeradius).
cd sites-enabled
ln -s ../sites-available/status status
Restart FreeRADIUS.
You should be able to test with the radclient as follows...
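A sketch of such a test, assuming the stock status virtual server defaults (port 18121 and secret adminsecret, as shipped in sites-available/status; adjust if you changed them):

```
echo "Message-Authenticator = 0x00" | radclient localhost:18121 status adminsecret
```

A successful reply (Access-Accept with FreeRADIUS-Total-* attributes) confirms the status server is answering.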
If you've made any changes to the FreeRADIUS status_server config (secret key, port, etc.) edit freeradius.sh and adjust the config variable accordingly.
Edit your snmpd.conf file and add:
extend freeradius /etc/snmp/freeradius.sh\n
Restart snmpd on the host in question.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
If you've made any changes to the FreeRADIUS status_server config (secret key, port, etc.) edit freeradius.sh and adjust the config variable accordingly.
Edit the freeradius.sh script and set the variable 'AGENT' to '1' in the config.
Configure FSCLI in the script. You may also have to create an /etc/fs_cli.conf file if your fs_cli command requires authentication.
Verify it is working by running /etc/snmp/freeswitch
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend freeswitch /etc/snmp/freeswitch\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend gpsd /etc/snmp/gpsd\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under SNMP Extend heading at the top of the page.
Set it up to be run by root via cron. Yes, you can call this script directly from snmpd, but be aware, especially with Libvirt, that there is a very real possibility of the snmpget timing out, especially if a VM is spinning up/down, as virsh domstats can block for a few seconds in that case.
A small python3 script that reports current DHCP leases stats and pool usage of ISC DHCP Server.
You also have to install dhcpd-pools and the required Perl modules. Under Ubuntu/Debian just run apt install cpanminus ; cpanm Net::ISC::DHCPd::Leases Mime::Base64 File::Slurp or under FreeBSD pkg install p5-JSON p5-MIME-Base64 p5-App-cpanminus p5-File-Slurp ; cpanm Net::ISC::DHCPd::Leases.
Option Description -c $file Path to dhcpd.conf. -l $file Path to lease file. -Z Enable GZip+Base64 compression. -d Do not de-dup. -w $file File to write it out to.
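Putting those switches together, a hedged example invocation (the script path and the Debian-style dhcpd paths are assumptions; match them to your system):

```
/etc/snmp/dhcp -c /etc/dhcp/dhcpd.conf -l /var/lib/dhcp/dhcpd.leases -w /var/cache/dhcp.json -Z
```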
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Configure the config at /usr/local/etc/logsize.conf. You can find the documentation for the config file in the extend. Below is a small example.
# monitor log sizes of logs directly under /var/log\n[sets.var_log]\ndir=\"/var/log/\"\n\n# monitor remote logs from network devices\n[sets.remote_network]\ndir=\"/var/log/remote/network/\"\n\n# monitor remote logs from windows sources\n[sets.remote_windows]\ndir=\"/var/log/remote/windows/\"\n\n# monitor suricata flows log sizes\n[sets.suricata_flows]\ndir=\"/var/log/suricata/flows/current\"\n
If the directories are all readable by snmpd, this script can be run via snmpd directly. Otherwise it needs to be set up in cron. Similarly, if processing a large number of files takes the script a while to run, it may also need to be set up in cron.
linux_config_files is an application intended to monitor a Linux distribution's configuration files via that distribution's configuration management tool/system. At this time, ONLY RPM-based (Fedora/RHEL) SYSTEMS ARE SUPPORTED utilizing the rpmconf tool. The linux_config_files application collects the total count of configuration files that are out of sync and graphs that number.
Fedora/RHEL: Rpmconf is a utility that analyzes rpm configuration files using the RPM Package Manager. Rpmconf reports when a new configuration file standard has been issued for an upgraded/downgraded piece of software. Typically, rpmconf is used to provide a diff of the current configuration file versus the new, standard configuration file. The administrator can then choose to install the new configuration file or keep the old one.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend mailscanner /etc/snmp/mailscanner.php\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend mdadm /etc/snmp/mdadm\n
Verify it is working by running
sudo /etc/snmp/mdadm\n
Restart snmpd on your host
sudo service snmpd restart\n
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend memcached /etc/snmp/memcached\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Install your munin scripts into the above directory.
To create your own custom munin scripts, please see this example:
#!/bin/bash\nif [ \"$1\" = \"config\" ]; then\n echo 'graph_title Some title'\n echo 'graph_args --base 1000 -l 0' #not required\n echo 'graph_vlabel Some label'\n echo 'graph_scale no' #not required, can be yes/no\n echo 'graph_category system' #Choose something meaningful, can be anything\n echo 'graph_info This graph shows something awesome.' #Short desc\n echo 'foobar.label Label for your unit' # Repeat these two lines as much as you like\n echo 'foobar.info Desc for your unit.'\n exit 0\nfi\necho -n \"foobar.value \" $(date +%s) #Populate a value, here unix-timestamp\n
Create the cache directory, '/var/cache/librenms/' and make sure that it is owned by the user running the SNMP daemon.
mkdir -p /var/cache/librenms/\n
The MySQL script requires PHP-CLI and the PHP MySQL extension, so please verify those are installed.
CentOS (May vary based on PHP version)
yum install php-cli php-mysql\n
Debian (May vary based on PHP version)
apt-get install php-cli php-mysql\n
Unlike most other scripts, the MySQL script requires a configuration file mysql.cnf in the same directory as the extend or agent script with following content:
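The content is a small PHP snippet defining the connection settings. A minimal sketch (the credentials are placeholders, and the exact variable names should be checked against the mysql script you installed):

```
<?php
// mysql.cnf -- connection settings read by the MySQL extend/agent script
$mysql_user = 'librenms';   // placeholder credentials -- use your own
$mysql_pass = 'CHANGEME';
$mysql_host = 'localhost';  // use 127.0.0.1 to force a TCP connection
$mysql_port = 3306;
```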
Note that depending on your MySQL installation (a chrooted install, for example), you may have to specify 127.0.0.1 instead of localhost. localhost makes a MySQL connection via the MySQL socket, while 127.0.0.1 makes a standard IP connection to MySQL.
Note also that if you get a MySQL error Uncaught TypeError: mysqli_num_rows(): Argument #1, this is because you are using a newer MySQL version which doesn't support UNBLOCKING for slave statuses, so you need to also include the line $chk_options['slave'] = false; in mysql.cnf to skip checking slave statuses.
Edit /etc/snmp/mysql to set your MySQL connection constants or declare them in /etc/snmp/mysql.cnf (new file)
Edit your snmpd.conf file and add:
extend mysql /etc/snmp/mysql\n
Restart snmpd.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend nginx /etc/snmp/nginx\n
(Optional) If you have SELinux in Enforcing mode, you must add a module so the script can request /nginx-status:
cat << EOF > snmpd_nginx.te\nmodule snmpd_nginx 1.0;\n\nrequire {\n type httpd_t;\n type http_port_t;\n type snmpd_t;\n class tcp_socket name_connect;\n}\n\n#============= snmpd_t ==============\n\nallow snmpd_t http_port_t:tcp_socket name_connect;\nEOF\ncheckmodule -M -m -o snmpd_nginx.mod snmpd_nginx.te\nsemodule_package -o snmpd_nginx.pp -m snmpd_nginx.mod\nsemodule -i snmpd_nginx.pp\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend ntp-client /etc/snmp/ntp-client\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
"},{"location":"Extensions/Applications/#ntp-server-aka-ntpd","title":"NTP Server aka NTPD","text":"
A shell script that gets stats from ntp server (ntpd).
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend ntp-server /etc/snmp/ntp-server.sh\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit the snmpd.conf file to include the extend by adding the following line to the end of the config file:
extend chronyd /etc/snmp/chrony\n
Note: Some distributions need sudo-permissions for the script to work with SNMP Extend. See the instructions on the section SUDO for more information.
Restart snmpd service on the host
Application should be auto-discovered and its stats presented on the Apps-page on the host. Note: Applications module needs to be enabled on the host or globally for the statistics to work as intended.
Update the root crontab. This is required, as the script will likely time out otherwise. Use */1 if you want to have the most recent stats when polled, or */5 if you just want exactly a 5-minute interval.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend ogs /etc/snmp/rocks.sh\n
Restart snmpd.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
A small shell script that checks your system package manager for any available updates. Supports apt-get/pacman/yum/zypper package managers.
For pacman users automatically refreshing the database, it is recommended you use an alternative database location --dbpath=/var/lib/pacman/checkupdate
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend osupdate /etc/snmp/osupdate\n
Restart snmpd on your host
Note: apt-get depends on an updated package index. There are several ways to have your system run apt-get update automatically. The easiest is to create /etc/apt/apt.conf.d/10periodic and paste the following into it: APT::Periodic::Update-Package-Lists \"1\";. If you have apticron, cron-apt or apt-listchanges installed and configured, chances are that packages are already updated periodically.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend phpfpmsp /etc/snmp/php-fpm\n
Create the config file /usr/local/etc/php-fpm_extend.json. Alternate locations may be specified using the -f switch. For more information on the expected format, see /etc/snmp/php-fpm --help.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
To get all data you must get your API auth token from the Pi-hole server and change the API_AUTH_KEY entry inside the SNMP script.
Restart snmpd.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Make sure the cache file in /etc/snmp/postfixdetailed is some place that snmpd can write to. This file is used for tracking changes between various values each time it is called by snmpd. Also make sure the path for pflogsumm is correct.
Run /etc/snmp/postfixdetailed to create the initial cache file so you don't end up with some crazy initial starting value. Please note that each time /etc/snmp/postfixdetailed is run, the cache file is updated, so if this happens in between LibreNMS polls then the values will be thrown off for that polling period.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
NOTE: If using RHEL for your postfix server, qshape must be installed manually as it is not officially supported. CentOS 6 RPMs seem to work without issues.
Install the Nagios check check_postgres.pl on your system: https://github.com/bucardo/check_postgres
Verify the path to check_postgres.pl in /etc/snmp/postgres is correct.
(Optional) If you wish to change the DB username (default: pgsql), enable the postgres DB in totalling (e.g. set ignorePG to 0, default: 1), or set a hostname for check_postgres.pl to connect to (default: the Unix Socket postgresql is running on), then create the file /etc/snmp/postgres.config with the following contents (note that not all of them need be defined, just whichever you'd like to change):
DBuser=monitoring\nignorePG=0\nDBhost=localhost\n
Note that if you are using netdata or the like, you may wish to set ignorePG to 1 or otherwise that total will be very skewed on systems with light or moderate usage.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The LibreNMS polling host must be able to connect to port 8082 on the monitored device. The web-server must be enabled, see the Recursor docs: https://doc.powerdns.com/md/recursor/settings/#webserver
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
PowerMon tracks the power usage on your host and can report on both consumption and cost, using a python script installed on the host.
PowerMon consumption graph
Currently the script uses one of two methods to determine current power usage:
ACPI via libsensors
HP-Health (HP Proliant servers only)
The ACPI method is quite unreliable as it is usually only implemented by battery-powered devices, e.g. laptops. YMMV. However, it's possible to support any method as long as it can return a power value, usually in Watts.
TIP: You can achieve this by adding a method and a function for that method to the script. It should be called by getData() and return a dictionary.
Because the methods are unreliable for all hardware, you need to declare to the script which method to use. There are several options to assist with testing, see --help.
For this to work, the following log items need to be enabled for Privoxy.
debug 2 # show each connection status\ndebug 512 # Common Log Format\ndebug 1024 # Log the destination for requests Privoxy didn't let through, and the reason why.\ndebug 4096 # Startup banner and warnings\ndebug 8192 # Non-fatal errors\n
If your logfile is not at /var/log/privoxy/logfile, that may be changed via the -f option.
If privoxy-log-parser.pl is not found in your standard $PATH setting, you may need to call the extend via /usr/bin/env with a $PATH set to something that includes it.
Once that is done, just wait for the server to be rediscovered or just enable it manually.
Pwrstatd (commonly known as powerpanel) is an application/service available from CyberPower to monitor their UPS units over USB. It is currently capable of reading the status of only one UPS connected via USB at a time. The powerpanel software is available here: https://www.cyberpowersystems.com/products/software/power-panel-personal/
Note: If you are using Raspbian, the default user is Debian-snmp. Change snmp above to Debian-snmp. You can verify the user snmpd is using with ps aux | grep snmpd
Restart snmpd on PI host
"},{"location":"Extensions/Applications/#raspberry-pi-gpio-monitor","title":"Raspberry Pi GPIO Monitor","text":"
SNMP extend script to monitor your IO pins or sensor modules connected to your GPIO header.
1: Make sure you have wiringpi installed on your Raspberry Pi. In Debian-based systems for example you can achieve this by issuing:
apt-get install wiringpi\n
2: Download the script to your Raspberry Pi. wget https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/rpigpiomonitor.php -O /etc/snmp/rpigpiomonitor.php
3: (optional) Download the example configuration to your Raspberry Pi. wget https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/rpigpiomonitor.ini -O /etc/snmp/rpigpiomonitor.ini
4: Make the script executable: chmod +x /etc/snmp/rpigpiomonitor.php
5: Create or edit your rpigpiomonitor.ini file according to your needs.
6: Check your configuration with rpigpiomonitor.php -validate
7: Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
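The extend line itself is not shown above; following the pattern used by the other applications and the script path from step 2, it is presumably:

```
extend rpigpiomonitor /etc/snmp/rpigpiomonitor.php
```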
Install/Setup: For installing and setting up a local LibreNMS RRDCached, please see RRDCached.
Stats will be collected by: 1. Connecting directly to the associated device on port 42217 2. Monitoring through SNMP with SNMP extend, as outlined below 3. Connecting to the rrdcached server specified by the rrdcached setting
SNMP extend script to monitor your (remote) RRDCached via snmp
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend sdfsinfo /etc/snmp/sdfsinfo\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
url = Url how to get access to Seafile Server\nusername = Login to Seafile Server.\n It is important that used Login has admin privileges.\n Otherwise most API calls will be denied.\npassword = Password to the configured login.\naccount_identifier = Defines how user accounts are listed in RRD Graph.\n Options are: name, email\nhide_monitoring_account = With this Boolean you can hide the Account which you\n use to access Seafile API\n
Note: It is recommended to use a dedicated Administrator account for monitoring.
Set up a cronjob to run it. This ensures slow-to-poll disks won't result in errors.
*/5 * * * * /etc/snmp/smart -u -Z\n
Edit your snmpd.conf file and add:
extend smart /bin/cat /var/cache/smart\n
You will also need to create the config file, which defaults to the same path as the script, but with .config appended. So if the script is located at /etc/snmp/smart, the config file will be /etc/snmp/smart.config. Alternatively you can also specify a config via -c.
Anything starting with a # is a comment. The format for variables is $variable=$value. Empty lines are ignored. Spaces and tabs at either the start or end of a line are ignored. Any line without a matched variable or # is treated as a disk.
#This is a comment\ncache=/var/cache/smart\nsmartctl=/usr/bin/env smartctl\nuseSN=1\nada0\nada1\nda5 /dev/da5 -d sat\ntwl0,0 /dev/twl0 -d 3ware,0\ntwl0,1 /dev/twl0 -d 3ware,1\ntwl0,2 /dev/twl0 -d 3ware,2\n
The variables are as below.
cache = The path to the cache file to use. Default: /var/cache/smart\nsmartctl = The path to use for smartctl. Default: /usr/bin/env smartctl\nuseSN = If set to 1, it will use the disks SN for reporting instead of the device name.\n 1 is the default. 0 will use the device name.\n
A disk line can be as simple as just a disk name under /dev/. In the config above, the line \"ada0\" resolves to \"/dev/ada0\" and is called with no special argument. If a line has a space in it, everything before the space is treated as the disk name and is what is used for reporting, and everything after it is used as the argument to be passed to smartctl.
If you want to guess at the configuration, call it with -g and it will print out what it thinks it should be.
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Optionally set up nightly self tests for the disks. The extend will run the specified test on all configured disks if called with the -t flag and the name of the SMART test to run.
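A sketch of such a cron entry, running a weekly long self test (the schedule and the long test name are assumptions; pick whatever SMART test and timing suit your disks):

```
# Saturday 01:00: run a long SMART self test on all configured disks
0 1 * * 6 root /etc/snmp/smart -t long
```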
This is for replacing Nagios/Icinga or the LibreNMS service integration in regards to NRPE. This allows LibreNMS to query what checks were ran on the server and keep track of totals of OK, WARNING, CRITICAL, and UNKNOWN statuses.
The big advantages of this compared to NRPE are as below.
It does not need to know what checks are configured on it.
It also does not need to wait for the tests to run, as sneck is meant to be run via cron and then return the cached results when queried via SNMP, meaning a much faster response time, especially if slow checks are being performed.
Works over proxied SNMP connections.
Alert examples are included, and for setting up custom ones the metrics below are provided.
Metric Description ok Total OK checks warning Total WARNING checks critical Total CRITICAL checks unknown Total UNKNOWN checks errored Total checks that errored time_to_polling Difference in seconds between when polling data was generated and when polled time_to_polling_abs The absolute value of time_to_polling. check_$CHECK Exit status of a specific check $CHECK is equal to the name of the check in question. So foo would be check_foo
The standard Nagios/Icinga style exit codes are used and those are as below.
Exit Meaning 0 okay 1 warning 2 critical 3+ unknown
To use time_to_polling, it will need to be enabled via setting the config item below. The default is false. Unless set to true, this value will default to 0. If enabling this, one will want to make sure that NTP is in use everywhere, or it will alert if the difference goes over 540s.
Configure any of the checks you want to run in /usr/local/etc/sneck.conf. You can find it documented here.
Set it up in cron. This means you don't need to wait for all the checks to complete when polled via SNMP, which for long-running checks like SMART would mean timing out. It also means the extend does not need to be called via sudo.
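A hedged example crontab entry (the -u update flag and install path are assumptions; check sneck's own documentation for the exact invocation):

```
# Run the configured checks every 5 minutes and update the cache for SNMP polling
*/5 * * * * root /usr/local/bin/sneck -u > /dev/null 2> /dev/null
```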
For metrics the stats are migrated as below from the stats JSON.
f_drop_percent and drop_percent are computed based on the found data.
Instance Key Stats JSON Key uptime .stats.uptime total .stats.captured.total drop .stats.captured.drop ignore .stats.captured.ignore threshold .stats.captured.theshold after .stats.captured.after match .stats.captured.match bytes .stats.captured.bytes_total bytes_ignored .stats.captured.bytes_ignored max_bytes_log_line .stats.captured.max_bytes_log_line eps .stats.captured.eps f_total .stats.flow.total f_dropped .stats.flow.dropped
Those keys are appended with the name of the instance running with _ between the instance name and instance metric key. So uptime for ids would be ids_uptime.
The default is named 'ids' unless otherwise specified via the extend.
There is a special instance name of .total which is the total of all the instances. So if you want the total eps, the metric would be .total_eps. Also worth noting that the alert value is the highest one found among all the instances.
Any configuration of sagan_stat_check should be done in the cron setup. If the default does not work, check the docs for it at MetaCPAN for sagan_stat_check
The Socket Statistics application polls ss and scrapes socket statuses. Individual sockets and address-families may be filtered out within the script's optional configuration JSON file.
The following socket types are polled directly. Filtering a socket type will disable direct polling as well as indirect polling within any address-families that list the socket type as their child:
dccp (also exists within address-families \"inet\" and \"inet6\")\nmptcp (also exists within address-families \"inet\" and \"inet6\")\nraw (also exists within address-families \"inet\" and \"inet6\")\nsctp (also exists within address-families \"inet\" and \"inet6\")\ntcp (also exists within address-families \"inet\" and \"inet6\")\nudp (also exists within address-families \"inet\" and \"inet6\")\nxdp\n
The following socket types are polled within an address-family only:
The following address-families are polled directly and have their child socket types tab-indented below them. Filtering a socket type (see \"1\" above) will filter it from the address-family. Filtering an address-family will filter out all of its child socket types. However, if those socket types are not DIRECTLY filtered out (see \"1\" above), then they will continue to be monitored either directly or within other address-families in which they exist:
(Optional) Create a /etc/snmp/ss.json file and specify:
\"ss_cmd\" - String path to the ss binary: [\"/sbin/ss\"]
\"socket_types\" - A comma-delimited list of socket types to include. The following socket types are valid: dccp, icmp6, mptcp, p_dgr, p_raw, raw, sctp, tcp, ti_dg, ti_rd, ti_sq, ti_st, u_dgr, u_seq, u_str, udp, unknown, v_dgr, v_str, xdp. Please note that the \"unknown\" socket type is represented in /sbin/ss output with the netid \"???\". Please also note that the p_dgr and p_raw socket types are specific to the \"link\" address family; the ti_dg, ti_rd, ti_sq, and ti_st socket types are specific to the \"tipc\" address family; the u_dgr, u_seq, and u_str socket types are specific to the \"unix\" address family; and the v_dgr and v_str socket types are specific to the \"vsock\" address family. Filtering out the parent address families for the aforementioned will also filter out their specific socket types. Specifying \"all\" includes all of the socket types. For example: to include only tcp, udp, icmp6 sockets, you would specify \"tcp,udp,icmp6\": [\"all\"]
\"addr_families\" - A comma-delimited list of address families to include. The following families are valid: inet, inet6, link, netlink, tipc, unix, vsock. As mentioned above under (b), filtering out the link, tipc, unix, or vsock address families will also filter out their respective socket types. Specifying \"all\" includes all of the families. For example: to include only inet and inet6 families, you would specify \"inet,inet6\": [\"all\"]
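For instance, a minimal /etc/snmp/ss.json restricting polling to TCP and UDP sockets within the inet and inet6 families might look like this (the values shown are illustrative, not defaults):

```shell
cat > /etc/snmp/ss.json <<'EOF'
{
    "ss_cmd": "/sbin/ss",
    "socket_types": "tcp,udp",
    "addr_families": "inet,inet6"
}
EOF
```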
You will want to make sure Suricata is set to output the stats to the eve file once a minute. This helps ensure the stats entry won't be too far back in the file and will be recent when the cronjob runs.
Any configuration of suricata_stat_check should be done in the cron setup. If the default does not work, check the docs for it at MetaCPAN for suricata_stat_check
Install the agent on this device if it isn't already installed, and copy the tinydns script to /usr/lib/check_mk_agent/local/
Note: We assume that you use DJB's Daemontools to start/stop tinydns. And that your tinydns instance is located in /service/dns, adjust this path if necessary.
Replace your log's run file, typically located in /service/dns/log/run with:
#!/bin/sh\nexec setuidgid dnslog tinystats ./main/tinystats/ multilog t n3 s250000 ./main/\n
Restart TinyDNS and Daemontools: /etc/init.d/svscan restart Note: Some say svc -t /service/dns is enough, on my install (Gentoo) it doesn't rehook the logging and I'm forced to restart it entirely.
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Edit your snmpd.conf file (usually /etc/snmp/snmpd.conf) and add:
extend ups-nut /etc/snmp/ups-nut.sh\n
Restart snmpd on your host
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Optionally if you have multiple UPS or your UPS is not named APCUPS you can specify its name as an argument into /etc/snmp/ups-nut.sh
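For example, to poll a UPS named rackups (a hypothetical name) rather than the default, the snmpd.conf extend line becomes:

```shell
extend ups-nut /etc/snmp/ups-nut.sh rackups
```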
The application should be auto-discovered as described at the top of the page. If it is not, please follow the steps set out under the SNMP Extend heading at the top of the page.
Create the optional config file, /usr/local/etc/wireguard_extend.json.
key default description include_pubkey 0 Include the pubkey with the return. use_short_hostname 1 If the hostname should be shortened to just the first part. public_key_to_arbitrary_name {} A hash of pubkeys to name mappings. pubkey_resolvers Resolvers to use for the pubkeys.
The default for pubkey_resolvers is config,endpoint_if_first_allowed_is_subnet_use_hosts,endpoint_if_first_allowed_is_subnet_use_ip,first_allowed_use_hosts,first_allowed_use_ip.
resolver description config Use the mappings from .public_key_to_arbitrary_name . endpoint_if_first_allowed_is_subnet_use_hosts If the first allowed IP is a subnet, see if a matching IP can be found in hosts for the endpoint. endpoint_if_first_allowed_is_subnet_use_getent If the first allowed IP is a subnet, see if a hit can be found for the endpoint IP via getent hosts. endpoint_if_first_allowed_is_subnet_use_ip If the first allowed IP is a subnet, use the endpoint IP for the name. first_allowed_use_hosts See if a match can be found in hosts for the first allowed IP. first_allowed_use_getent Use getent hosts to try to fetch a match for the first allowed IP. first_allowed_use_ip Use the first allowed IP as the name.
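A sketch of a /usr/local/etc/wireguard_extend.json using the keys from the table above (the public key and peer name are placeholders):

```shell
cat > /usr/local/etc/wireguard_extend.json <<'EOF'
{
    "include_pubkey": 0,
    "use_short_hostname": 1,
    "public_key_to_arbitrary_name": {
        "AbCdEfPlaceholderPubkey=": "office-vpn"
    },
    "pubkey_resolvers": "config,first_allowed_use_hosts,first_allowed_use_ip"
}
EOF
```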
LibreNMS supports multiple authentication modules along with Two Factor Auth. Here we will provide configuration details for these modules. Alternatively, you can use Socialite Providers which supports a wide variety of social/OAuth/SAML authentication methods.
To enable a particular authentication module you need to set this up in config.php. Please note that only ONE module can be enabled. LibreNMS doesn't support multiple authentication mechanisms at the same time.
auth/general
lnms config:set auth_mechanism mysql\n
"},{"location":"Extensions/Authentication/#user-levels-and-user-account-type","title":"User levels and User account type","text":"
1: Normal User: You will need to assign device / port permissions for users at this level.
5: Global Read: Read only Administrator.
10: Administrator: This is a global read/write admin account.
11: Demo Account: Provides full read/write with certain restrictions (i.e can't delete devices).
Note Oxidized configs can often contain sensitive data. Because of that only Administrator account type can see configs.
"},{"location":"Extensions/Authentication/#note-for-selinux-users","title":"Note for SELinux users","text":"
When using SELinux on the LibreNMS server, you need to allow Apache (httpd) to connect to the LDAP/Active Directory server; this is disabled by default. You can use SELinux Booleans to allow network access to LDAP resources with this command:
Install php-ldap or php8.1-ldap, making sure to install the same version as PHP.
If you have issues with secure LDAP try setting
auth/ad
lnms config:set auth_ad_check_certificates 0\n
this will ignore certificate errors.
"},{"location":"Extensions/Authentication/#require-actual-membership-of-the-configured-groups","title":"Require actual membership of the configured groups","text":"
If you set auth_ad_require_groupmembership to 1, the authenticated user has to be a member of a specific group. Otherwise all users can authenticate and will be level 0, or you may set auth_ad_global_read to 1 so that all users have read only access unless otherwise specified.
Cleanup of old accounts is done by checking the authlog. You will need to set the number of days when old accounts will be purged AUTOMATICALLY by daily.sh.
Please ensure that you set the authlog_purge value to be greater than active_directory.users_purge otherwise old users won't be removed.
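For example, to keep 30 days of authlog while purging AD users after 14 days (the values are illustrative; the keys are the authlog_purge and active_directory.users_purge settings mentioned above):

```shell
lnms config:set authlog_purge 30
lnms config:set active_directory.users_purge 14
```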
Replace ad-admingroup with your Active Directory admin-user group and ad-usergroup with your standard user group. It is highly suggested to create a bind user, otherwise \"remember me\", alerting users, and the API will not work.
This yields (&(objectclass=user)(sAMAccountName=$username)) for the user filter and (&(objectclass=group)(sAMAccountName=$group)) for the group filter.
Install php-ldap or php8.1-ldap, making sure to install the same version as PHP.
For the below, keep in mind the auth DN is composed by joining auth_ldap_prefix, the username, and auth_ldap_suffix. This means the prefix needs to include = and the suffix needs to include ,. So let's say we have a prefix of uid=, the user derp, and the suffix ,ou=users,dc=foo,dc=bar; the result is uid=derp,ou=users,dc=foo,dc=bar.
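The composition can be sanity-checked in the shell; this sketch mirrors the example values in the text:

```shell
# Compose the bind DN exactly as described: prefix + username + suffix
prefix='uid='
username='derp'
suffix=',ou=users,dc=foo,dc=bar'
dn="${prefix}${username}${suffix}"
echo "$dn"
```

Running this prints uid=derp,ou=users,dc=foo,dc=bar, matching the example above.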
"},{"location":"Extensions/Authentication/#ldap-bind-user-optional","title":"LDAP bind user (optional)","text":"
If your ldap server does not allow anonymous bind, it is highly suggested to create a bind user, otherwise \"remember me\", alerting users, and the API will not work.
Please note that a mysql user is created for each user that logs in successfully. Users are assigned the user role by default, unless radius sends a reply attribute with a role.
The attribute Filter-ID is a standard Radius-Reply-Attribute (string) that can be assigned a specially formatted string to assign a single role to the user.
The string to send in Filter-ID reply attribute must start with librenms_role_ followed by the role name. For example to set the admin role send librenms_role_admin.
The following strings correspond to the built-in roles, but any defined role can be used: - librenms_role_normal - Sets the normal user level. - librenms_role_admin - Sets the administrator level. - librenms_role_global-read - Sets the global read level
LibreNMS will ignore any other strings sent in Filter-ID and revert to default role that is set in your config.
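With FreeRADIUS, for example, the reply attribute could be set in the users file like this (the username and password are placeholders; adapt to your deployment):

```shell
# FreeRADIUS users file entry (sketch): grant this user the LibreNMS admin role
alice   Cleartext-Password := "changeme"
        Filter-Id = "librenms_role_admin"
```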
$config['radius']['hostname'] = 'localhost';\n$config['radius']['port'] = '1812';\n$config['radius']['secret'] = 'testing123';\n$config['radius']['timeout'] = 3;\n$config['radius']['users_purge'] = 14; // Purge users who haven't logged in for 14 days.\n$config['radius']['default_level'] = 1; // Set the default user level when automatically creating a user.\n
Freeradius has a feature called Radius Huntgroup which allows sending different attributes based on the NAS. This may be utilized if you already use Filter-ID in your environment and also want to use radius with LibreNMS.
Cleanup of old accounts is done by checking the authlog. You will need to set the number of days when old accounts will be purged AUTOMATICALLY by daily.sh.
Please ensure that you set the $config['authlog_purge'] value to be greater than $config['radius']['users_purge'] otherwise old users won't be removed.
LibreNMS will expect the user to have authenticated via your webservice already. At this stage it will need to assign a userlevel for that user which is done in one of two ways:
A user still exists in MySQL where the usernames match up.
A global guest user (which still needs to be added into MySQL):
$config['http_auth_guest'] = \"guest\";\n
This will then assign the userlevel for guest to all authenticated users.
"},{"location":"Extensions/Authentication/#http-authentication-ad-authorization","title":"HTTP Authentication / AD Authorization","text":"
Config option: ad-authorization
This module is a combination of http-auth and active_directory
LibreNMS will expect the user to have authenticated via your webservice already (e.g. using Kerberos Authentication in Apache) but will use Active Directory lookups to determine and assign the userlevel of a user. The userlevel will be calculated by using AD group membership information as the active_directory module does.
The configuration is the same as for the active_directory module with two extra, optional options: auth_ad_binduser and auth_ad_bindpassword. These should be set to an AD user with read capabilities in your AD Domain in order to be able to perform searches. If these options are omitted, the module will attempt an anonymous bind (which then of course must be allowed by your Active Directory server(s)).
There is also one extra option for controlling user information caching: auth_ldap_cache_ttl. This option controls how long user information (user_exists, userid, userlevel) is cached within the PHP session. The default value is 300 seconds. To disable this caching (highly discouraged) set this option to 0.
This module is a combination of http-auth and ldap
LibreNMS will expect the user to have authenticated via your webservice already (e.g. using Kerberos Authentication in Apache) but will use LDAP to determine and assign the userlevel of a user. The userlevel will be calculated by using LDAP group membership information as the ldap module does.
The configuration is similar to the ldap module with one extra option: auth_ldap_cache_ttl. This option controls how long user information (user_exists, userid, userlevel) is cached within the PHP session. The default value is 300 seconds. To disable this caching (highly discouraged) set this option to 0.
$config['auth_mechanism'] = 'ldap-authorization';\n$config['auth_ldap_server'] = 'ldap.example.com'; // Set server(s), space separated. Prefix with ldaps:// for ssl\n$config['auth_ldap_suffix'] = ',ou=People,dc=example,dc=com'; // appended to usernames\n$config['auth_ldap_groupbase'] = 'ou=groups,dc=example,dc=com'; // all groups must be inside this\n$config['auth_ldap_groups']['admin']['roles'] = ['admin']; // set admin group to admin role\n$config['auth_ldap_groups']['pfy']['roles'] = ['global-read']; // set pfy group to global read only role\n$config['auth_ldap_groups']['support']['roles'] = ['user']; // set support group as a normal user\n
"},{"location":"Extensions/Authentication/#additional-options-usually-not-needed_1","title":"Additional options (usually not needed)","text":"
$config['auth_ldap_version'] = 3; # v2 or v3\n$config['auth_ldap_port'] = 389; // 389 or 636 for ssl\n$config['auth_ldap_starttls'] = True; // Enable TLS on port 389\n$config['auth_ldap_prefix'] = 'uid='; // prepended to usernames\n$config['auth_ldap_group'] = 'cn=groupname,ou=groups,dc=example,dc=com'; // generic group with level 0\n$config['auth_ldap_groupmemberattr'] = 'memberUid'; // attribute to use to see if a user is a member of a group\n$config['auth_ldap_groupmembertype'] = 'username'; // username type to find group members by, either username (default), fulldn or puredn\n$config['auth_ldap_emailattr'] = 'mail'; // attribute for email address\n$config['auth_ldap_attr.uid'] = 'uid'; // attribute to check username against\n$config['auth_ldap_userlist_filter'] = 'service=informatique'; // Replace 'service=informatique' by your ldap filter to limit the number of responses if you have an ldap directory with thousand of users\n$config['auth_ldap_cache_ttl'] = 300;\n
"},{"location":"Extensions/Authentication/#ldap-bind-user-optional_1","title":"LDAP bind user (optional)","text":"
If your ldap server does not allow anonymous bind, it is highly suggested to create a bind user, otherwise \"remember me\", alerting users, and the API will not work.
$config['auth_ldap_binduser'] = 'ldapbind'; // will use auth_ldap_prefix and auth_ldap_suffix\n#$config['auth_ldap_binddn'] = 'CN=John.Smith,CN=Users,DC=MyDomain,DC=com'; // overrides binduser\n$config['auth_ldap_bindpassword'] = 'password';\n
"},{"location":"Extensions/Authentication/#viewembedded-graphs-without-being-logged-into-librenms","title":"View/embedded graphs without being logged into LibreNMS","text":"
The single sign-on mechanism is used to integrate with third party authentication providers that are managed outside of LibreNMS - such as ADFS, Shibboleth, EZProxy, BeyondCorp, and others. A large number of these methods use SAML, and the module has been written assuming the use of SAML; these instructions therefore contain some SAML terminology, but it should be possible to use any software that works in a similar way.
In order to make use of the single sign-on module, you need to have an Identity Provider up and running, and know how to configure your Relying Party to pass attributes to LibreNMS via header injection or environment variables. Setting these up is outside of the scope of this documentation.
As this module deals with authentication, it is extremely careful about validating the configuration - if it finds that certain values in the configuration are not set, it will reject access rather than try and guess.
This, along with the defaults, sets up a basic Single Sign-on setup that:
Reads values from environment variables
Automatically creates users when they're first seen
Automatically updates users with new values
Gives everyone privilege level 10
This happens to mimic the behaviour of http-auth, so if this is the kind of setup you want, you're probably better off just using that mechanism.
If there is a proxy involved (e.g. EZProxy, Azure AD Application Proxy, NGINX, mod_proxy) it's essential that you have some means in place to prevent headers being injected between the proxy and the end user, and also prevent end users from contacting LibreNMS directly.
This should also apply to user connections to the proxy itself - the proxy must not be allowed to blindly pass through HTTP headers. modsecurity should be considered a minimum, with a full WAF being strongly recommended. This advice applies to the IDP too.
The mechanism includes very basic protection, in the form of an IP whitelist which should contain the source addresses of your proxies:
This configuration item should contain an array with a list of IP addresses or CIDR prefixes that are allowed to connect to LibreNMS and supply environment variables or headers.
If for some reason your relying party doesn't store the username in REMOTE_USER, you can override this choice.
$config['sso']['user_attr'] = 'HTTP_UID';\n
Note that the user lookup is a little special - normally headers are prefixed with HTTP_, however this is not the case for remote user - it's a special case. If you're using something different, you need to figure out yourself whether the HTTP_ prefix is required.
"},{"location":"Extensions/Authentication/#automatic-user-createupdate","title":"Automatic User Create/Update","text":"
If these are not enabled, user logins will be (somewhat silently) rejected unless an administrator has created the account in advance. Note that in the case of SAML federations, unless release of the user's true identity has been negotiated with the IDP, the username (probably ePTID) is not likely to be predictable.
As used above, static gives every single user the same privilege level. If you're working with a small team, or don't need access control, this is probably suitable.
If your Relying Party is capable of calculating the necessary privilege level, you can configure the module to read the privilege number straight from an attribute. sso_level_attr should contain the name of the attribute that the Relying Party exposes to LibreNMS - as long as sso_mode is correctly set, the mechanism should find the value.
This mechanism expects to find a delimited list of groups within the attribute that sso_group_attr points to. This should be an associative array of group name keys, with privilege levels as values. The mechanism will scan the list and find the highest privilege level that the user is entitled to, and assign that value to the user.
If there are no matches between the user's groups and the sso_group_level_map, the user will be assigned the privilege level specified in the sso_static_level variable, with a default of 0 (no access). This feature can be used to provide a default access level (such as read-only) to all authenticated users.
Additionally, this format may be specific to Shibboleth; other relying party software may need changes to the mechanism (e.g. mod_auth_mellon may create pseudo arrays).
There is an optional value for sites with large numbers of groups:
LibreNMS has no capability to log out a user authenticated via Single Sign-On - that responsibility falls to the Relying Party.
If your Relying Party has a magic URL that needs to be called to end a session, you can configure LibreNMS to direct the user to it:
# Example for Shibboleth\n$config['auth_logout_handler'] = '/Shibboleth.sso/Logout';\n\n# Example for oauth2-proxy\n$config['auth_logout_handler'] = '/oauth2/sign_out';\n
This option functions independently of the Single Sign-on mechanism.
LibreNMS provides the ability to automatically add devices on your network. This can be done via a few methods, which are explained below along with whether they are enabled by default.
All discovery methods run when discovery runs (every 6 hours by default and within 5 minutes for new devices).
Please note that you need at least ONE device added before auto-discovery will work.
The first thing to do though is add the required configuration options to config.php.
"},{"location":"Extensions/Auto-Discovery/#additional-options","title":"Additional Options","text":""},{"location":"Extensions/Auto-Discovery/#discovering-devices-by-ip","title":"Discovering devices by IP","text":"
By default we don't add devices by IP address; we look for a reverse DNS name and add the device with that. If this fails and you would still like to add devices automatically, you will need to set $config['discovery_by_ip'] = true;
By default we require unique sysNames when adding devices (this is returned over snmp by your devices). If you would like to allow devices to be added with duplicate sysNames then please set
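Assuming the standard option name (verify allow_duplicate_sysName against your version's configuration reference), this can be done with:

```shell
lnms config:set allow_duplicate_sysName true
```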
$config['autodiscovery']['xdp'] = false; to disable.
This includes FDP, CDP and LLDP support based on the device type.
The LLDP/xDP links with neighbours will always be discovered as soon as the discovery module is enabled. However, LibreNMS will only try to add the new devices discovered with LLDP/xDP if $config['autodiscovery']['xdp'] = true;.
Devices may be excluded from xdp discovery by sysName and sysDescr.
//Exclude devices by name\n$config['autodiscovery']['xdp_exclude']['sysname_regexp'][] = '/host1/';\n$config['autodiscovery']['xdp_exclude']['sysname_regexp'][] = '/^dev/';\n\n//Exclude devices by description\n$config['autodiscovery']['xdp_exclude']['sysdesc_regexp'][] = '/Vendor X/';\n$config['autodiscovery']['xdp_exclude']['sysdesc_regexp'][] = '/Vendor Y/';\n
Devices may be excluded from cdp discovery by platform.
//Exclude devices by platform(Cisco only)\n$config['autodiscovery']['cdp_exclude']['platform_regexp'][] = '/WS-C3750G/';\n
These devices are excluded by default:
$config['autodiscovery']['xdp_exclude']['sysdesc_regexp'][] = '/-K9W8/'; // Cisco Lightweight Access Point\n$config['autodiscovery']['cdp_exclude']['platform_regexp'][] = '/^Cisco IP Phone/'; //Cisco IP Phone\n
Apart from the aforementioned Auto-Discovery options, LibreNMS is also able to proactively scan a network for SNMP-enabled devices using the configured version/credentials.
SNMP Scan will scan nets by default and respects autodiscovery.nets-exclude.
To run the SNMP-Scanner you need to execute the snmp-scan.py from within your LibreNMS installation directory.
Here the script's help-page for reference:
usage: snmp-scan.py [-h] [-t THREADS] [-g GROUP] [-l] [-v] [--ping-fallback] [--ping-only] [-P] [network ...]\n\nScan network for snmp hosts and add them to LibreNMS.\n\npositional arguments:\n network CIDR noted IP-Range to scan. Can be specified multiple times\n This argument is only required if 'nets' config is not set\n Example: 192.168.0.0/24\n Example: 192.168.0.0/31 will be treated as an RFC3021 p-t-p network with two addresses, 192.168.0.0 and 192.168.0.1\n Example: 192.168.0.1/32 will be treated as a single host address\n\noptional arguments:\n -h, --help show this help message and exit\n -t THREADS How many IPs to scan at a time. More will increase the scan speed, but could overload your system. Default: 32\n -g GROUP The poller group all scanned devices will be added to. Default: The first group listed in 'distributed_poller_group', or 0 if not specificed\n -l, --legend Print the legend.\n -v, --verbose Show debug output. Specifying multiple times increases the verbosity.\n --ping-fallback Add the device as an ICMP only device if it replies to ping but not SNMP.\n --ping-only Always add the device as an ICMP only device.\n -P, --ping Deprecated. Use --ping-fallback instead.\n
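A typical invocation, run from the LibreNMS install directory (the path and network are examples):

```shell
cd /opt/librenms
./snmp-scan.py -t 32 --ping-fallback 192.168.0.0/24
```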
Newly discovered devices will be added to the default_poller_group, this value defaults to 0 if unset.
When using distributed polling, this value can be changed locally by setting $config['default_poller_group'] in config.php or globally by using lnms config:set.
# Set the compact view mode for the availability map\nlnms config:set webui.availability_map_compact false\n\n# Size of the box for each device in the availability map (not compact)\nlnms config:set webui.availability_map_box_size 165\n\n# Sort by status instead of hostname\nlnms config:set webui.availability_map_sort_status false\n\n# Show the device group drop-down on the availability map page\nlnms config:set webui.availability_map_use_device_groups true\n
With the billing module you can create a bill, assign a quota to it and add ports to it. It then tracks the ports usage and shows you the usage in the bill, including any overage. Accounting by both total transferred data and 95th percentile is supported.
To enable and use the billing module you need to perform the following steps:
Edit config.php and add (or enable) the following line near the end of the config
Billing data is stored in the MySQL database, and you may wish to purge the detailed stats for old data (per-month totals will always be kept). To enable this, add the following to config.php:
$config['billing_data_purge'] = 12; // Number of months to retain\n
Data for the last complete billing cycle will always be retained - only data older than this by the configured number of months will be removed. This task is performed in the daily cleanup tasks.
For 95th Percentile billing, the default behavior is to use the highest of the input or output 95th Percentile calculation.
To instead use the combined total of input + output to derive the 95th percentile, set 95th Calculation to \"Aggregate\"; this can be changed on a per-bill basis.
To change the default option to Aggregate, add the following to config.php:
$config['billing']['95th_default_agg'] = 1; // Set aggregate 95th as default\n
This configuration setting is cosmetic and only changes the default selected option when adding a new bill.
The Component extension provides a generic database storage mechanism for discovery and poller modules. The driver behind this extension was to provide the features of ports, in a generic manner, to discovery/poller modules.
It provides a status (Nagios convention), the ability to Disable (do not poll), or Ignore (do not Alert).
When the data from both the component and component_prefs tables is returned in one single consolidated array, there is the potential for someone to attempt to set an attribute (in the component_prefs table) that is used in the component table. Because of this, all fields of the component table are reserved and cannot be used as custom attributes; if you update these, the module will attempt to write them to the component table, not the component_prefs table.
"},{"location":"Extensions/Component/#edit-the-array","title":"Edit the Array","text":"
Once you have a component array from getComponents the first thing to do is extract the components for only the single device you are editing. This is required because the setComponentPrefs function only saves a single device at a time.
When writing the component array there are several caveats to be aware of, these are:
$ARRAY must be in the format of a single device ID - $ARRAY[$COMPONENT_ID][Attribute] = 'Value'; NOT in the multi device format returned by getComponents - $ARRAY[$DEVICE_ID][$COMPONENT_ID][Attribute] = 'Value';
You cannot edit the Component ID or the Device ID
reserved fields can not be removed
if a change is found an entry will be written to the eventlog.
It is intended that discovery/poller modules will detect the status of a component during the polling cycle. Status is logged using the Nagios convention for status codes, where:
0 = Ok,\n1 = Warning,\n2 = Critical\n
If you are creating a poller module which can detect a fault condition simply set STATUS to something other than 0 and ERROR to a message that indicates the problem.
To actually raise an alert, the user will need to create an alert rule. To assist with this, several Alerting Macros have been created:
%macros.component_normal - A component that is not disabled or ignored and is in a Normal state.
%macros.component_warning - A component that is not disabled or ignored and is in a Warning state.
%macros.component_critical - A component that is not disabled or ignored and is in a Critical state.
To raise alerts for components, the following rules could be created:
%macros.component_critical = \"1\" - To alert on all Critical components
%macros.component_critical = \"1\" && %component.type = \"<Type of Component>\" - To alert on all Critical components of a particular type.
If there is a particular component you would like excluded from alerting, simply set the ignore field to 1.
The data that is written to each alert when it is raised is in the following format:
LibreNMS has the ability to create custom maps to give a quick overview of parts of the network including up/down status of devices and link utilisation. These are also referred to as weather maps.
Once some maps have been created, they will be visible to any users who have read access to all devices on a given map. Custom maps are available through the Overview -> Maps -> Custom Maps menu.
Some key points about the viewer are:
Nodes will change colour if they are down or disabled
Links are only associated with a single network interface
Link utilisation can only be shown if the link speed is known
Link speed is decoded from SNMP if possible (Upload/Download) and defaults to the physical speed if SNMP data is not available, or cannot be decoded
Links will change colour as follows:
Black if the link is down, or the max speed is unknown
Green at 0% utilisation, with a gradual change to
Yellow at 50% utilisation, with a gradual change to
Orange at 75% utilisation, with a gradual change to Red at 100% utilisation
To access the custom map editor, a user must be an admin. The editor is accessed through the Overview -> Maps -> Custom Map Editor menu.
Once you are in the editor, you will be given a drop-down list of all the custom maps so you can choose one to edit, or select \"Create New Map\" to create a new map.
When you create a new map, you will be presented with a page to set some global map settings. These are:
Name: The name for the map
Width: The width of the map in pixels
Height: The height of the map in pixels
Node Alignment: When devices are added to the map, this will align the devices to an invisible grid this many pixels wide, which can help to make the maps look better. This can be set to 0 to disable.
Background: An image (PNG/JPG) up to 2MB can be uploaded as a background.
These settings can be changed at any stage by clicking on the \"Edit Map Settings\" button in the top-left of the editor.
Once you have a map, you can start by adding \"nodes\" to the map. A node represents a device, or an external point in the network (e.g. the internet). To add a node, click on the \"Add Node\" button in the control bar, then click on the map area where you want to add the node. You will then be asked for the following information:
Label: The text to display on this point in the network
Device: If this node represents a device, you can select the device from the drop-down. This will overwrite the label, which you can then change if you want to.
Style: You can select the style of the node. If a device has been selected you can choose the LibreNMS icon by choosing \"Device Image\". You can also choose \"Icon\" to select an image for the device.
Icon: If you choose \"Icon\" in the style box, you can select from a list of images to represent this node
There are also options to choose the size and colour of the node and the font.
Once you have finished choosing the options for the node, you can press Save to add it to the map. NOTE: This does not save anything to the database immediately. You need to click on the \"Save Map\" button in the top-right to save your changes to the database.
You can edit a node at any time by selecting it on the map and clicking on the \"Edit Node\" button in the control bar.
You can also modify the default settings for all new nodes by clicking on the \"Edit Node Default\" button at the top of the page.
Once you have 2 or more nodes, you can add links between the nodes. These are called edges in the editor. To add a link, click on the \"Add Edge\" button in the control bar, then click on one of the nodes you want to link and drag the cursor to the second node that you want to link. You will then be prompted for the following information:
From: The node that the link runs from (it will default to the first node you selected)
To: The node that the link runs to (it will default to the second node you selected)
Port: If the From or To node is linked to a device, you can select an interface from one of the devices and the custom map will show traffic utilisation for the selected interface.
Reverse Port Direction: If the selected port displays data in the wrong direction for the link, you can reverse it by toggling this option.
Line Style: You can try different line styles, especially if you are running multiple links between the same 2 nodes
Show percent usage: Choose whether to have text on the lines showing the link utilisation as a percentage
Recenter Line: If you tick this box, the centre point of the line will be moved back to halfway between the 2 nodes when you click on the save button.
Once you have finished choosing the options for the edge, you can press Save to add it to the map. NOTE: This does not save anything to the database immediately. You need to click on the "Save Map" button in the top-right to save your changes to the database.
Once you press Save, three objects will be created on the screen: two arrows and a round node in the middle. Having the three objects allows you to move the midpoint of the line off centre, and also allows us to display bandwidth information for both directions of the link.
You can edit an edge at any time by selecting it on the map and clicking on the \"Edit Edge\" button in the control bar.
You can also modify the default settings for all new edges by clicking on the \"Edit Edge Default\" button at the top of the page.
When you drag items around the map, some of the lines will bend. This will cause a \"Re-Render Map\" button to appear at the top-right of the page. This button can be clicked on to cause all lines to be re-drawn the way they will be shown in the viewer.
Once you are happy with a set of changes that you have made, you can click on the \"Save Map\" button in the top-right of the page to commit changes to the database. This will cause anyone viewing the map to see the new version the next time their page refreshes.
You can add your own images to use on the custom map by copying files into the html/images/custommap/icons/ directory. Any files with a .svg, .png or .jpg extension will be shown in the image selection drop-down in the custom map editor.
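For example, adding a custom icon could look like this (a sketch assuming a default /opt/librenms install and a librenms web-server user; adjust both for your system):

```
cp router.svg /opt/librenms/html/images/custommap/icons/
chown librenms:librenms /opt/librenms/html/images/custommap/icons/router.svg
chmod 644 /opt/librenms/html/images/custommap/icons/router.svg
```

The file will then appear in the image selection drop-down the next time the editor is loaded.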
"},{"location":"Extensions/Customizing-the-Web-UI/","title":"Customizing the Web UI","text":""},{"location":"Extensions/Customizing-the-Web-UI/#custom-menu-entry","title":"Custom menu entry","text":"
Create the file resources/views/menu/custom.blade.php
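A minimal example of what such a file might contain (the URL and label are placeholders, not part of LibreNMS):

```
<li><a href="https://example.com" target="_blank">My Custom Link</a></li>
```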
"},{"location":"Extensions/Customizing-the-Web-UI/#custom-device-menu-action","title":"Custom device menu action","text":"
You can add custom external links in the menu on the device page.
This feature allows you to easily link applications to related systems, as shown in the example of Open-audIT.
The url value is parsed by the Laravel Blade templating engine. You can access device variables such as $device->hostname, $device->sysName and use full PHP.
Field | Description
----- | -----------
url | Url blade template resulting in valid url. Required.
title | Title text displayed in the menu. Required.
icon | Font Awesome icon class. Default: fa-external-link
external | Open link in new window. Default: true
action | Show as action on device list. Default: false"},{"location":"Extensions/Customizing-the-Web-UI/#launching-windows-programs-from-the-librenms-device-menu","title":"Launching Windows programs from the LibreNMS device menu","text":"
You can launch Windows programs from links in LibreNMS, but it does take some registry entries on the client device. Save the following as winbox.reg, edit it for your winbox.exe path, and double-click to add it to your registry.
"},{"location":"Extensions/Customizing-the-Web-UI/#setting-the-primary-device-menu-action","title":"Setting the primary device menu action","text":"
You can change the icon that is clickable in the device list without having to open the dropdown menu. The primary button is edit device by default.
settings/webui/device
lnms config:set html.device.primary_link web\n
Value | Description
----- | -----------
edit | Edit device
web | Connect to the device via https/http
ssh | Launch ssh:// protocol to the device, make sure you have a handler registered
telnet | Launch telnet:// protocol to the device
capture | Link to the device capture page
custom1 | Custom Link 1
custom2 | Custom Link 2
custom3 | Custom Link 3
custom4 | Custom Link 4
custom5 | Custom Link 5
custom6 | Custom Link 6
custom7 | Custom Link 7
custom8 | Custom Link 8
!!! Custom http, ssh, telnet ports
Custom ports can be set through the device settings misc tab and will be appended to the URI. An empty value will not append anything and automatically defaults to the standard port. - A custom ssh port set to 2222 will result in ssh://10.0.0.0:2222 - A custom telnet port set to 2323 will result in telnet://10.0.0.0:2323
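The resulting URI can be illustrated with a small shell sketch (hypothetical host and port values; this is not LibreNMS code):

```shell
# Build a primary-link URI from a host and an optional custom port
HOST=10.0.0.1
SSH_PORT=2222                 # leave empty to fall back to the standard port
URI="ssh://${HOST}${SSH_PORT:+:$SSH_PORT}"
echo "$URI"   # ssh://10.0.0.1:2222
```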
Create customised dashboards in LibreNMS per user. You can share dashboards with other users. You can also make a custom dashboard and default it for all users in LibreNMS.
LibreNMS has a whole list of Widgets to select from.
Alerts Widget: Displays all alert notifications.
Availability Map: Displays all devices with colored tiles: green for up, yellow for warning (device has been restarted in the last 24 hours), red for down. You can also list all services and ignored/disabled devices in this widget.
Components Status: Lists all components in Ok, Warning, or Critical state.
Device Summary horizontal: Lists device totals: up, down, ignored, disabled. Same for ports and services.
Device Summary vertical: Lists device totals: up, down, ignored, disabled. Same for ports and services.
Eventlog: Displays all events with your devices and LibreNMS.
External Image: Can be used to show external images on your dashboard, or images from inside LibreNMS.
Globe Map: Will display a map of the globe.
Graph: Can be used to display graphs from devices.
Graylog: Displays all Graylog's syslog entries.
Notes: Use for HTML tags, embedded links, and external web pages, or just notes in general.
Server Stats: Will display gauges for CPU, Memory, Storage usage. Note the device type has to be listed as \"Server\".
Syslog: Displays all syslog entries.
Top Devices: By Traffic, or Uptime, or Response time, or Poller Duration, or Processor load, or Memory Usage, or Storage Usage.
Top Interfaces: Lists top interfaces by traffic utilization.
World Map: Displays all your devices' locations, from sysLocation or from the sysLocation override.
<iframe src=\"your_url\" frameBorder=\"0\" width=\"100%\" height = \"100%\">\n <p>Your browser does not support iframes.</p>\n</iframe>\n
Note that you may need to play with the width and height, and also size your widget properly.
src="url" needs to be the URL of the web page you are linking to. Also, some web pages may not support being embedded via HTML or an iframe.
"},{"location":"Extensions/Dashboards/#how-to-create-ports-graph","title":"How to create ports graph","text":"
In the dashboard where you want to create an interface graph, select the widget called
'Graph', then select "Port" -> "Bits"
Note: you can map the port by description, by alias, or by port id. You will need to know one of these in order to map the port to the graph.
"},{"location":"Extensions/Dashboards/#dimension-parameter-replacement-for-generic-image-widget","title":"Dimension parameter replacement for Generic-image widget","text":"
When using the Generic-image widget you can provide the width and height of the widget with your request. This will ensure that the image fits nicely with the dimensions of the Generic-image widget. You can add @AUTO_HEIGHT@ and @AUTO_WIDTH@ to the Image URL as parameters.
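For example, an image URL might look like this (the host and path are placeholders):

```
https://example.com/render/graph.png?width=@AUTO_WIDTH@&height=@AUTO_HEIGHT@
```

The placeholders are substituted with the widget's actual pixel dimensions when the request is made.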
For Dell OpenManage support you will need to install Dell OpenManage (yeah - really :)) (minimum 5.1) onto the device you want to monitor. Ensure that net-snmp is using srvadmin, you should see something similar to:
master agentx\nview all included .1\naccess notConfigGroup \"\" any noauth exact all none none\nsmuxpeer .1.3.6.1.4.1.674.10892.1\n
Restart net-snmp:
service snmpd restart\n
Ensure that srvadmin is started, this is usually done by executing:
Download OpenManage from Dell's support page Link and install OpenManage on your windows server. Make sure you have SNMP setup and running on your windows server.
LibreNMS has the ability to show you a dynamic network map based on device dependencies that have been configured. These maps are accessed through the following menu options:
The rule is based on the MySQL structure your data is in, such as tablename.columnname. If you already know the entity you want, you can browse around inside MySQL using show tables and desc <tablename>.
As a working example and a common question, let's assume you want to group devices by hostname. If your hostname format is dcX.[devicetype].example.com, you would use the field devices.hostname.
If you want to group them by device type, you would add a rule for routers of devices.hostname endswith rtr.example.com.
If you want to group them by DC, you could use the rule devices.hostname regex dc1\\..*\\.example\\.com (Don't forget to escape periods in the regex)
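You can sanity-check such a regex offline before saving the rule (illustrative; LibreNMS evaluates the pattern in SQL, but grep -E syntax is close enough for a quick test):

```shell
# Sample hostnames tested against the DC-grouping pattern from above
pattern='dc1\..*\.example\.com'
echo "dc1.rtr01.example.com" | grep -Eq "$pattern" && echo "matches"
echo "dc2.rtr01.example.com" | grep -Eq "$pattern" || echo "does not match"
```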
You can create static groups (and convert dynamic groups to static) to put specific devices in a group. Just select static as the type and select the devices you want in the group.
You can now select this group from the Devices -> All Devices link in the navigation at the top. You can also use the group to map alert rules to by creating an alert mapping Overview -> Alerts -> Rule Mapping.
The LibreNMS dispatcher service (librenms-service.py) is a new method of running the poller service at set times. It does not replace the php scripts, just the cron entries running them.
"},{"location":"Extensions/Dispatcher-Service/#external-requirements","title":"External Requirements","text":""},{"location":"Extensions/Dispatcher-Service/#a-recent-version-of-python","title":"A recent version of Python","text":"
The LibreNMS service requires Python 3, and some features require behaviour only found in Python 3.4+.
If you want to use distributed polling, you'll need a Redis instance to coordinate the nodes. It's recommended that you do not share the Redis database with any other system - by default, Redis supports up to 16 databases (numbered 0-15). You can also use Redis on a single host if you want.
It's strongly recommended that you deploy a resilient cluster of redis systems, and use redis-sentinel.
You should not rely on the password for the security of your system. See https://redis.io/topics/security
LibreNMS can still use memcached as a locking mechanism when using distributed polling. So you can configure memcached for this purpose unless you have updates disabled.
See Locking Mechanisms at https://docs.librenms.org/Extensions/Distributed-Poller/
You should already have this, but the pollers do need access to the SQL database. The LibreNMS service runs faster and more aggressively than the standard poller, so keep an eye on the number of open connections and other important health metrics.
Connection settings are required in .env. The .env file is generated after composer install and APP_KEY and NODE_ID are set. Remember that the APP_KEY value must be the same on all your pollers.
#APP_KEY= #Required, generated by composer install\n#NODE_ID= #Required, generated by composer install\n\nDB_HOST=localhost\nDB_DATABASE=librenms\nDB_USERNAME=librenms\nDB_PASSWORD=\n
Once you have your Redis database set up, configure it in the .env file on each node. Configure the redis cache driver for distributed locking.
There are a number of options - most of them are optional if your redis instance is standalone and unauthenticated (neither recommended).
##\n## Standalone\n##\nREDIS_HOST=127.0.0.1\nREDIS_PORT=6379\nREDIS_DB=0\nREDIS_TIMEOUT=60\n\n# If requirepass is set in redis set everything above as well as: (recommended)\nREDIS_PASSWORD=PasswordGoesHere\n\n# If ACL's are in use, set everything above as well as: (highly recommended)\nREDIS_USERNAME=UsernameGoesHere\n\n##\n## Sentinel\n##\nREDIS_SENTINEL=redis-001.example.org:26379,redis-002.example.org:26379,redis-003.example.org:26379\nREDIS_SENTINEL_SERVICE=mymaster\n\n# If requirepass is set in sentinel, set everything above as well as: (recommended)\nREDIS_SENTINEL_PASSWORD=SentinelPasswordGoesHere\n\n# If ACL's are in use, set everything above as well as: (highly recommended)\nREDIS_SENTINEL_USERNAME=SentinelUsernameGoesHere\n
For more information on ACL's, see https://redis.io/docs/management/security/acl/
Note that if you use Sentinel, you may still need REDIS_PASSWORD, REDIS_USERNAME, REDIS_DB and REDIS_TIMEOUT - Sentinel just provides the address of the instance currently accepting writes and manages failover. It's possible (and recommended) to have authentication both on Sentinel and the managed Redis instances.
There are also some SQL options, but these should be inherited from your LibreNMS web UI configuration.
Logs are sent to the system logging service (usually journald or rsyslog) - see https://docs.python.org/3/library/logging.html#logging-levels for the options available.
$config['distributed_poller'] = true; # Set to true to enable distributed polling\n$config['distributed_poller_name'] = php_uname('n'); # Uniquely identifies the poller instance\n$config['distributed_poller_group'] = 0; # Which group to poll\n
"},{"location":"Extensions/Dispatcher-Service/#tuning-the-number-of-workers","title":"Tuning the number of workers","text":"
See https://your_librenms_install/poller
You want to keep Consumed Worker Seconds comfortably below Maximum Worker Seconds. The closer the values are to each other, the flatter the CPU graph of the poller machine, meaning that you are utilizing your CPU resources well. As long as Consumed WS stays below Maximum WS and Devices Pending is 0, you should be ok.
If Consumed WS is below Maximum WS and Devices Pending is > 0, your hardware is not up to the task.
Maximum WS equals the number of workers multiplied by the number of seconds in the polling period (default 300).
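For example, a poller configured with 16 workers on the default 300-second polling period:

```shell
# Maximum Worker Seconds = workers * polling period
WORKERS=16
PERIOD=300
MAX_WS=$((WORKERS * PERIOD))
echo "$MAX_WS"   # 4800
```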
The watchdog scheduler is disabled by default. You can enable it by setting the following:
$config['service_watchdog_enabled'] = true;\n
The watchdog scheduler will check that the poller log file has been written to within the last poll period. If there is no change to the log file since, the watchdog will restart the polling service. The poller log file is set by $config['log_file'] and defaults to ./logs/librenms.log
Once the LibreNMS service is installed, the cron scripts used by LibreNMS to start alerting, polling, discovery and maintenance tasks are no longer required and must be disabled either by removing or commenting them out. The service handles these tasks when enabled.
"},{"location":"Extensions/Dispatcher-Service/#systemd-service-with-watchdog","title":"systemd service with watchdog","text":"
This service file is an alternative to the above service file. It uses the systemd WatchdogSec= option to restart the service if it does not receive a keep-alive from the running process.
A systemd unit file can be found in misc/librenms-watchdog.service. To install run:
This requires python3-systemd (or python-systemd on older systems), or https://pypi.org/project/systemd-python/. If you run this systemd service without python3-systemd, it will restart every 30 seconds.
* may only be installed on one server (however, some can be clustered)
Distributed Polling allows the workers to be spread across additional servers for horizontal scaling. Distributed polling is not intended for remote polling.
Devices can be grouped together into a poller_group to pin these devices to a single or a group of designated pollers.
All pollers need to write to the same set of RRD files, preferably via RRDcached.
It is also a requirement that at least one locking service is in place to which all pollers can connect. There are currently three locking mechanisms available:
memcached
redis (preferred)
sql locks (default)
All of the above locking mechanisms are natively supported in LibreNMS. If none are specified, it will default to using SQL.
"},{"location":"Extensions/Distributed-Poller/#requirements-for-distributed-polling","title":"Requirements for distributed polling","text":"
These requirements are above the normal requirements for a full LibreNMS install.
rrdtool version 1.4 or above
At least one locking mechanism configured
a rrdcached install
By default, all hosts are shared and have the poller_group = 0. To pin a device to a poller, set it to a value greater than 0 and set the same value in the poller's config with distributed_poller_group. One can also specify a comma separated string of poller groups in distributed_poller_group. The poller will then poll devices from any of the groups listed. If new devices get added from the poller they will be assigned to the first poller group in the list unless the group is specified when adding the device.
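For example, a poller that should handle devices pinned to groups 2 and 3 could use the distributed_poller_group setting described above in its config.php (the group numbers are illustrative):

```php
$config['distributed_poller_group'] = '2,3'; # poll devices in groups 2 and 3
```

New devices added from this poller would be assigned to group 2, the first group in the list.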
The following is a standard config, combined with a locking mechanism below:
Preferably you should set the memcached server settings via the web UI. Under Settings > Global Settings > Distributed poller, you fill out the memcached host and port, and then in your .env file you will need to add:
CACHE_DRIVER=memcached\n
If you want to use memcached, you will also need to install an additional Python 3 python-memcached package."},{"location":"Extensions/Distributed-Poller/#example-setups","title":"Example Setups","text":""},{"location":"Extensions/Distributed-Poller/#openstack","title":"OpenStack","text":"
Below is an example setup based on a real deployment which at the time of writing covers over 2,500 devices and 50,000 ports. The setup is running within an OpenStack environment with some commodity hardware for remote pollers. Here's a diagram of how you can scale LibreNMS out:
This is a distributed setup that I created for a regional hybrid ISP (fixed wireless/fiber optic backhaul). It was created at around the ~4,000 device mark to transition from multiple separate instances to one more central. When I left the company, it was monitoring:
* 10,800 devices
* 307,700 ports
* 37,000 processors
* 17,000 wireless sensors
* ~480,000 other objects/sensors.
As our goal was more to catch alerts and monitor overall trends we went with a 10 minute polling cycle. Polling the above would take roughly 8 minutes and 120GHz worth of CPU across all VMs. CPUs were older Xeons (E5). The diagram below shows the CPU and RAM utilization of each VM during polling. Disk space utilization for SQL/RRD is also included.
Device discovery was split off into its own VM as that process would take multiple hours.
Workers were assigned in the following way:
Web/RRD Server:
alerting: 1
billing: 2
discovery: 0
ping: 1
poller: 10
services: 16
Discovery Server:
alerting: 1
billing: 2
discovery: 60
ping: 1
poller: 5
services: 8
Pollers
alerting: 1
billing: 2
discovery: 0
ping: 1
poller: 40
services: 8
Each poller had on average 19,500/24,000 worker seconds consumed.
RRDCached is incredibly important; this setup ran on spinning disks due to the wonders of caching.
I very strongly recommend setting up recursive DNS on your discovery and polling servers. While I used DNSMASQ, there are many options.
SQL tuner will help you quite a bit. You'll also want to increase your maximum connections amount to support the pollers. This setup was at 500. Less important, but putting ~12GB of the database in RAM was reported to have helped web UI performance as well as some DB-heavy Tableau reports. RAM was precious in this environment or it would've been more, but it wasn't necessary either.
Be careful with keeping the default value for 'Device Down Retry' as it can eat up quite a lot of poller activity. I freed up over 20,000 worker seconds when setting this to only happen once or twice per 10-minute polling cycle. The impact of this will vary depending on the percentage of down devices in your system. This example had it set at 400 seconds.
Also be wary of keeping event log and syslog entries for too long as it can have a pretty negative effect on web UI performance.
To resolve an issue with large device groups, the PHP FPM max_input_vars was increased to 20000.
All of these VMs were within the same physical data center so latency was minimal.
The decision of redis over the other locking methods was arbitrary but in over 2 years I never had to touch that VM aside from security updates.
How you set the distribution up is entirely up to you. You can choose to host the majority of the required services on a single virtual machine or server and then a poller to actually query the devices being monitored, all the way through to having a dedicated server for each of the individual roles. Below are notes on what you need to consider both from the software layer, but also connectivity.
"},{"location":"Extensions/Distributed-Poller/#web-api-layer","title":"Web / API Layer","text":"
This is typically Apache, but we have setup guides for both Nginx and Lighttpd, which should work perfectly fine. There is nothing unique about the role this service is providing, except that if you are adding devices from this layer then the web service will need to be able to connect to the end device via SNMP and perform an ICMP test.
It is advisable to run RRDCached within this setup so that you don't need to share the rrd folder via a remote file share such as NFS. The web service can then generate rrd graphs via RRDCached. If RRDCached isn't an option then you can mount the rrd directory to read the RRD files directly.
Central storage should be provided so all RRD files can be read from and written to in one location. As suggested above, it's recommended that RRD Cached is configured and used.
For this example, we are running RRDCached to allow all pollers and web/api servers to read/write to the rrd files with the rrd directory also exported by NFS for simple access and maintenance.
Pollers can be installed and run from anywhere, the only requirements are:
They can access the Memcache instance
They can create RRD files via some method such as a shared filesystem or RRDTool >=1.5.5
They can access the MySQL server
You can either assign pollers into groups and set a poller group against certain devices, meaning those devices will only be processed by certain pollers (the default poller group is 0), or you can assign all pollers to the default poller group for them to process any and all devices.
This will provide the ability to have a single poller behind a NAT firewall monitor internal devices and report back to your central system. You will then be able to monitor those devices from the Web UI as normal.
Another benefit to this is that you can provide N+x pollers, i.e. if you know that you require three pollers to process all devices within 300 seconds, then adding a 4th poller will mean that should any single poller fail, the remaining three will complete polling in time. You could also use this to take a poller out of service for maintenance, i.e. OS updates and software updates.
It is extremely advisable to either run a central recursive dns server such as pdns-recursor and have all of your pollers use this or install a recursive dns server on each poller - the volume of DNS requests on large installs can be significant and will slow polling down enough to cause issues with a large number of devices.
A last note to make sure of, is that all pollers writing to the same DB need to have the same APP_KEY value set in the .env file.
How you configure your discovery processes will depend on your setup.
Cron based polling
It's not necessary to run discovery services on all pollers. In fact, you should only run one discovery process per poller group. Designate a single poller to run discovery (or a separate server if required).
If you run billing, you can do this in one of two ways:
Run poll-billing.php and calculate-billing.php on a single poller, which will create billing information for all bills. Please note this poller must have SNMP access to all of your devices which have ports within a bill.
The other option is to enable $config['distributed_billing'] = true; in config.php. Then run poll-billing.php on a single poller per group. You can run calculate-billing.php on any poller but only one poller overall.
Dispatcher service When using the dispatcher service, discovery can run on all nodes.
Normally, LibreNMS sends an ICMP ping to the device before polling to check if it is up or down. This check is tied to the poller frequency, which is normally 5 minutes. This means it may take up to 5 minutes to find out if a device is down.
Some users may want to know if devices stop responding to ping more quickly than that. LibreNMS offers a ping.php script to run ping checks as quickly as possible without increasing snmp load on your devices by switching to 1 minute polling.
WARNING: If you do not have an alert rule that alerts on device status, enabling this will be a waste of resources. You can find one in the Alert Rules Collection.
"},{"location":"Extensions/Fast-Ping-Check/#setting-the-ping-check-to-1-minute","title":"Setting the ping check to 1 minute","text":"
1: If you are using RRDCached, stop the service.
- This will flush all pending writes so that the rrdstep.php script can change the steps.\n
2: Change the ping_rrd_step setting in config.php
poller/rrdtool
lnms config:set ping_rrd_step 60\n
3: Update the rrd files to change the step (step is hardcoded at file creation in rrd files)
./scripts/rrdstep.php -h all\n
4: Add the following line to /etc/cron.d/librenms to allow 1 minute ping checks
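The cron line could look like this (assuming an /opt/librenms install and a librenms user; adjust both for your system):

```
* * * * * librenms /opt/librenms/ping.php >> /dev/null 2>&1
```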
NOTE: If you are using distributed pollers you can restrict a poller to a group by appending -g to the cron entry. Alternatively, you should only run ping.php on a single node.
Cron only has a resolution of one minute, so for sub-minute ping checks we need to adapt both ping and alerts entries. We add two entries per function, but add a delay before one of these entries.
Remember, you need to remove the original ping.php and alerts.php entries in crontab before proceeding!
1: Set ping_rrd_step
poller/rrdtool
lnms config:set ping_rrd_step 30\n
2: Update the rrd files
./scripts/rrdstep.php -h all\n
3: Update cron (removing any other ping.php or alert.php entries)
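The two staggered entries could look like this (assuming /opt/librenms and the librenms user; the second entry sleeps 30 seconds so the checks land half a period apart):

```
* * * * * librenms /opt/librenms/ping.php >> /dev/null 2>&1
* * * * * librenms sleep 30; /opt/librenms/ping.php >> /dev/null 2>&1
```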
The ping.php script respects device dependencies, but the main poller does not (for technical reasons). However, using this script does not disable the icmp check in the poller and a child may be reported as down before the parent.
ping.php uses much the same settings as the poller fping with one exception: retries is used instead of count. ping.php does not measure loss and avg response time, only up/down, so once a device responds it stops pinging it.
This is currently being tested, use at your own risk.
LibreNMS can be used with a MariaDB Galera Cluster. This is a Multi Master cluster, meaning each node in the cluster can read and write to the database. They all have the same ability. LibreNMS will randomly choose a working node to read and write requests to.
For more information see https://laravel.com/docs/database#read-and-write-connections
It is best practice to have a minimum of 3 nodes in the cluster. An odd number of nodes is recommended so that, in the event nodes disagree on data, they have a tie breaker.
It's recommended that all servers be similar in hardware performance, cluster performance can be affected by the slowest server in the cluster.
Back up the database before starting; backing up the database regularly is still recommended even in a working cluster environment.
"},{"location":"Extensions/Galera-Cluster/#install-and-configure-galera","title":"Install and Configure Galera","text":""},{"location":"Extensions/Galera-Cluster/#install-galera4-and-mariadb-server","title":"Install Galera4 and MariaDB Server","text":"
These can be obtained from your OS package manager. For example, in Ubuntu:
Change the following values for your environment. * wsrep_cluster_address - All the IP addresses of your nodes. * wsrep_cluster_name - Name of the cluster; should be the same on all nodes. * wsrep_node_address - IP address of this node. * wsrep_node_name - Name of this node."},{"location":"Extensions/Galera-Cluster/#edit-librenms-env","title":"Edit LibreNMS .env","text":"
LibreNMS supports up to 9 Galera nodes, which you define in the .env file. For each node, you can define whether this LibreNMS installation/poller is able to write, read, or both to that node. The Galera nodes you define here can be the same or different for each LibreNMS poller. If you have a poller you only want to write/read to one Galera node, you would simply add one DB_HOST and omit all the rest. This allows you to precisely control which Galera nodes a LibreNMS poller is reading and/or writing to.
DB_HOST is always set to read/write.
DB_HOST must be set, however, it does not have to be the same on each poller, it can be different as long as it's part of the same galera cluster.
If the node that is set to DB_HOST is down, things like lnms db command no longer work, as they only use DB_HOST and don't failover to other nodes.
Set DB_CONNECTION=mysql_cluster to enable
DB_STICKY can be used if you are pulling out-of-sync data from the database in a read request. For more information see https://laravel.com/docs/database#the-sticky-option
To see some stats on how the Galera cluster is performing, run the following.
lnms db\n
In the database, run the following MySQL query:
SHOW GLOBAL STATUS LIKE 'wsrep_%';\n
Variable Name | Value | Notes
------------- | ----- | -----
wsrep_cluster_size | 2 | Current number of nodes in Cluster
wsrep_cluster_state_uuid | e71582f3-cf14-11eb-bcf6-a23029e16405 | Last Transaction UUID, Should be the same for each node
wsrep_connected | On | On = Connected with other nodes
wsrep_local_state_comment | Synced | Synced with other nodes"},{"location":"Extensions/Galera-Cluster/#restarting-the-entire-cluster","title":"Restarting the Entire Cluster","text":"
In a cluster environment, steps should be taken to ensure that ALL nodes are not offline at the same time. Failed nodes can recover without issue as long as one node remains online. In the event that ALL nodes are offline, the following should be done to ensure you are starting the cluster with the most up-to-date database. To do this, log in to each node and run the following.
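On each node, inspecting the Galera state file shows how up to date that node is (a sketch of the standard MariaDB Galera bootstrap procedure; the path assumes the default MariaDB datadir):

```
cat /var/lib/mysql/grastate.dat
```

Bootstrap the cluster from the node with the highest seqno (or safe_to_bootstrap: 1) using galera_new_cluster, then start MariaDB normally on the remaining nodes so they rejoin and sync.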
We have a simple integration for GateOne: you will be redirected to your GateOne command-line frontend to access your equipment. (Currently this only works with SSH.)
GateOne itself isn't included with LibreNMS; you will need to install it separately, either on the same infrastructure as LibreNMS or as a totally standalone appliance. The installation is beyond the scope of this document.
Config is simple, include the following in your config.php:
Note: You must use the full URL, including the trailing /!
We also support prefixing the currently logged-in LibreNMS user to the SSH connection URL that is created, e.g. ssh://admin@localhost. To enable this, put the following in your config.php:
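As an illustration, the two settings mentioned above might look like this in config.php (the exact key names are assumptions; verify them against your LibreNMS install):

```php
// config.php -- assumed GateOne settings
$config['gateone']['server'] = 'https://gateone.example.com/'; // full URL incl. trailing /
$config['gateone']['use_librenms_user'] = true; // prefix the logged-in user to the SSH URL
```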
We have a simple integration for Graylog: from within LibreNMS you can view any logs that have been parsed by the syslog input in Graylog itself. This includes logs from devices which aren't in LibreNMS yet; you can also see logs for a specific device under the logs section for that device.
Currently, LibreNMS does not associate shortnames from Graylog with full FQDNS. If you have your devices in LibreNMS using full FQDNs, such as hostname.example.com, be aware that rsyslogd, by default, sends the shortname only. To fix this, add
$PreserveFQDN on
to your rsyslog config to send the full FQDN so device logs will be associated correctly in LibreNMS. Also see near the bottom of this document for tips on how to enable/suppress the domain part of hostnames in syslog-messages for some platforms.
Graylog itself isn't included with LibreNMS; you will need to install it separately, either on the same infrastructure as LibreNMS or as a totally standalone appliance.
Config is simple, here's an example based on Graylog 2.4:
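A hedged sketch of such a configuration using lnms config:set (key names and values assumed; verify them against your install):

```shell
lnms config:set graylog.server graylog.example.com
lnms config:set graylog.port 9000
lnms config:set graylog.username librenms
lnms config:set graylog.password secret
lnms config:set graylog.version 2.4
```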
Graylog messages are stored using GMT timezone. You can display graylog messages in LibreNMS webui using your desired timezone by setting the following option using lnms config:set:
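For example, assuming graylog.timezone is the relevant option name:

```shell
lnms config:set graylog.timezone America/New_York
```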
If you don't want to use an admin account for the connection to Graylog: log into http:///api/api-browser/global/index.html using Graylog admin credentials, browse to Roles: User roles, click Create a new role, and paste this in the JSON body:
If you have enabled TLS for the Graylog API and you are using a self-signed certificate, please make sure that the certificate is trusted by your LibreNMS host, otherwise the connection will fail. Additionally, the certificate's Common Name (CN) has to match the FQDN or IP address specified in
external/graylog
lnms config:set graylog.server example.com\n
"},{"location":"Extensions/Graylog/#match-any-address","title":"Match Any Address","text":"
If you want to match the source address of the log entries against any IP address of a device instead of only against the primary address and the host name to assign the log entries to a device, you can activate this function using
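Assuming the option name below (verify it in your install):

```shell
lnms config:set graylog.match-any-address true
```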
There are 2 configuration parameters to influence the behaviour of the \"Recent Graylog\" table on the overview page of the devices.
external/graylog
lnms config:set graylog.device-page.rowCount 10\n
Sets the maximum number of rows to be displayed (default: 10)
external/graylog
lnms config:set graylog.device-page.loglevel 7\n
You can set which log levels should be displayed on the overview page. (default: 7, min: 0, max: 7)
external/graylog
lnms config:set graylog.device-page.loglevel 4\n
Shows only entries with a log level less than or equal to 4 (Emergency, Alert, Critical, Error, Warning).
You can set a default Log Level Filter with
lnms config:set graylog.loglevel 7\n
(applies to /graylog and /device/device=/tab=logs/section=graylog/; min: 0, max: 7)"},{"location":"Extensions/Graylog/#domain-and-hostname-handling","title":"Domain and hostname handling","text":"
Suppressing/enabling the domain part of a hostname for specific platforms
You should see if what you get in syslog/Graylog matches up with your configured hosts first. If you need to modify the syslog messages from specific platforms, this may be of assistance:
This is a quick walk-through of writing your own commands for the IRC-Bot.
First of all, create a file in includes/ircbot, the file-name should be in this format: command.inc.php.
When editing the file, do not open nor close PHP-tags. Any variable you assign will be discarded as soon as your command returns. Some variables, especially all listed under $this->, have special meanings or effects. Before a command is executed, the IRC-Bot ensures that the MySQL-Socket is working, that $this->user points to the right user and that the user is authenticated. Below you will find a table with related functions and attributes. You can chain-load any built-in command by calling $this->_command(\"My Parameters\"). You cannot chain-load external commands.
To enable your command, edit your config.php and add something like this:
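Based on the $config['irc_external'] option described in the configuration table (an array or comma-delimited string of extra commands), enabling a command named mycommand might look like:

```php
// config.php -- loads includes/ircbot/mycommand.inc.php
$config['irc_external'][] = 'mycommand';
```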
"},{"location":"Extensions/IRC-Bot-Extensions/#functions-and-attributes","title":"Functions and Attributes","text":"
... that are accessible from within an extension
"},{"location":"Extensions/IRC-Bot-Extensions/#functions","title":"Functions","text":"Function( (Type) $Variable [= Default] [,...] ) Returns Description $this->getChan( )String Returns channel of current event. $this->getData( (boolean) $Block = false )String/Boolean Returns a line from the IRC-Buffer if it's not matched against any other command. If $Block is true, wait until a suitable line is returned. $this->getUser( )String Returns nick of current user. Not to confuse with $this->user! $this->get_user( )Array See $this->user in Attributes. $this->irc_raw( (string) $Protocol )Boolean Sends raw IRC-Protocol. $this->isAuthd( )Booleantrue if the user is authenticated. $this->joinChan( (string) $Channel )Boolean Joins given $Channel. $this->log( (string) $Message )Boolean Logs given $Message into STDOUT. $this->read( (string) $Buffer )String/Boolean Returns a line from given $Buffer or false if there's nothing suitable inside the Buffer. Please use $this->getData() for handler-safe data retrieval. $this->respond( (string) $Message )Boolean Responds to the request auto-detecting channel or private message."},{"location":"Extensions/IRC-Bot-Extensions/#attributes","title":"Attributes","text":"Attribute Type Description $paramsString Contains all arguments that are passed to the .command. $this->chanArray Channels that are configured. $this->commandsArray Contains accessible commands. $this->configArray Contains $config from config.php. $this->dataString Contains raw IRC-Protocol. $this->debugBoolean Debug-Flag. $this->externalArray Contains loaded extra commands. $this->nickString Bot's nick on the IRC. $this->passString IRC-Server's passphrase. $this->portInt IRC-Server's port-number. $this->serverString IRC-Server's hostname. $this->sslBoolean SSL-Flag. $this->tickInt Interval to check buffers in microseconds. 
$this->userArray Array containing details about the user that sent the request."},{"location":"Extensions/IRC-Bot-Extensions/#example","title":"Example","text":"
includes/ircbot/join-ng.inc.php
if( $this->user['level'] != 10 ) {\n return $this->respond(\"Sorry only admins can make me join.\");\n }\n if( $this->getChan() == \"#noc\") {\n $this->respond(\"Joining $params\");\n $this->joinChan($params);\n } else {\n $this->respond(\"Sorry, only people from #noc can make me join.\");\n }\n
LibreNMS has an easy to use IRC-Interface for basic tasks like viewing last log-entry, current device/port status and such.
By default the IRC-Bot will not start when executed and will return an error until at least $config['irc_host'] and $config['irc_port'] have been specified inside config.php. (To start the IRC-Bot, run ./irc.php)
If no channel has been specified with $config['irc_chan'], ##librenms will be used. The default nick for the bot is LibreNMS.
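Drawing on the options table below, a minimal config.php sketch to get the bot connecting (example values) might be:

```php
// config.php -- minimal IRC-Bot settings (example values)
$config['irc_host'] = 'irc.example.com';  // required
$config['irc_port'] = 6667;               // required; prefix with + for SSL, e.g. '+6697'
$config['irc_chan'] = '#librenms-alerts'; // optional, defaults to ##librenms
$config['irc_nick'] = 'LibreNMS';         // optional
```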
The Bot will reply the same way it's being called. If you send it the commands via Query, it will respond in the Query. If you send the commands via a Channel, then it will respond in the Channel.
"},{"location":"Extensions/IRC-Bot/#configuration-defaults","title":"Configuration & Defaults","text":"Option Default-Value Notes $config['irc_alert']false Optional; Enables Alerting-Socket. EXPERIMENTAL$config['irc_alert_chan']false Optional; Multiple channels can be defined as Array or delimited with ,. EXPERIMENTAL$config['irc_alert_utf8']false Optional; Enables use of strikethrough in alerts via UTF-8 encoded characters. Might cause trouble for some clients. $config['irc_alert_short']false Optional; Send a one line alert summary instead of multi-line detailed alert. $config['irc_authtime']3 Optional; Defines how long in Hours an auth-session is valid. $config['irc_chan']##librenms Optional; Multiple channels can be defined as Array or delimited with ,. Passwords are defined after a space-character. $config['irc_debug']false Optional; Enables debug output (Wall of text) $config['irc_external'] Optional; Array or , delimited string with commands to include from includes/ircbot/*.inc.php$config['irc_host'] Required; Domain or IP to connect. If it's an IPv6 Address, embed it in []. (Example: [::1]) $config['irc_maxretry']5 Optional; How many connection attempts should be made before giving up $config['irc_nick']LibreNMS Optional; $config['irc_pass'] Optional; This sends the IRC-PASS Sequence to IRC-Servers that require Password on Connect $config['irc_port']6667 Required; To enable SSL append a + before the Port. (Example: +6697) $config['irc_ctcp']false Optional; Enable/disable ctcp-replies from the bot (currently VERSION, PING and TIME). $config['irc_ctcp_version']LibreNMS IRCbot. https://www.librenms.org/ Optional; Reply-string to CTCP VERSION requests $config['irc_auth'] Optional; Array of hostmasks that are automatically authenticated."},{"location":"Extensions/IRC-Bot/#irc-commands","title":"IRC-Commands","text":"Command Description .auth <User/Token> If <user>: Request an Auth-Token. If <token>: Authenticate session. 
.device <hostname> Prints basic information about given hostname. .down List hostnames that are down, if any. .help List available commands. .join <channel> Joins <channel> if user has admin-level. .listdevices Lists the hostnames of all known devices. .log [<N>] Prints N lines or last line of the eventlog. .port <hostname> <ifname> Prints Port-related information from ifname on given hostname. .quit Disconnect from IRC and exit. .reload Reload configuration. .status <type> Prints status information for given type. Type can be devices, services, ports. Shorthands are: dev, srv, prt. .version Prints $this->config['project_name_version'].
( /!\\ All commands are case-insensitive but their arguments are case-sensitive)
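As an illustration of automatically authenticated hostmasks, a hypothetical $config['irc_auth'] structure (the exact array shape is an assumption; check the IRC-Bot source) might look like:

```php
// config.php -- hypothetical hostmask-to-user mapping
$config['irc_auth']['admin'][] = 'alice!*@console.example.com';
$config['irc_auth']['admin'][] = 'bob!*@*.noc.example.com';
$config['irc_auth']['john'][]  = 'john!jdoe@workstation.example.com';
```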
Any client matching one of the first two hostmasks will automatically be authenticated as the "admin" user in LibreNMS, and clients matching the last line will be authenticated as the user "john" in LibreNMS, without using .auth and waiting for a valid token.
The bot is coded in a unified way, which makes writing extensions far less painful. Simply add your command to the $config['irc_external'] directive and create a file called includes/ircbot/command.inc.php containing your code. The string behind the call of .command is passed as $params. The user who requested something is accessible via $this->user. Send your reply/ies via $this->respond($string).
A more detailed documentation of the functions and variables available for extensions can be found at IRC-Bot Extensions.
LibreNMS can interpret, display and group certain additional information on ports. This is done based on the format in which the port description is written, although it's possible to customise the parser to be specific to your setup.
By default we ship all metrics to RRD files, either directly or via RRDCached. On top of this you can ship metrics to Graphite, InfluxDB (v1 or v2 API), OpenTSDB or Prometheus. At present you can't use these backends to display graphs within LibreNMS and will need to use something like Grafana.
For further information on configuring LibreNMS to ship data to one of the other backends then please see the documentation below.
If you wish to render info for configured channels for a device, you need to add the various profile-stat directories your system uses, which for most systems will be as below.
When adding sources to nfsen.conf, it is important to use the hostname that matches what is configured in LibreNMS, because the rrd files NfSen creates are named after the source name (ident), and it doesn't allow you to use an IP address instead. However, in LibreNMS, if your device is added by an IP address, add your source with any name of your choice and create a symbolic link to the rrd file.
cd /var/nfsen/profiles-stat/sitea/\nln -s mychannel.rrd librenmsdeviceIP.rrd\n
external/nfsen
lnms config:set nfsen_split_char '_'\n
This value tells us what to replace the full stops (.) in the device's hostname with.
external/nfsen
lnms config:set nfsen_suffix '_yourdomain_com'\n
The above is a very important bit, as device names in NfSen are limited to 21 characters. This means full domain names for devices can be very problematic to squeeze in, so this chunk is usually removed.
On a similar note, NfSen profiles for channels should be created with the same name.
"},{"location":"Extensions/NFSen/#stats-defaults-and-settings","title":"Stats Defaults and Settings","text":"
Below are the default settings used with nfdump for stats.
For more detailed information on that, please see nfdump(1). The default location for nfdump is /usr/bin/nfdump. If nfdump is located elsewhere, set it with
The above is an array containing a list, for the drop-down menu, of how many top items should be returned.
external/nfsen
lnms config:set nfsen_top_default 20\n
The above sets the default top number to use from the drop down.
external/nfsen
lnms config:set nfsen_stat_default srcip\n
The above sets the default stat type to use from the drop down.
record Flow Records\nip Any IP Address\nsrcip SRC IP Address\ndstip DST IP Address\nport Any Port\nsrcport SRC Port\ndstport DST Port\nsrctos SRC TOS\ndsttos DST TOS\ntos TOS\nas AS\nsrcas SRC AS\ndstas DST AS\n
external/nfsen
lnms config:set nfsen_order_default packets\n
The above sets the default order type to use from the drop down. Any of the following are currently supported.
flows Number of total flows for the time period.\npacket Number of total packets for the time period.\nbytes Number of total bytes for the time period.\npps Packets Per Second\nbps Bytes Per Second\nbpp Bytes Per Packet\n
external/nfsen
lnms config:set nfsen_last_default 900\n
The above sets the default time interval ("last") to use from the drop down.
The above associative array contains time intervals for how far back to go. The keys are the length in seconds and the value is just a description to display.
LibreNMS has the ability to show you a dynamic network map based on data collected from devices. These maps are accessed through the following menu options:
Overview -> Maps -> Network
Overview -> Maps -> Device Group Maps
The Neighbours -> Map tab when viewing a single device (the Neighbours tab will only show if a device has xDP neighbours)
These network maps can be based on:
xDP Discovery
MAC addresses (ARP entries matching interface IP and MAC)
By default, both are included, but you can enable / disable either one using the following config option:
Either remove mac or xdp depending on which you want. xDP discovery covers FDP, CDP and LLDP, depending on the device type.
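For example, to keep only xDP discovery (option name network_map_items assumed; the default includes both items):

```shell
lnms config:set network_map_items '["xdp"]'
```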
It is worth noting that the global map could lead to a large network map that is slow to render and interact with. The network map on the device neighbour page, or building device groups and using the device group maps will be more usable on large networks.
The map display can be configured by altering the Vis JS Options
"},{"location":"Extensions/OAuth-SAML/","title":"OAuth and SAML Support","text":""},{"location":"Extensions/OAuth-SAML/#introduction","title":"Introduction","text":"
LibreNMS has support for Laravel Socialite to try and simplify the use of OAuth 1 or 2 providers such as using GitHub, Microsoft, Twitter + many more and SAML.
Socialite Providers supports more than 100 third-party providers, so you will most likely find support for the SAML or OAuth provider you need without too much trouble.
Please do note however, these providers are not maintained by LibreNMS so we cannot add support for new ones and we can only provide you basic help with general configuration. See the Socialite Providers website for more information on adding a new OAuth provider.
Below we will guide you through installing SAML or some of these OAuth providers. You should be able to use these as a guide for installing any others you may need, but please ensure you read the Socialite Providers documentation carefully.
GitHub Provider Microsoft Provider Okta Provider SAML2
Please ensure you set APP_URL within your .env file so that callback URLs work correctly with the identity provider.
Note
Once you have configured your OAuth or SAML2 provider, please ensure you check the Post configuration settings section at the end.
"},{"location":"Extensions/OAuth-SAML/#github-and-microsoft-examples","title":"GitHub and Microsoft Examples","text":""},{"location":"Extensions/OAuth-SAML/#install-plugin","title":"Install plugin","text":"
Note
First we need to install the plugin itself. The plugin name can be slightly different so be sure to check the Socialite Providers documentation and look for this line, composer require socialiteproviders/github which will give you the name you need for the command, i.e: socialiteproviders/github.
GitHubMicrosoftOkta
lnms plugin:add socialiteproviders/github
lnms plugin:add socialiteproviders/microsoft
lnms plugin:add socialiteproviders/okta
"},{"location":"Extensions/OAuth-SAML/#find-the-provider-name","title":"Find the provider name","text":"
Next we need to find the provider name and write it down.
Note
It's almost always the name of the provider in lowercase but can be different so check the Socialite Providers documentation and look for this line, github => [ which will give you the name you need for the above command: github.
So our provider name is okta, write this down."},{"location":"Extensions/OAuth-SAML/#register-oauth-application","title":"Register OAuth application","text":""},{"location":"Extensions/OAuth-SAML/#register-a-new-application","title":"Register a new application","text":"
Now we need some values from the OAuth provider itself, in most cases you need to register a new \"OAuth application\" at the providers site. This will vary from provider to provider but the process itself should be similar to the examples below.
Note
The callback URL is always: https://your-librenms-url/auth/provider/callback. It doesn't need to be a publicly available site, but it almost always needs to support TLS (https)!
GitHubMicrosoftOkta
For our example with GitHub we go to GitHub Developer Settings and press \"Register a new application\":
Fill out the form accordingly (with your own values):
For our example with Microsoft we go to \"Azure Active Directory\" > \"App registrations\" and press \"New registration\"
Fill out the form accordingly (using your own values):
Copy the value of the Application (client) ID and Directory (tenant) ID and save them, you will need them in the next step.
For our example with Okta, we go to Applications>Create App Integration, Select OIDC - OpenID Connect, then Web Application.
Fill in the Name, Logo, and Assignments based on your preferred settings. Leave the Sign-In Redirect URI field as-is; you will edit it later:
Note your Okta domain or login URL. Sometimes this can be a vanity URL like login.company.com, or just company.okta.com.
Click save.
"},{"location":"Extensions/OAuth-SAML/#generate-a-new-client-secret","title":"Generate a new client secret","text":"GitHubMicrosoftOkta
Press 'Generate a new client secret' to get a new client secret.
Select Certificates & secrets under Manage. Select the 'New client secret' button. Enter a value in Description and select one of the options for Expires and select 'Add'.
Copy the client secret Value (not Secret ID!) before you leave this page. You will need it in the next step.
This step is done for you when creating the app. All you have to do is copy down the client secret. You will need it in the next step.
Now we need to set the configuration options for your provider within LibreNMS itself. Please replace the values in the examples below with the values you collected earlier:
The format of the configuration string is auth.socialite.configs.*provider name*.*value*
Now you are done with setting up the OAuth provider! If it doesn't work, please double check your configuration values by using the config:get command below.
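For example, for the GitHub provider (the client_id/client_secret key names are assumed from the format string above; replace the values with your own):

```shell
lnms config:set auth.socialite.configs.github.client_id your-client-id
lnms config:set auth.socialite.configs.github.client_secret your-client-secret
lnms config:get auth.socialite.configs.github
```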
Since most Socialite Providers provide only Authentication, not Authorization, it is possible to set the default User Role for authorized users. Appropriate care should be taken.
none: No Access: User has no access
normal: Normal User: You will need to assign device / port permissions for users at this level.
global-read: Global Read: Read only Administrator.
admin: Administrator: This is a global read/write admin account.
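A sketch, assuming the option is named auth.socialite.default_role (verify against your install):

```shell
lnms config:set auth.socialite.default_role normal
```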
Socialite can specify scopes that should be included with the authentication request (see the Laravel docs).
For example, if Okta is configured to expose group information it is possible to use these group names to configure User Roles.
This requires configuration in Okta. You can set the 'Groups claim type' to 'Filter' and supply a regex of which groups should be returned which can be mapped below.
First enable sending the 'groups' claim (along with the normal openid, profile, and email claims). Be aware that the scope name must match the claim name. For identity providers where the scope does not match (e.g. Keycloak: roles -> groups) you need to configure a custom scope.
settings/auth/socialite
lnms config:set auth.socialite.scopes.+ groups\n
Then set up mappings from the returned claim arrays to the User levels you want
Depending on what your identity provider (Google, Azure, ...) supports, the configuration could look different from what you see next, so please use this as a rough guide. It is up to the IdP to provide the relevant details that you will need for configuration.
GoogleAzure
Go to https://admin.google.com/ac/apps/unified
Press \"DOWNLOAD METADATA\" and save the file somewhere accessible by your LibreNMS server
ACS URL = https://your-librenms-url/auth/saml2/callback Entity ID = https://your-librenms-url/auth/saml2 Name ID format = PERSISTENT Name ID = Basic Information > Primary email
First name = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname Last name = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname Primary email = http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
"},{"location":"Extensions/OAuth-SAML/#manually-configuring-the-identity-provider-with-a-certificate-string","title":"Manually configuring the Identity Provider with a certificate string","text":"
"},{"location":"Extensions/OAuth-SAML/#manually-configuring-the-identity-provider-with-a-certificate-file","title":"Manually configuring the Identity Provider with a certificate file","text":"
You most likely will need to set SESSION_SAME_SITE_COOKIE=none in .env if you use SAML2! If you get an error with http code 419, you should try to remove SESSION_SAME_SITE_COOKIE=none from your .env.
Note
Don't forget to run lnms config:clear after you modify .env to flush the config cache
If you have a need to, then you can override redirect url with the following commands:
OAuthSAML2
Replace github and the relevant URL below with your identity provider details. lnms config:set auth.socialite.configs.github.redirect https://demo.librenms.org/auth/github/callback
From here you can configure the settings for any identity providers you have configured along with some bespoke options.
Redirect Login page: This setting will skip your LibreNMS login and take the end user straight to the first idP you configured.
Allow registration via provider: If this setting is disabled, new users signing in via the idP will not be authenticated. This setting allows a local user to be automatically created which permits their login.
Integrating LibreNMS with Oxidized brings the following benefits:
Config viewing: Current, History, and Diffs all under the Configs tab of each device
Automatic addition of devices to Oxidized: Including filtering and grouping to ease credential management
Configuration searching (Requires oxidized-web 0.8.0 or newer)
First you will need to install Oxidized following their documentation.
Then you can proceed to the LibreNMS Web UI and go to Oxidized Settings in the External Settings section of Global Settings. Enable it and enter the URL of your Oxidized instance.
To have devices automatically added, you will need to configure Oxidized to pull them from LibreNMS Feeding Oxidized Note: this means devices will be controlled by the LibreNMS API and not router.db; passwords will still need to be in the Oxidized config file.
LibreNMS will automatically map the OS to the Oxidized model name if they don't match. This means you shouldn't need to use the model_map config option within Oxidized.
This is a straightforward use of Oxidized; it relies on you having a working Oxidized setup which is already taking config snapshots for your devices. When you have that, you only need the following config to enable the display of device configs within the device page itself:
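A sketch of the equivalent CLI configuration (key names assumed; the same settings are available in the WebUI under External Settings > Oxidized, and the URL is an example):

```shell
lnms config:set oxidized.enabled true
lnms config:set oxidized.url http://127.0.0.1:8888
```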
Oxidized supports various ways to utilise credentials to log in to devices: you can specify global username/password within Oxidized, group-level username/password, or per-device credentials. LibreNMS currently supports sending groups back to Oxidized so that you can define group credentials within Oxidized. To enable this support, switch on 'Enable the return of groups to Oxidized':
external/oxidized
lnms config:set oxidized.group_support true\n
You can set a default group that devices will fall back to with:
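Assuming the option name oxidized.default_group:

```shell
lnms config:set oxidized.default_group default
```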
If you're running SELinux, you'll need to allow httpd to connect outbound to the network, otherwise Oxidized integration in the web UI will silently fail:
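The usual SELinux boolean for allowing httpd outbound network connections is:

```shell
setsebool -P httpd_can_network_connect 1
```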
Oxidized has support for feeding devices into it via an API call, support for Oxidized has been added to the LibreNMS API. A sample config for Oxidized is provided below.
You will need to configure default credentials for your devices in the Oxidized config, LibreNMS doesn't provide login credentials at this time.
LibreNMS is able to reload the Oxidized list of nodes, each time a device is added to LibreNMS. To do so, edit the option in Global Settings>External Settings>Oxidized Integration or add the following to your config.
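Assuming the option name oxidized.reload_nodes:

```shell
lnms config:set oxidized.reload_nodes true
```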
To return an override to Oxidized, provide the override key, followed by a lookup matching a host (or hosts), and finally the overriding value itself. LibreNMS does not check the validity of these attributes but will deliver them to Oxidized as defined.
Matching of hosts can be done using hostname, sysname, os, location, sysDescr, hardware, purpose or notes and including either a 'match' key and value, or a 'regex' key and value. The order of matching is:
hostname
sysName
sysDescr
hardware
os
location
ip
purpose
notes
To match on the device hostnames or sysNames that contain 'lon-sw' or if the location contains 'London' then you would set the following:
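A hypothetical sketch of such matching rules in config.php (the exact array layout under $config['oxidized']['maps'] is an assumption; LibreNMS delivers the resulting attributes to Oxidized as defined):

```php
// config.php -- hypothetical overrides: put matching devices in the 'london' group
$config['oxidized']['maps']['group']['hostname'][] = ['regex' => '/lon-sw/', 'group' => 'london'];
$config['oxidized']['maps']['group']['sysname'][]  = ['regex' => '/lon-sw/', 'group' => 'london'];
$config['oxidized']['maps']['group']['location'][] = ['match' => 'London', 'group' => 'london'];
```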
This allows extending the configuration further by providing a completely flexible model for custom flags and settings, for example, below shows the ability to add an ssh_proxy host within Oxidized simply by adding the below to your configuration:
Or of course, any custom value that could be needed or wanted can be applied, for example, setting a \"myAttribute\" to \"Super cool value\" for any configured and enabled \"routeros\" device.
If you have devices which you do not wish to appear in Oxidized then you can edit those devices in Device -> Edit -> Misc and enable \"Exclude from Oxidized?\"
Custom ssh and telnet ports can be set through the device settings Misc tab and passed on to Oxidized with the following vars_map
Using the Oxidized REST API and Syslog Hooks, Oxidized can trigger configuration downloads whenever a configuration change event has been logged. An example script to do this is included in ./scripts/syslog-notify-oxidized.php. Oxidized can spawn a new worker thread and perform the download immediately with the following configuration
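One way this is commonly wired up on the Oxidized side (these Oxidized config keys are assumptions to check against the Oxidized documentation):

```yaml
# oxidized config: enable the REST API and run newly-queued jobs immediately
rest: 127.0.0.1:8888
next_adds_job: true
```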
You can perform basic validation of the Oxidized configuration by going to the Overview -> Tools -> Oxidized link and in the Oxidized config validation page, paste your yaml file into the input box and click 'Validate YAML'.
We check for yaml syntax errors and also actual config values to ensure they are used in the correct location.
"},{"location":"Extensions/Oxidized/#accessing-configuration-of-a-disabledremoved-device","title":"Accessing configuration of a disabled/removed device","text":"
When you disable or remove a device from LibreNMS, its configuration will no longer be available via the LibreNMS web interface. You can still access these configurations directly in the Git repository of Oxidized (if using Git for version control).
1: Check in your Oxidized config where your Git repositories are stored:
/home/oxidized/.config/oxidized/config\n
2: Go to the correct Git repository for the needed device (the .git one) and get the list of devices using this command:
git ls-files -s\n
3: Note the object ID of the device, and run the command to get the file content:
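For step 3, git cat-file prints the stored blob; replace the placeholder with the object ID from step 2:

```shell
# <object-id> comes from the `git ls-files -s` output in step 2
git cat-file blob <object-id>
```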
LibreNMS has integration with PeeringDB to match up your BGP sessions with the peering exchanges you are connected to.
To enable the integration please do so within the WebUI
external/peeringdb
lnms config:set peeringdb.enabled true\n
Data will be collated the next time daily.sh runs, or you can manually force this by running php daily.php -f peeringdb. The initial collection is delayed for a random amount of time to avoid overloading the PeeringDB API.
Once enabled you will have an additional menu item under Routing -> PeeringDB
"},{"location":"Extensions/Plugin-System/","title":"Developing for the Plugin System","text":"
With plugins you can extend LibreNMS with special functions that are specific to your setup or are not relevant or interesting for all community members.
You are able to hook into defined places in the website's behavior without running into problems with future updates.
This documentation will give you a basis for writing a plugin for LibreNMS. An example plugin is included in the LibreNMS distribution.
"},{"location":"Extensions/Plugin-System/#version-2-plugin-system-structure","title":"Version 2 Plugin System structure","text":"
Plugins in version 2 need to be installed into app/Plugins
Note: Plugins are disabled when they have an error; to show errors instead, set plugins.show_errors
The above structure is checked before a plugin can be installed.
All file/folder names are case sensitive and must match the structure.
Only the blade files that are really needed need to be created. A plugin manager will then load a hook that has a basic functionality.
If you want to customize the basic behavior of the hooks, you can create a class in 'app/Plugins/PluginName' and overload the hook methods.
device-overview.blade.php :: This is called on the Device Overview page. You receive the $device as an object by default; you can do your work here and display your results in a frame.
port-tab.blade.php :: This is called in the Port page, in the \"Plugins\" menu_option that will appear when your plugin gets enabled. In this blade, you can do your work and display your results in a frame.
menu.blade.php :: For a menu entry
page.blade.php :: A good place to add your own LibreNMS page that does not depend on a device, for example your own lists with special requirements and behavior.
settings.blade.php :: If you need your own settings and variables, you can have a look in the ExamplePlugin.
PHP code should run inside your hook methods, not in your blade view. The built-in hooks support authorize and data methods.
These methods are called using dependency injection; hooks with relevant database models will include those models in the calls. Additionally, the settings argument may be included to inject the plugin settings into the method.
You can override the data method to supply data to your view. You should also do any processing there; you can, for example, access the database or configuration settings.
In the data method we are injecting settings here to count how many we have for display in the menu entry blade view. Note that you must specify a default value (= [] here) for any arguments that don't exist on the parent method.
class Menu extends MenuEntryHook\n{\n public function data(array $settings = []): array\n {\n return [\n 'count' => count($settings),\n ];\n }\n}\n
By default hooks are always shown, but you may control when the user is authorized to view the hook content.
As an example, you could imagine that device-overview.blade.php should only be displayed when the device is in maintenance mode and the current user has the admin role.
class DeviceOverview extends DeviceOverviewHook\n{\n public function authorize(User $user, Device $device): bool\n {\n return $user->can('admin') && $device->isUnderMaintenance();\n }\n}\n
You may create a full plugin that can publish multiple routes, views, database migrations, and more. Create a package according to the Laravel documentation; from it, you may call any of the supported hooks to tie into LibreNMS.
https://laravel.com/docs/packages
This is untested; please come to Discord, share any experiences, and update this documentation!
"},{"location":"Extensions/Plugin-System/#version-1-plugin-system-structure-legacy-version","title":"Version 1 Plugin System structure (legacy version)","text":"
The above structure is checked before a plugin can be installed.
All file / folder names are case sensitive and must match the structure.
PluginName - This is a directory and needs to be named as per the plugin you are creating.
PluginName.php :: This file is used to process calls into the plugin from the main LibreNMS install. Here only functions within the class for your plugin that LibreNMS calls will be executed. For a list of currently enabled system hooks, please see further down. The minimum code required in this file is (replace Test with the name of your plugin):
<?php\n\nclass Test {\n}\n\n?>\n
PluginName.inc.php :: This file is the main included file when browsing to the plugin itself. You can use this to display / edit / remove whatever you like. The minimum code required in this file is:
System hooks are called as functions within your plugin class. The following system hooks are currently available:
menu() :: This is called to build the plugin menu system and you can use this to link to your plugin (you don't have to).
public static function menu() {\n echo('<li><a href=\"plugin/p='.get_class().'\">'.get_class().'</a></li>');\n }\n
device_overview_container($device) :: This is called in the Device Overview page. You receive the $device as a parameter, can do your work here and display your results in a frame.
public static function device_overview_container($device) {\n echo('<div class=\"container-fluid\"><div class=\"row\"> <div class=\"col-md-12\"> <div class=\"panel panel-default panel-condensed\"> <div class=\"panel-heading\"><strong>'.get_class().' Plugin </strong> </div>');\n echo(' Example plugin in \"Device - Overview\" tab <br>');\n echo('</div></div></div></div>');\n }\n
port_container($device, $port) :: This is called in the Port page, in the \"Plugins\" menu_option that will appear when your plugin gets enabled. In this function, you can do your work and display your results in a frame.
public static function port_container($device, $port) {\n echo('<div class=\"container-fluid\"><div class=\"row\"> <div class=\"col-md-12\"> <div class=\"panel panel-default panel-condensed\"> <div class=\"panel-heading\"><strong>'.get_class().' plugin in \"Port\" tab</strong> </div>');\n echo ('Example display in Port tab</br>');\n echo('</div></div></div></div>');\n }\n
It is possible to create graphs of the Proxmox VMs that run on your monitored machines. Currently, only traffic graphs are created. One for each interface on each VM. Possibly, IO graphs will be added later on.
The ultimate goal is to be able to create traffic bills for VMs, no matter on which physical machine that VM runs.
Then, in LibreNMS, activate the librenms-agent and proxmox application flags for the device you are monitoring. You should now see an application in LibreNMS, as well as a new menu item in the top menu, allowing you to choose which cluster you want to look at.
"},{"location":"Extensions/Proxmox/#note-if-you-want-to-use-use-xinetd-instead-of-systemd","title":"Note, if you want to use xinetd instead of systemd","text":"
It's possible to have the librenms-agent started by xinetd instead of systemd. One use case is if you are forced to use an old Proxmox installation. After installing the librenms-agent (see above), copy and enable the xinetd config, then restart the xinetd service:
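The exact file names are not shown in this excerpt; as a sketch (the service name, port 6556, and agent path are the usual check_mk_agent defaults, and the poller IP is a placeholder), an xinetd entry for the agent might look like:

```
# /etc/xinetd.d/librenms-agent -- illustrative sketch: port 6556 and the
# agent path are common check_mk_agent defaults; adjust to your system
service check_mk
{
        type           = UNLISTED
        port           = 6556
        socket_type    = stream
        protocol       = tcp
        wait           = no
        user           = root
        server         = /usr/bin/check_mk_agent
        only_from      = <poller-ip>
        disable        = no
}
```

After creating the entry, restart xinetd (e.g. systemctl restart xinetd), and make sure the systemd unit for the agent is disabled so the two don't compete for the port.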
"},{"location":"Extensions/RRDCached/","title":"Setting up RRDCached","text":"
This document will explain how to set up RRDCached for LibreNMS.
Since version 1.5, rrdtool / rrdcached now supports creating rrd files over rrdcached. If you have rrdcached 1.5.5 or above, you can also tune over rrdcached. To enable this set the following config:
poller/rrdtool
lnms config:set rrdtool_version '1.5.5'\n
This setting has to be the exact version of rrdtool you are running.
NOTE: This feature requires your client version of rrdtool to be 1.5.5 or newer, in addition to your rrdcached version.
"},{"location":"Extensions/RRDCached/#distributed-poller-support-matrix","title":"Distributed Poller Support Matrix","text":"
Shared FS: Is a shared filesystem required?
Features: Supported features in the version indicated.
Check to see if the graphs are being drawn in LibreNMS. This might take a few minutes. After at least one poll cycle (5 mins), check the LibreNMS disk I/O performance delta. Disk I/O can be found under the menu Devices>All Devices>[localhost hostname]>Health>Disk I/O.
Depending on many factors, you should see the Ops/sec drop by ~30-40%.
According to the man page, under \"SECURITY CONSIDERATIONS\", rrdcached has no authentication or security except for running under a unix socket. If you choose to use a network socket instead of a unix socket, you will need to secure your rrdcached installation. To do so you can proxy rrdcached using nginx to allow only specific IPs to connect.
Using the same setup above, using nginx version 1.9.0 or later, you can follow this setup to proxy the default rrdcached port to the local unix socket.
(You can use ./conf.d for your configuration as well)
mkdir /etc/nginx/streams-{available,enabled}
add the following to your nginx.conf file:
#/etc/nginx/nginx.conf\n...\nstream {\n include /etc/nginx/streams-enabled/*;\n}\n
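The stream config file itself is not shown here; a minimal sketch of /etc/nginx/streams-available/rrd, assuming rrdcached listens on a unix socket at /run/rrdcached/rrdcached.sock (match this to the -l option rrdcached was started with), could be:

```
# /etc/nginx/streams-available/rrd -- sketch; the unix socket path is an
# assumption, match it to the -l option rrdcached was started with
server {
    listen 42217;
    allow $LibreNMS_IP;
    deny all;
    proxy_pass unix:/run/rrdcached/rrdcached.sock;
}
```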
Replace $LibreNMS_IP with the ip of the server that will be using rrdcached. You can specify more than one allow statement. This will bind nginx to TCP 42217 (the default rrdcached port), allow the specified IPs to connect, and deny all others.
Next, we'll symlink the config to streams-enabled: ln -s /etc/nginx/streams-{available,enabled}/rrd
When we create rrd files for ports, we currently do so with a max value of 12500000000 (100G). Because of this, if a device sends back bad data, it can appear as though a 100M port is doing 40G+, which is impossible. To counter this, you can enable the rrdtool tune option, which will fix the max value to the interface's physical speed (minimum of 10M).
You can enable this in three ways:
Globally under Global Settings -> Poller -> Datastore: RRDTool
For the actual device, Edit Device -> Misc
For each port, Edit Device -> Port Settings
Now, when a port's interface speed changes (because of a physical change, or simply because the device misreported it), the max value is set. If you don't want to wait until a port speed changes, you can run the included script:
./scripts/tune_port.php -h <hostname> -p <ifName>
Wildcards are supported using *, i.e:
./scripts/tune_port.php -h local* -p eth*
This script will then perform the rrdtool tune on each port found using the provided ifSpeed for that port.
LibreNMS can generate a list of hosts that can be monitored by RANCID. We assume you already have RANCID running and just need to create and update the file 'router.db'.
To generate the config file (you may even want to add a cron job to schedule this), we've assumed a few locations for RANCID, the name you want to give the config file, and where LibreNMS is installed:
cd /opt/librenms/scripts/\nphp ./gen_rancid.php > /the/path/where/is/rancid/core/router.db\n
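For scheduling, a cron entry along these lines would work (the user and schedule are assumptions; the router.db path is the same placeholder used above):

```
# /etc/cron.d/librenms-rancid -- sketch: regenerate router.db hourly
0 * * * * root php /opt/librenms/scripts/gen_rancid.php > /the/path/where/is/rancid/core/router.db
```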
Test config: sudo /usr/lib/rancid/bin/clogin -f /var/lib/rancid/.cloginrc <device hostname>
NOTE: If you run into a 'diffie-hellman' kind of error, it is because your Linux distro is using newer encryption methods, and the device you tested is running an outdated encryption type. We recommend updating the downstream device if possible. If not, the following should fix it:
sudo vi /etc/ssh/ssh_config
Add:
KexAlgorithms diffie-hellman-group1-sha1
Re-try logging into your device again
Upon success, run rancid:
sudo su -c /var/lib/rancid/bin/rancid-run -s /bin/bash -l rancid
If you have machines that you want to monitor but are not reachable directly, you can use SNMPD Proxy. This will use the reachable SNMPD to proxy requests to the unreachable SNMPD.
For example, suppose 'unreachable.example.com' is only reachable via 'hereweare.example.com'. Use the following config:
On 'hereweare.example.com':
view all included .1\n com2sec -Cn ctx_unreachable readonly <poller-ip> unreachable\n access MyROGroup ctx_unreachable any noauth prefix all none none\n proxy -Cn ctx_unreachable -v 2c -c private unreachable.example.com .1.3\n
On 'unreachable.example.com':
view all included .1 80\n com2sec readonly <hereweare.example.com ip address> private\n group MyROGroup v1 readonly\n group MyROGroup v2c readonly\n group MyROGroup usm readonly\n access MyROGroup \"\" any noauth exact all none none\n
You can now poll community 'private' on 'unreachable.example.com' via community 'unreachable' on host 'hereweare.example.com'. Please note that requests on 'unreachable.example.com' will be coming from 'hereweare.example.com', not your poller.
Currently, LibreNMS supports a lot of trap handlers. You can check them on GitHub here. To add more see Adding new SNMP Trap handlers. Traps are handled via snmptrapd.
snmptrapd is an SNMP application that receives and logs SNMP TRAP and INFORM messages.
The default is to listen on UDP port 162 on all IPv4 interfaces. Since 162 is a privileged port, snmptrapd must typically be run as root.
Make the folder /etc/systemd/system/snmptrapd.service.d/ and edit the file /etc/systemd/system/snmptrapd.service.d/mibs.conf and add the following content.
You may want to tweak this to add vendor directories for the devices you care about. In the example below, the standard and cisco directories are defined, and only IF-MIB is loaded.
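The drop-in content is not reproduced in this excerpt; a sketch matching that description (standard and cisco directories, only IF-MIB loaded), based on overriding the unit's ExecStart, might be:

```
# /etc/systemd/system/snmptrapd.service.d/mibs.conf -- sketch; the empty
# ExecStart= clears the inherited command before replacing it
[Service]
ExecStart=
ExecStart=/usr/sbin/snmptrapd -f -m IF-MIB -M /opt/librenms/mibs:/opt/librenms/mibs/cisco
```

Run systemctl daemon-reload after creating or editing the drop-in.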
In Ubuntu 18, the service is located by default at /etc/systemd/system/multi-user.target.wants/snmptrapd.service
Here is a list of snmptrapd options:
-a :: Ignore authenticationFailure traps. [OPTIONAL]
-f :: Do not fork from the shell
-n :: Use numeric addresses instead of attempting hostname lookups (no DNS) [OPTIONAL]
-m MIBLIST :: Use MIBLIST (FILE1-MIB:FILE2-MIB). ALL = load all MIBs in DIRLIST (usually fails).
-M DIRLIST :: Use DIRLIST as the list of locations to look for MIBs. The option is not recursive, so you need to specify each directory individually, separated by :. (For example: /opt/librenms/mibs:/opt/librenms/mibs/cisco:/opt/librenms/mibs/edgecos)
Good practice is to avoid -m ALL, because snmptrapd will then try to load all the MIBs in DIRLIST, which typically fails (snmptrapd cannot load that many MIBs). It is better to specify the exact MIB files defining the traps you are interested in; for example, for LinkDown and LinkUp as well as BGP traps, use -m IF-MIB:BGP4-MIB. Multiple files can be added, separated with :.
If you want to test, or store the original traps in a log, then:
Create a folder for storing traps, for example in the file traps.log:
sudo mkdir /var/log/snmptrap\n
Add the following config to your snmptrapd.service after ExecStart=/usr/sbin/snmptrapd -f -m ALL -M /opt/librenms/mibs
-tLf /var/log/snmptrap/traps.log\n
On SELinux, you need to configure SELinux for SNMPd to communicate to LibreNMS:
cat > snmptrap.te << EOF\nmodule snmptrap 1.0;\n\nrequire {\n type httpd_sys_rw_content_t;\n type snmpd_t;\n class file { append getattr open read };\n class capability dac_override;\n}\n\n#============= snmpd_t ==============\n\nallow snmpd_t httpd_sys_rw_content_t:file { append getattr open read };\nallow snmpd_t self:capability dac_override;\nEOF\ncheckmodule -M -m -o snmptrap.mod snmptrap.te\nsemodule_package -o snmptrap.pp -m snmptrap.mod\nsemodule -i snmptrap.pp\n
After successfully configuring the service, reload service files, enable, and start the snmptrapd service:
The easiest test is to generate a trap from your device. Usually, changing the configuration on a network device, or plugging/unplugging a network cable (LinkUp, LinkDown), will generate a trap. You can confirm it using tcpdump, tshark, or Wireshark.
You can also generate a trap using the snmptrap command from the LibreNMS server itself (if and only if the LibreNMS server is monitored).
"},{"location":"Extensions/SNMP-Trap-Handler/#how-to-send-snmp-v2-trap","title":"How to send SNMP v2 Trap","text":"
"},{"location":"Extensions/SNMP-Trap-Handler/#why-we-need-uptime","title":"Why we need Uptime","text":"
When you send a trap, it must of course conform to a set of standards. Every trap needs an uptime value; uptime is how long the system has been running since boot. Sometimes this is the operating system uptime; other devices might use the SNMP engine uptime. Regardless, a value will be sent.
So what value should you use in the commands below? Oddly enough, simply supplying an empty value with two single quotes ('') instructs the command to obtain the value from the operating system you are executing it on.
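The commands themselves are not reproduced in this excerpt. As an illustration only (the community string, target host, and varbinds below are placeholders, not values from this documentation), a v2c linkUp trap could be sent like this, with '' letting snmptrap fill in the local uptime:

```
snmptrap -v 2c -c public <librenms-server> '' IF-MIB::linkUp \
    IF-MIB::ifIndex i 1 IF-MIB::ifAdminStatus i 1 IF-MIB::ifOperStatus i 1
```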
You can configure generic event logging for SNMP traps. This will log an event of type trap for received traps, and these events can be used for alerting. By default, only the TrapOID is logged, but you can enable the "detailed" variant, in which case all the data received with the trap will be logged.
The parameter can be found in General Settings / External / SNMP Traps Integration.
Services within LibreNMS provides the ability to leverage Nagios plugins to perform additional monitoring outside of SNMP. Services can also be used in conjunction with your SNMP monitoring for larger monitoring functionality.
"},{"location":"Extensions/Services/#setting-up-services","title":"Setting up Services","text":"
Services must be tied to a device to function properly. A good generic option is to use localhost, but it is suggested to attach the check to the device you are monitoring.
Note: Plugins will only load if they are prefixed with check_. The check_ prefix is stripped out when displaying in the \"Add Service\" GUI \"Type\" dropdown list.
Service Templates within LibreNMS provide the same ability as Nagios Host Groups, which are known as Device Groups in LibreNMS. They are applied to devices that belong to the specified Device Group.
Use the Apply buttons to manually create or update Services for the Service Template. Use the Remove buttons to manually remove Services for the Service Template.
After you Edit a Service Template, and then use Apply, all relevant changes are pushed to existing Services previously created.
You can also enable Service Templates Auto Discovery to have Services added / removed / updated at regular discovery intervals.
When a Device is a member of multiple Device Groups, templates from all of those Device Groups are applied.
If a Device is added to or removed from a Device Group, Services will be added / removed as appropriate when the Apply button is used or Auto Discovery runs.
Service Templates are tied to Device Groups; you need at least one Device Group to be able to add Service Templates (you can define a dummy one). The Device Group does not need members to add Service Templates.
"},{"location":"Extensions/Services/#service-auto-discovery","title":"Service Auto Discovery","text":"
To automatically create services for devices with available checks.
You need to enable service discovery within config.php with the following:
$config['discover_services'] = true;\n
"},{"location":"Extensions/Services/#service-templates-auto-discovery","title":"Service Templates Auto Discovery","text":"
To automatically create services for devices with configured Service Templates.
You need to enable service discovery within config.php with the following:
Service checks are now distributable if you run a distributed setup. To leverage this, use the dispatch service. Alternatively, you could also replace check-services.php with services-wrapper.py in cron instead to run across all polling nodes.
If you need to debug the output of services-wrapper.py then you can add -d to the end of the command - it is NOT recommended to do this in cron.
Now you can add services via the main Services link in the navbar, or via the 'Add Service' link on the device's services page.
Note that some services (procs, inodes, load and similar) will always poll the local LibreNMS server it's running on, regardless of which device you add it to.
By default, the check-services script will collect all performance data that the Nagios script returns and display each datasource on a separate graph. LibreNMS expects scripts to return using Nagios convention for the response message structure: AEN200
However for some modules it would be better if some of this information was consolidated on a single graph. An example is the ICMP check. This check returns: Round Trip Average (rta), Round Trip Min (rtmin) and Round Trip Max (rtmax). These have been combined onto a single graph.
If you find a check script that would benefit from having some datasources graphed together, please log an issue on GitHub with the debug information from the script, and let us know which DS's should go together. Example below:
./check-services.php -d\n -- snip --\n Nagios Service - 26\n Request: /usr/lib/nagios/plugins/check_icmp localhost\n Perf Data - DS: rta, Value: 0.016, UOM: ms\n Perf Data - DS: pl, Value: 0, UOM: %\n Perf Data - DS: rtmax, Value: 0.044, UOM: ms\n Perf Data - DS: rtmin, Value: 0.009, UOM: ms\n Response: OK - localhost: rta 0.016ms, lost 0%\n Service DS: {\n \"rta\": \"ms\",\n \"pl\": \"%\",\n \"rtmax\": \"ms\",\n \"rtmin\": \"ms\"\n }\n OK u:0.00 s:0.00 r:40.67\n RRD[update /opt/librenms/rrd/localhost/services-26.rrd N:0.016:0:0.044:0.009]\n -- snip --\n
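As a hedged sketch of that convention (this is not LibreNMS's actual parser), the perfdata after the | in a plugin's response is a space-separated list of label=value[UOM][;warn;crit;min;max] tokens, which plain shell can split:

```shell
# Illustrative only: split Nagios-style perfdata into DS / value+UOM
# pairs like the ones shown above (the sample output line is made up)
out='OK - localhost: rta 0.016ms, lost 0%|rta=0.016ms;;;; pl=0%;;;;'
perf=${out#*|}               # keep everything after the first |
for token in $perf; do
    kv=${token%%;*}          # drop the warn/crit/min/max fields
    echo "${kv%%=*} ${kv#*=}"
done
```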
A service check is skipped when the associated device is not pingable, and an appropriate entry is added to the event log. A service check is still polled if its IP address parameter differs from the associated device's IP address, even when the associated device is not pingable.
To override the default logic and always poll service checks, you can disable ICMP testing for any device by switching Disable ICMP Test setting (Edit -> Misc) to ON.
Service checks will never be polled on disabled devices.
In most cases, only Nagios plugins that run against a remote host with the -H option are available as services. However, if your remote host is running the Check_MK agent, you may be able to use MRPE to monitor Nagios plugins that only execute locally as services.
For example, consider the fairly common check_cpu.sh Nagios plugin. If you added...
...to /etc/check_mk/mrpe.cfg on your remote host, you should be able to check its output by configuring a service using the check_mrpe script.
Add check_mrpe to the Nagios plugins directory on your LibreNMS server and make it executable.
In LibreNMS, add a new service to the desired device with the type mrpe.
Enter the IP address of the remote host and in parameters enter -a cpu_check (this should match the name used at the beginning of the line in the mrpe.cfg file).
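The mrpe.cfg line itself is elided earlier in this section; assuming check_cpu.sh sits in the usual Nagios plugins directory (an assumption), and given that MRPE entries take the form <name> <command>, a plausible entry would be:

```
# /etc/check_mk/mrpe.cfg -- sketch; the plugin path is an assumption
cpu_check /usr/lib/nagios/plugins/check_cpu.sh
```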
All installation steps assume a clean configuration - if you have an existing smokeping setup, you'll need to adapt these steps somewhat.
"},{"location":"Extensions/Smokeping/#install-and-integrate-smokeping-backend-rhel-centos-and-alike","title":"Install and integrate Smokeping Backend - RHEL, CentOS and alike","text":"
Smokeping is available via EPEL, which, if you're running LibreNMS, you probably already have. If you want to do something like run Smokeping on a separate host and ship data via RRDCached, here's the install command:
Once installed, you will need a cron script to make sure that the configuration file is kept updated. You can find an example in misc/librenms-smokeping-rhel.example. Put this into /etc/cron.d/hourly, and mark it executable:
*** Targets ***\n\nprobe = FPing\n\nmenu = Top\ntitle = Network Latency Grapher\nremark = Welcome to the SmokePing website of <b>Insert Company Name Here</b>. \\\n Here you will learn all about the latency of our network.\n\n@include /etc/smokeping/librenms-targets.conf\n
Note there may be other stanzas (possibly *** Slaves ***) between the *** Probes *** and *** Targets *** stanzas - leave these intact.
Leave everything else untouched. If you need to add other configuration, make sure it comes after the LibreNMS configuration, and keep in mind that Smokeping does not allow duplicate modules, and cares about the configuration file sequence.
Once you're happy, manually kick off the cron once, then enable and start smokeping:
"},{"location":"Extensions/Smokeping/#install-and-integrate-smokeping-backend-ubuntu-debian-and-alike","title":"Install and integrate Smokeping Backend - Ubuntu, Debian and alike","text":"
Smokeping is available via the default repositories.
sudo apt-get install smokeping\n
Once installed, you will need a cron script to make sure that the configuration file is kept updated. You can find an example in misc/librenms-smokeping-debian.example. Put this into /etc/cron.d/hourly, and mark it executable:
Strip everything from /etc/smokeping/config.d/Targets and replace with:
*** Targets ***\n\nprobe = FPing\n\nmenu = Top\ntitle = Network Latency Grapher\nremark = Welcome to the SmokePing website of <b>Insert Company Name Here</b>. \\\n Here you will learn all about the latency of our network.\n\n@include /etc/smokeping/config.d/librenms-targets.conf\n
Leave everything else untouched. If you need to add other configuration, make sure it comes after the LibreNMS configuration, and keep in mind that Smokeping does not allow duplicate modules, and cares about the configuration file sequence.
"},{"location":"Extensions/Smokeping/#configure-librenms-all-operating-systems","title":"Configure LibreNMS - All Operating Systems","text":"
dir should match the location that smokeping writes RRD's to
pings should match the default smokeping value (default 20)
probes should be the number of processes to spread pings over (default 2)
These settings can also be set in the Web UI.
"},{"location":"Extensions/Smokeping/#configure-smokepings-web-ui-optional","title":"Configure Smokeping's Web UI - Optional","text":"
This section covers the required web server configuration, for either Apache or Nginx.
LibreNMS does not need the Web UI - you can find the graphs in LibreNMS on the latency tab.
"},{"location":"Extensions/Smokeping/#apache-configuration-ubuntu-debian-and-alike","title":"Apache Configuration - Ubuntu, Debian and alike","text":"
Edit the General configuration file's Owner and contact, and cgiurl hostname details:
After creating the symlink, restart Apache with sudo systemctl restart apache2
You should be able to load the Smokeping web interface at http://yourhost/cgi-bin/smokeping.cgi
"},{"location":"Extensions/Smokeping/#nginx-configuration-rhel-centos-and-alike","title":"Nginx Configuration - RHEL, CentOS and alike","text":"
This section assumes you have configured LibreNMS with Nginx as specified in Configure Nginx.
Note, you need to install fcgiwrap for the CGI wrapper to interact with Nginx:
yum install fcgiwrap\n
Then create a new configuration file for fcgiwrap in /etc/nginx/fcgiwrap.conf
# Include this file on your nginx.conf to support debian cgi-bin scripts using\n# fcgiwrap\nlocation /cgi-bin/ {\n # Disable gzip (it makes scripts feel slower since they have to complete\n # before getting gzipped)\n gzip off;\n\n # Set the root to /usr/lib (inside this location this means that we are\n # giving access to the files under /usr/lib/cgi-bin)\n #root /usr/lib;\n root /usr/share/nginx;\n\n # Fastcgi socket\n fastcgi_pass unix:/var/run/fcgiwrap.socket;\n\n # Fastcgi parameters, include the standard ones\n include /etc/nginx/fastcgi_params;\n\n # Adjust non standard parameters (SCRIPT_FILENAME)\n fastcgi_param SCRIPT_FILENAME /usr/lib$fastcgi_script_name;\n} \n
Be sure to create the cgi-bin folder with the required permissions (755):
mkdir /usr/share/nginx/cgi-bin\n
Create fcgiwrap systemd service in /usr/lib/systemd/system/fcgiwrap.service
If images/js/css don't load, you might have to add
location ^~ /smokeping/css {\n alias /usr/share/smokeping/htdocs/css/;\n gzip off;\n}\nlocation ^~ /smokeping/js {\n alias /usr/share/smokeping/htdocs/js/;\n gzip off;\n}\nlocation ^~ /smokeping/images {\n alias /opt/librenms/rrd/smokeping/images;\n gzip off;\n}\n
After saving the configuration file, verify your Nginx configuration file syntax is OK with sudo nginx -t, then restart Nginx with sudo systemctl restart nginx
You should be able to load the Smokeping web interface at http://yourlibrenms/smokeping
"},{"location":"Extensions/Smokeping/#nginx-configuration-ubuntu-debian-and-alike","title":"Nginx Configuration - Ubuntu, Debian and alike","text":"
This section assumes you have configured LibreNMS with Nginx as specified in Configure Nginx.
Note, you need to install fcgiwrap for the CGI wrapper to interact with Nginx:
apt install fcgiwrap\n
Then configure Nginx with the default configuration
After saving the configuration file, verify your Nginx configuration file syntax is OK with sudo nginx -t, then restart Nginx with sudo systemctl restart nginx
You should be able to load the Smokeping web interface at http://yourlibrenms/smokeping
You can use the purpose-made htpasswd utility included in the apache2-utils package (Nginx password files use the same format as Apache). You can install it on Ubuntu with
apt install apache2-utils\n
After that, you need to create a password for your user:
htpasswd -c /etc/nginx/.htpasswd USER\n
You can verify your user and password with
cat /etc/nginx/.htpasswd\n
Then you just need to add the auth_basic parameters to your config:
location ^~ /smokeping/ {\n alias /usr/share/smokeping/www/;\n index smokeping.cgi;\n gzip off;\n auth_basic \"Private Property\";\n auth_basic_user_file /etc/nginx/.htpasswd;\n }\n
There is a problem writing to the RRD directory. This is somewhat out of scope of LibreNMS, but make sure that file permissions and SELinux labels allow the smokeping user to write to the directory.
If you're using RRDCacheD, make sure that the permissions are correct there too, and that if you're using -B that the smokeping RRD's are inside the base directory; update the smokeping rrd directory if required.
It's not recommended to run RRDCached without the -B switch.
"},{"location":"Extensions/Smokeping/#share-rrdcached-with-librenms","title":"Share RRDCached with LibreNMS","text":"
Move the RRD's and give smokeping access rights to the LibreNMS RRD directory:
If you have SELinux on, see next section before starting smokeping. Finally restart the smokeping service:
sudo systemctl start smokeping\n
Remember to update your config with the new locations.
"},{"location":"Extensions/Smokeping/#configure-selinux-to-allow-smokeping-to-write-in-librenms-directory-on-centos-rhel","title":"Configure SELinux to allow smokeping to write in LibreNMS directory on Centos / RHEL","text":"
If you are using RRDCached with the -B switch and smokeping RRD's inside the LibreNMS RRD base directory, you can install this SELinux profile:
"},{"location":"Extensions/Smokeping/#probe-fping-missing-missing-from-the-probes-section","title":"Probe FPing missing from the probes section","text":"
Take a look at the instructions again - something isn't correct in your configuration.
"},{"location":"Extensions/Smokeping/#section-or-variable-already-exists","title":"Section or variable already exists","text":"
Most likely, content wasn't fully removed from the *** Probes *** / *** Targets *** stanzas as instructed. If you're trying to integrate LibreNMS, smokeping, and another source of configuration, you're probably trying to redefine a module (e.g. '+ FPing') or stanza more than once. Otherwise, look again at the instructions.
"},{"location":"Extensions/Smokeping/#mandatory-variable-probe-not-defined","title":"Mandatory variable 'probe' not defined","text":"
The target block must have a default probe. If you follow the instructions you will have one. If you're trying to integrate LibreNMS, smokeping and another source of configuration, you need to make sure there are no duplicate or missing definitions.
"},{"location":"Extensions/Smokeping/#file-usrsbinsendmail-does-not-exist","title":"File '/usr/sbin/sendmail' does not exist","text":"
If you got this error at the end of the installation, simply edit or comment out the sendmail entry in the configuration:
To run LibreNMS under a subdirectory on your Apache server, the directives for the LibreNMS directory are placed in the base server configuration, or in a virtual host container of your choosing. If using a virtual host, place the directives in the file where the virtual host is configured. If using the base server on RHEL distributions (CentOS, Scientific Linux, etc.) the directives can be placed in /etc/httpd/conf.d/librenms.conf. For Debian distributions (Ubuntu, etc.) place the directives in /etc/apache2/sites-available/default.
#These directives can be inside a virtual host or in the base server configuration\nAllowEncodedSlashes On\nAlias /librenms /opt/librenms/html\n\n<Directory \"/opt/librenms/html\">\n AllowOverride All\n Options FollowSymLinks MultiViews\n</Directory>\n
The RewriteBase directive in html/.htaccess must be rewritten to reference the subdirectory name. Assuming LibreNMS is running at http://example.com/librenms/, you will need to change RewriteBase / to RewriteBase /librenms.
Finally, set APP_URL=/librenms/ in .env and lnms config:set base_url '/librenms/'.
This section explains different ways to receive and process syslog with LibreNMS. Except for Graylog, all syslog variants store their logs in the LibreNMS database. You need to enable the syslog extension in config.php:
$config['enable_syslog'] = 1;\n
A syslog integration gives you a centralized view of information within LibreNMS (device view, traps, events). Furthermore, you can trigger alerts based on syslog messages (see rule collections)."},{"location":"Extensions/Syslog/#traditional-syslog-server","title":"Traditional Syslog server","text":""},{"location":"Extensions/Syslog/#syslog-ng","title":"syslog-ng","text":"Debian / UbuntuCentOS / RedHat
apt-get install syslog-ng-core\n
yum install syslog-ng\n
Once syslog-ng is installed, create the config file (/etc/syslog-ng/conf.d/librenms.conf) and paste the following:
If no messages make it to the syslog tab in LibreNMS, chances are you are experiencing an issue with SELinux. If so, create a file mycustom-librenms-rsyslog.te, with the following content:
module mycustom-librenms-rsyslog 1.0;\n\nrequire {\n type syslogd_t;\n type httpd_sys_rw_content_t;\n type ping_exec_t;\n class process execmem;\n class dir { getattr search write };\n class file { append getattr execute open read };\n}\n\n#============= syslogd_t ==============\nallow syslogd_t httpd_sys_rw_content_t:dir { getattr search write };\nallow syslogd_t httpd_sys_rw_content_t:file { open read append getattr };\nallow syslogd_t self:process execmem;\nallow syslogd_t ping_exec_t:file execute;\n
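To activate the policy, the .te file has to be compiled and loaded. The commands below use the standard SELinux tooling (checkpolicy and policycoreutils packages) and must be run as root:

```shell
# Compile the type enforcement file into a policy module
checkmodule -M -m -o mycustom-librenms-rsyslog.mod mycustom-librenms-rsyslog.te
# Package the module
semodule_package -o mycustom-librenms-rsyslog.pp -m mycustom-librenms-rsyslog.mod
# Install it into the running policy
semodule -i mycustom-librenms-rsyslog.pp
```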
If you prefer rsyslog, here are some hints on how to get it working.
Add the following to your rsyslog config somewhere (could be at the top of the file in the step below, could be in rsyslog.conf if you are using remote logs for something else on this host)
# Listen for syslog messages on UDP:514\n$ModLoad imudp\n$UDPServerRun 514\n
Create a file called /etc/rsyslog.d/30-librenms.conf and add the following, depending on your version of rsyslog.
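A sketch in the legacy rsyslog syntax, based on the upstream example — the exact template properties can differ between rsyslog versions, so verify them against your install:

```conf
# /etc/rsyslog.d/30-librenms.conf (sketch, legacy syntax)
$template librenms,"%fromhost%||%syslogfacility%||%syslogpriority%||%syslogseverity%||%syslogtag%||%$year%-%$month%-%$day% %timegenerated:8:25%||%msg%||%programname%\n"

# Pipe every message to the LibreNMS syslog handler using the template above
*.* ^/opt/librenms/syslog.php;librenms
```

Restart rsyslog afterwards so the new rule takes effect.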
If your rsyslog server is receiving messages relayed by another syslog server, you may try replacing %fromhost% with %hostname%, since fromhost is the host the message was received from, not the host that generated the message. The fromhost property is preferred as it avoids problems caused by devices sending incorrect hostnames in syslog messages.
Next, create a logstash configuration file (ex. /etc/logstash/conf.d/logstash-simple.conf), and add the following:
input {\nsyslog {\n port => 514\n }\n}\n\n\noutput {\n exec {\n command => \"echo `echo %{host},,,,%{facility},,,,%{priority},,,,%{severity},,,,%{facility_label},,,,``date --date='%{timestamp}' '+%Y-%m-%d %H:%M:%S'``echo ',,,,%{message}'``echo ,,,,%{program} | sed 's/\\x25\\x7b\\x70\\x72\\x6f\\x67\\x72\\x61\\x6d\\x7d/%{facility_label}/'` | sed 's/,,,,/||/g' | /opt/librenms/syslog.php &\"\n }\n elasticsearch {\n hosts => [\"10.10.10.10:9200\"]\n index => \"syslog-%{+YYYY.MM.dd}\"\n }\n}\n
Replace 10.10.10.10 with your primary elasticsearch server IP, and set the incoming syslog port. Alternatively, if you already have a logstash config file that works except for the LibreNMS export, take only the \"exec\" section from output and add it.
"},{"location":"Extensions/Syslog/#remote-logstash-or-any-json-source","title":"Remote Logstash (or any json source)","text":"
If you have a large Logstash / Elastic installation for collecting and filtering syslogs, you can simply pass the relevant logs as JSON to the LibreNMS API "syslog sink". This variant may be more flexible and secure in transport, and it does not require any major changes to an existing ELK setup. You can also pass simple JSON key/value messages from any kind of application or script (example below) to this sink.
For long-term or advanced aggregation searches you might still use Kibana/Grafana/Graylog etc. It is recommended to keep $config['syslog_purge'] short.
A minimal Logstash http output configuration can look like this:
output {\n....\n # feed it to LibreNMS\n http {\n http_method => \"post\"\n url => \"https://sink.librenms.org/api/v0/syslogsink\" # replace with your librenms host\n format => \"json_batch\" # put multiple syslogs in one HTTP message\n retry_failed => false # if true, logstash blocks while the API is unavailable, be careful!\n headers => [\"X-Auth-Token\",\"xxxxxxxLibreNMSApiToken\"]\n\n # optional if your mapping is not already done before or does not match. \"msg\" and \"host\" are mandatory.\n # you might also use the clone {} function to duplicate your log stream for dedicated log filtering/mapping etc.\n # mapping => {\n # \"host\" => \"%{host}\"\n # \"program\" => \"%{program}\"\n # \"facility\" => \"%{facility_label}\"\n # \"priority\" => \"%{syslog5424_pri}\"\n # \"level\" => \"%{facility_label}\"\n # \"tag\" => \"%{topic}\"\n # \"msg\" => \"%{message}\"\n # \"timestamp\" => \"%{@timestamp}\"\n # }\n }\n}\n
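Any script can feed the same sink. A hypothetical curl invocation — replace the hostname and token with your own; "host" and "msg" are the only mandatory keys:

```shell
curl -X POST \
  -H 'X-Auth-Token: YOURAPITOKENHERE' \
  -H 'Content-Type: application/json' \
  -d '{"host": "switch01.example.com", "msg": "Interface eth0 down", "program": "ifmgr"}' \
  https://librenms.example.com/api/v0/syslogsink
```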
Below are sample configurations for a variety of clients. You should understand the config before using it as you may want to make some slight changes. Further configuration hints may be found in the file Graylog.md.
Replace librenms.ip with IP or hostname of your LibreNMS install.
Replace any variables with the relevant information."},{"location":"Extensions/Syslog/#syslog","title":"syslog","text":"
set system syslog host librenms.ip authorization any\nset system syslog host librenms.ip daemon any\nset system syslog host librenms.ip kernel any\nset system syslog host librenms.ip user any\nset system syslog host librenms.ip change-log any\nset system syslog host librenms.ip source-address <management ip>\nset system syslog host librenms.ip exclude-hostname\nset system syslog time-format\n
info-center loghost librenms.ip\ninfo-center timestamp debugging short-date without-timezone // Optional\ninfo-center timestamp log short-date // Optional\ninfo-center timestamp trap short-date // Optional\n// The following filters are optional, especially if the device has a public IP and you don't want to get a lot of ACL deny messages\ninfo-center filter-id bymodule-alias VTY ACL_DENY\ninfo-center filter-id bymodule-alias SSH SSH_FAIL\ninfo-center filter-id bymodule-alias SNMP SNMP_FAIL\ninfo-center filter-id bymodule-alias SNMP SNMP_IPLOCK\ninfo-center filter-id bymodule-alias SNMP SNMP_IPUNLOCK\ninfo-center filter-id bymodule-alias HTTP ACL_DENY\n
log date-format iso // Required so syslog-ng/LibreNMS can correctly interpret the log message formatting.\nlog host x.x.x.x\nlog host x.x.x.x level <errors> // Required. A log-level must be specified for syslog messages to send.\nlog host x.x.x.x level notices program imish // Useful for seeing all commands executed by users.\nlog host x.x.x.x level notices program imi // Required for Oxidized Syslog hook log message.\nlog host source <eth0>\n
If you have permitted udp and tcp 514 through any firewall then that should be all you need. Logs should start appearing and displayed within the LibreNMS web UI.
Trigger external scripts based on specific syslog patterns being matched with syslog hooks. Add the following to your LibreNMS config.php to enable hooks:
$config['enable_syslog_hooks'] = 1;\n
The below are some example hooks to call an external script in the event of a configuration change on Cisco ASA, IOS, NX-OS and IOS-XR devices. Add to your config.php file to enable.
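For illustration, an IOS configuration-change hook might look like the following in config.php. The regex and script path follow the upstream Oxidized notification example; adjust both for your environment:

```php
// Run a script whenever a Cisco IOS device logs a configuration change
$config['os']['ios']['syslog_hook'][] = [
    'regex'  => '/%SYS-(SW[0-9]-)?5-CONFIG_I/',
    'script' => '/opt/librenms/scripts/syslog-notify-oxidized.php',
];
```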
Note: At least software version 5.4.8-2.1 is required. log host x.x.x.x level notices program imi may also be required depending on configuration. This is to ensure the syslog hook log message gets sent to the syslog server.
The cleanup is run by daily.sh and any entries over X days old are automatically purged. Values are in days. See the Clean Up Options documentation for more information.
"},{"location":"Extensions/Syslog/#matching-syslogs-to-hosts-with-different-names","title":"Matching syslogs to hosts with different names","text":"
In some cases, you may get logs that aren't being associated with the device in LibreNMS. For example, in LibreNMS the device is known as \"ne-core-01\", and that's how DNS resolves. However, the received syslogs are for \"loopback.core-nw\".
To fix this issue, you can configure LibreNMS to translate the incoming syslog hostname into another hostname, so that the logs get associated with the correct device.
Over the last couple of years, the primary attack vector for internet accounts has been static passwords. Therefore static passwords are no longer sufficient to protect unauthorized access to accounts. Two Factor Authentication adds a variable part in authentication procedures. A user is now required to supply a changing 6-digit passcode in addition to their password to obtain access to the account.
LibreNMS has an RFC 4226 conformant implementation of both Time- and Counter-based One-Time Passwords. It also allows the administrator to configure a throttle time to enforce after 3 failed attempts. Unlike the RFC 4226 suggestion, this throttle time will not stack with the number of failures.
In general, these two types do not differ in algorithmic terms. The types only differ in the variable being used to derive the passcodes from. The underlying HMAC-SHA1 remains the same for both types, security advantages or disadvantages of each are discussed further down.
Like the name suggests, this type uses the current time, or a subset of it, to generate the passcodes. These passcodes rely solely on the secrecy of their secret key. An attacker only needs to guess that secret key; the other variable part is any given time, presumably the time of login. RFC 4226 suggests a resynchronization attempt in case of passcode mismatch, giving the attacker a range of up to +/- 3 minutes in which to create passcodes.
This type uses an internal counter that needs to be in sync with the server's counter to successfully authenticate the passcodes. The main advantage over time-based OTP is that the attacker needs to know not only the secret key but also the server's counter in order to create valid passcodes. RFC 4226 suggests a resynchronization attempt in case of passcode mismatch, giving the attacker a range of up to +4 increments from the actual counter in which to create passcodes.
Enable 'Two-Factor' Via Global Settings in the Web UI under Authentication -> General Authentication Settings.
Optionally enter a throttle timer in seconds. This will unlock an account after this time once it has failed 3 authentication attempts. Set to 0 (default) to disable this feature, meaning accounts will remain locked after 3 attempts and will need an administrator to clear them.
If Two-Factor is enabled, the Settings -> Manage Users grid will show a '2FA' column containing a green tick for users with active 2FA.
There is no functionality to mandate 2FA for users.
If a user has failed 3 attempts, their account can be unlocked or 2FA disabled by editing the user from the Manage Users table.
If a throttle timer is set, it will unlock accounts after this time. If set to the default of 0, accounts will need to be manually unlocked by an administrator after 3 failed attempts.
Locked accounts will report to the user, stating to wait for the throttle time period, or to contact the administrator if no timer is set.
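The equivalent config.php settings can be sketched as follows — the setting names mirror the long-standing config options, but verify them on your install:

```php
$config['twofactor'] = true;      // enable Two-Factor authentication
$config['twofactor_lock'] = 300;  // throttle timer in seconds; 0 = manual unlock only
```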
This document explains how to install Varnish Reverse Proxy for LibreNMS.
Varnish is caching software that sits logically between an HTTP client and an HTTP server. Varnish caches HTTP responses from the HTTP server. If an HTTP request can not be responded to by the Varnish cache it directs the request to the HTTP Server. This type of HTTP caching is called a reverse proxy server. Caching your HTTP server can decrease page load times significantly.
In this example we will assume your Apache 2.4.X HTTP server is working and configured to process HTTP requests on port 80. If not, please see Installing LibreNMS
Using a web browser, navigate to your server's IP address on port 6081 (e.g. 127.0.0.1:6081). You should see a Varnish error message; this shows that Varnish is working. Example error message:
Now we need to configure Varnish to listen to HTTP requests on port 80 and relay those requests to the Apache HTTP server on port 8080 (see block diagram).
Stop Varnish.
systemctl stop varnish\n
Create a back-up of varnish.params just in case you make a mistake.
# Set this to 1 to make systemd reload try to switch vcl without restart.\nRELOAD_VCL=1\n\n# Main configuration file. You probably want to change it.\nVARNISH_VCL_CONF=/etc/varnish/librenms.vcl\n\n# Default address and port to bind to. Blank address means all IPv4\n# and IPv6 interfaces, otherwise specify a host name, an IPv4 dotted\n# quad, or an IPv6 address in brackets.\nVARNISH_LISTEN_ADDRESS=192.168.1.10\nVARNISH_LISTEN_PORT=80\n\n# Admin interface listen address and port\nVARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1\nVARNISH_ADMIN_LISTEN_PORT=6082\n\n# Shared secret file for admin interface\nVARNISH_SECRET_FILE=/etc/varnish/secret\n\n# Backend storage specification, see Storage Types in the varnishd(5)\n# man page for details.\nVARNISH_STORAGE=\"malloc,512M\"\n\n# Default TTL used when the backend does not specify one\nVARNISH_TTL=120\n\n# User and group for the varnishd worker processes\nVARNISH_USER=varnish\nVARNISH_GROUP=varnish\n\n# Other options, see the man page varnishd(1)\nDAEMON_OPTS=\"-p thread_pool_min=5 -p thread_pool_max=500 -p thread_pool_timeout=300\"\n
"},{"location":"Extensions/Varnish/#configure-apache-for-varnish","title":"Configure Apache for Varnish","text":"
Edit librenms.conf and modify the Apache Virtual Host listening port.
Modify: <VirtualHost *:80> to <VirtualHost *:8080>
vim /etc/httpd/conf.d/librenms.conf\n
Varnish can not share a port with Apache. Change the Apache listening port to 8080.
Paste example VCL config, read config comments for more information.
#\n# This is an example VCL file for Varnish.\n#\n# It does not do anything by default, delegating control to the\n# builtin VCL. The builtin VCL is called when there is no explicit\n# return statement.\n#\n# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/\n# and http://varnish-cache.org/trac/wiki/VCLExamples for more examples.\n\n# Marker to tell the VCL compiler that this VCL has been adapted to the\n# new 4.0 format.\nvcl 4.0;\n\n# Default backend definition. Set this to point to your Apache server.\nbackend librenms {\n .host = \"127.0.0.1\";\n .port = \"8080\";\n}\n\n# In this example our objective is to cache static content with Varnish and temporarily\n# cache dynamic content in the client web browser.\n\nsub vcl_recv {\n # HTTP requests from client web browser.\n # Here we remove any cookie HTTP requests for the 'librenms.domain.net' host\n # containing the matching file extensions. We don't have to match by host if you\n # only have LibreNMS running on Apache.\n # If the cookies are not removed from the HTTP request then Varnish will not cache\n # the files. 
The 'else' branch is set to 'pass', i.e. don't cache anything that doesn't\n # match.\n\n if (req.http.host ~ \"^librenms.domain.net\") {\n set req.backend_hint = librenms;\n if (req.url ~ \"\\.(png|gif|jpg|jpeg|ico|pdf|js|css|svg|eot|otf|woff|woff2|ttf)$\") {\n unset req.http.Cookie;\n }\n\n else {\n return(pass);\n }\n }\n}\n\nsub vcl_backend_response {\n # 'sub vcl_backend_response' is the same function as 'sub vcl_fetch' in Varnish 3, however,\n # the syntax is slightly different.\n # This function happens after we read the response headers from the backend (Apache).\n # The first block 'if (bereq.url ~ \"\\' removes cookies from the Apache HTTP responses\n # that match the file extensions between the quotes, and caches the files for 24 hours.\n # This assumes you update LibreNMS once a day, otherwise restart Varnish to clear the cache.\n # The second block 'if (bereq.url ~ \"^/' removes the Pragma no-cache statements and sets the age\n # of how long the client browser will cache the matching URLs.\n # LibreNMS graphs are updated every 300 seconds, so 'max-age=300' is set to match this behavior.\n # We could cache these URLs in Varnish but it would add to the complexity of the config.\n\n if (bereq.http.host ~ \"^librenms.domain.net\") {\n if (bereq.url ~ \"\\.(png|gif|jpg|jpeg|ico|pdf|js|css|svg|eot|otf|woff|woff2|ttf)$\") {\n unset beresp.http.Set-cookie;\n set beresp.ttl = 24h;\n }\n\n # All alternatives must be inside a single regex; a bare string after || would not match the URL.\n if (bereq.url ~ \"^/(graph.php|device/|iftype/|customers/|health/|apps/|plugin$|alert$|eventlog/|graphs/|ports/)\") {\n unset beresp.http.Pragma;\n set beresp.http.Cache-Control = \"max-age=300\";\n }\n }\n}\n\nsub vcl_deliver {\n # Happens when we have all the pieces we need, and are about to send the\n # response to the client.\n # You can do accounting or modifying the final object here.\n\n return (deliver);\n}\n
Reload rules to remove the temporary port rule we added earlier.
firewall-cmd --reload\n
Varnish caching does not take effect immediately. You will need to browse the LibreNMS website to build up the cache.
Use the command varnishstat to monitor Varnish caching. Over time you should see 'MAIN.cache_hit' and 'MAIN.client_req' increase. With the above VCL the hit to request ratio is approximately 84%.
The Network Maps and Dependency Maps all use a common configuration for the vis.js library, which affects the way the maps are rendered, as well as the way that users can interact with the maps. This configuration can be adjusted by following the instructions below.
This link will show you all the options and explain what they do.
You may also access the dynamic configuration interface example here from within LibreNMS by adding the following to config.php
You may want to disable the automatic page refresh while you're tweaking your configuration, as the refresh will reset the dynamic configuration UI to the values currently saved in config.php This can be done by clicking on the Settings Icon then Refresh Pause.
Once you've achieved your desired map appearance, click the generate options button at the bottom to be given the necessary parameters to add to your config.php file. You will need to paste the generated config into config.php; the format will need to look something like this. Note that the configurator will output the config prefixed with var options; you will need to strip that out, and at the end of the config you need to add }'; (see the example below).
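As a hypothetical illustration of the final shape in config.php (the option values here are placeholders, not recommendations; note the var options prefix has been stripped and the closing }'; added):

```php
$config['network_map_vis_options'] = '{
    "interaction": {
        "dragNodes": true,
        "zoomView": true
    },
    "physics": {
        "barnesHut": {
            "gravitationalConstant": -2000
        }
    }
}';
```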
Extract it to your LibreNMS plugins directory (/opt/librenms/html/plugins) so that you end up with something like /opt/librenms/html/plugins/Weathermap/. The best way to do this is via git: go to your install's /opt/librenms/html/plugins directory and enter:
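The git step can be sketched as follows, assuming the plugin lives in the librenms-plugins organisation on GitHub:

```shell
cd /opt/librenms/html/plugins
git clone https://github.com/librenms-plugins/Weathermap.git
```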
You should now see Weathermap under Overview -> Plugins -> Weathermap. When you create a map, click Map Style, ensure Overlib is selected for HTML Style, and click submit. Also ensure you set an output image filename and output HTML filename in Map Properties. We recommend using the output folder, as it is excluded from git updates (i.e. use output/mymap.png and output/mymap.html).
Optional: If your install is in another directory than standard, set $basehref within map-poller.php.
Automatically generate weathermaps from a LibreNMS database using WeatherMapper.
"},{"location":"Extensions/Weathermap/#adding-your-network-weathermaps-to-the-dashboards","title":"Adding your Network Weathermaps to the Dashboards","text":"
Once you have created your Network Weather Map you can add it to a dashboard page by doing the following.
The World Map widget requires you to have properly formatted addresses in sysLocation or the sysLocation override. As part of the standard poller, these addresses will be geocoded by the configured geocoding engine and stored in the database.
Location resolution happens as follows:
If device['location'] contains [lat, lng] (note the square brackets), that is used
If there is a location override for the device in the WebUI and it contains [lat, lng] (note the square brackets), that is used.
Otherwise, LibreNMS attempts to resolve lat, lng using the geocoding engine configured with lnms config:set geoloc.engine
Properly formatted addresses in sysLocation or sysLocation override, under device settings.
Example:
[40.424521, -86.912755]\n
or
1100 Congress Ave, Austin, TX 78701 (3rd floor cabinet)\n
Information inside parentheses is ignored during GEO lookup
Initial Latitude / Longitude: The map will be centered on those coordinates.
Initial Zoom: Initial zoom of the map. More information about zoom levels.
Grouping radius: Markers are grouped by area. This value defines the maximum size of grouping areas.
Show devices: Show devices based on status.
Example Settings:
"},{"location":"Extensions/World-Map/#device-overview-world-map-settings","title":"Device Overview World Map Settings","text":"
If a device has a location with a valid latitude and longitude, the device overview page will have a panel showing the device on a world map. The following settings affect this map:
# Does the world map start opened, or does the user need to click to view it?\nlnms config:set device_location_map_open false\n# Do we show all other devices on the map as well?\nlnms config:set device_location_map_show_devices false\n# Do we show a network map based on device dependencies?\nlnms config:set device_location_map_show_device_dependencies false\n
lnms config:set map.engine leaflet\nlnms config:set leaflet.default_lat \"51.981074\"\nlnms config:set leaflet.default_lng \"5.350342\"\nlnms config:set leaflet.default_zoom 8\n# Device grouping radius in KM default 80KM\nlnms config:set leaflet.group_radius 1\n# Enable network map on world map\nlnms config:set network_map_show_on_worldmap true\n# Use CDP/LLDP for network map, or device dependencies\nlnms config:set network_map_worldmap_link_type xdp/depends\n# Do not show devices that have notifications disabled\nlnms config:set network_map_worldmap_show_disabled_alerts false\n
Further custom options are available to load different world map tiles, and to set the default coordinates and zoom level of the map. An example of this is:
Your metric path can be prefixed if required, otherwise the metric path for Graphite will be in the form of hostname.measurement.fieldname, interfaces will be stored as hostname.ports.ifName.fieldname.
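Enabling the Graphite datastore can be sketched with lnms as follows — the host, port and prefix values here are examples (2003 is the usual carbon plaintext port):

```shell
lnms config:set graphite.enable true
lnms config:set graphite.host 127.0.0.1
lnms config:set graphite.port 2003
lnms config:set graphite.prefix librenms
```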
The same data that is stored within rrd will be sent to Graphite and recorded. You can then create graphs within Grafana to display the information you need.
"},{"location":"Extensions/metrics/InfluxDB/","title":"Enabling support for InfluxDB","text":"
Before we get started, it is important that you know and understand that InfluxDB support is currently alpha at best. All it provides is the sending of data to an InfluxDB install. Due to the constant changes being made to InfluxDB itself, we cannot guarantee that your data will be OK, so enabling this support is at your own risk!
No credentials are needed if you don't use InfluxDB authentication.
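A sketch of the relevant settings — the values are examples; leave the credentials unset if authentication is disabled:

```shell
lnms config:set influxdb.enable true
lnms config:set influxdb.host 127.0.0.1
lnms config:set influxdb.port 8086
lnms config:set influxdb.db librenms
lnms config:set influxdb.username admin
lnms config:set influxdb.password mypassword
```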
The same data that is stored within rrd will be sent to InfluxDB and recorded. You can then create graphs within Grafana to display the information you need.
"},{"location":"Extensions/metrics/InfluxDBv2/","title":"Enabling support for InfluxDBv2","text":"
Before we get started, it is important that you know and understand that InfluxDBv2 support is currently alpha at best. All it provides is the sending of data to an InfluxDBv2 bucket. Due to the constant changes being made to InfluxDB itself, we cannot guarantee that your data will be OK, so enabling this support is at your own risk!
It is also important to understand that this datastore only supports the InfluxDBv2 API used in InfluxDB version 2.0 or higher. If you are looking to send data to any other version of InfluxDB, then you should use the InfluxDB datastore instead.
The same data stored within rrd will be sent to InfluxDB and recorded. You can then create graphs within Grafana or InfluxDB to display the information you need.
Please note that polling will slow down when the poller isn't able to reach or write data to InfluxDBv2.
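A sketch of enabling the v2 datastore; the option names below (url/bucket/organization/token) mirror what the v2 API requires, but treat them as assumptions to verify against your install's settings page:

```shell
lnms config:set influxdbv2.enable true
lnms config:set influxdbv2.url http://127.0.0.1:8086
lnms config:set influxdbv2.bucket librenms
lnms config:set influxdbv2.organization monitoring
lnms config:set influxdbv2.token YOURINFLUXTOKEN
```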
"},{"location":"Extensions/metrics/OpenTSDB/","title":"Enabling support for OpenTSDB","text":"
This module sends all metrics to OpenTSDB server. You need something like Grafana for graphing.
The same data as that stored within rrd will be sent to OpenTSDB and recorded. You can then create graphs within Grafana to display the information you need.
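Enabling the OpenTSDB datastore can be sketched as follows — host and port are examples (4242 is OpenTSDB's default port):

```shell
lnms config:set opentsdb.enable true
lnms config:set opentsdb.host 127.0.0.1
lnms config:set opentsdb.port 4242
```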
"},{"location":"Extensions/metrics/Prometheus/","title":"Enabling support for Prometheus","text":"
Please be aware that Prometheus support is alpha at best. It hasn't been extensively tested and is still in development. All it provides is the sending of data to a Prometheus PushGateway. Please be careful when enabling this support; you use it at your own risk!
"},{"location":"Extensions/metrics/Prometheus/#requirements-older-versions-may-work-but-havent-been-tested","title":"Requirements (older versions may work but haven't been tested)","text":"
Prometheus >= 2.0
PushGateway >= 0.4.0
Grafana
PHP-CURL
The setup of the above is completely out of scope here and we aren't really able to provide any help with this side of things.
"},{"location":"Extensions/metrics/Prometheus/#what-you-dont-get","title":"What you don't get","text":"
Pretty graphs: this is why, at present, you need Grafana. You need to build your own graphs within Grafana.
Support for Prometheus or Grafana: we highly recommend that you have some level of experience with these.
RRD will continue to function as normal so LibreNMS itself should continue to function as normal.
The same data that is stored within rrd will be sent to Prometheus and recorded. You can then create graphs within Grafana to display the information you need.
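Enabling the Prometheus datastore can be sketched as follows — the PushGateway URL and job name are examples for your own environment:

```shell
lnms config:set prometheus.enable true
lnms config:set prometheus.url http://pushgateway.example.com:9091
lnms config:set prometheus.job librenms
```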
LibreNMS wouldn't be what it is today without the use of some other amazing projects. We list below what we make use of including the license compliance.
"},{"location":"General/Acknowledgement/#3rd-party-gplv3-compliant","title":"3rd Party GPLv3 Compliant","text":"
Bootstrap: MIT
Font Awesome: MIT License
Jquery Bootgrid: MIT License
Pace: Open License
Twitter typeahead: Open License
Vis: MIT / Apache 2.0
TCPDF: LGPLv3
Bootstrap 3 Datepicker:MIT
Bootstrap Dropdown Hover Plugin: MIT
Bootstrap Switch: Apache 2.0
Handlebars: Open License
Cycle2: MIT/GPL
Jquery: MIT
Jquery UI: MIT
Jquery QRCode: MIT
Mktree: Open License
Moment: MIT
Tag Manager: MIT
TW Sack: GPLv3
Gridster: MIT
Pure PHP radius class: GPLv3
GeSHi - Generic Syntax Highlighter: GPLv2+
MalaysiaMap.svg - By Exiang CC BY 3.0, via Wikimedia Commons
Code for UBNT Devices Mark Gibbons mgibbons@oemcomp.com Initial code base submitted via PR721
Jquery LazyLoad: MIT License
influxdb-php: MIT License
influxdb-client-php: MIT License
HTML Purifier: LGPL v2.1
Symfony Yaml: MIT
PHPMailer: LGPL v2.1
pbin: GPLv2 (or later - see script header)
CorsSlim: MIT
Confluence HTTP Authenticator
Graylog SSO Authentication Plugin
Select2: MIT License
JustGage: MIT
jQuery.extendext: MIT
doT: MIT
jQuery-queryBuilder: MIT
sql-parser: MIT (Currently a custom build is used)
"},{"location":"General/Acknowledgement/#3rd-party-gplv3-non-compliant","title":"3rd Party GPLv3 Non-compliant","text":"
"},{"location":"General/Callback-Stats-and-Privacy/","title":"Submitting Stats","text":""},{"location":"General/Callback-Stats-and-Privacy/#stats-data-and-your-privacy","title":"Stats data and your privacy","text":"
This document has been put together to explain what LibreNMS does when it calls back home to report some anonymous statistics.
Let's start off by saying, all of the code that processes the data and submits it is included in the standard LibreNMS branch you've installed, the code that accepts this data and in turn generates some pretty graphs is all open source and available on GitHub. Please feel free to review the code, comment on it and suggest changes / improvements. Also, don't forget - by default installations DO NOT call back home, you need to opt into this.
Above all we respect users privacy which is why this system has been designed like it has.
Now onto the bit you're interested in, what is submitted and what we do with that data.
"},{"location":"General/Callback-Stats-and-Privacy/#what-is-submitted","title":"What is submitted","text":"
All data is anonymous.
Generic statistics are taken from the database, these include things like device count, device type, device OS, port types, port speeds, port count and BGP peer count. Take a look at the code for full details.
Pairs of sysDescr and sysObjectID from devices, with a small amount of sanitization to prevent things like hostnames from being submitted.
We record version numbers of PHP, MySQL, Net-SNMP and RRDtool.
A random UUID is generated on your own install.
That's it!
Your IP isn't logged, even via our web service accepting the data. We don't need to know who you are so we don't ask.
"},{"location":"General/Callback-Stats-and-Privacy/#what-we-do-with-the-data","title":"What we do with the data","text":"
We store it, not for long - 3 months at the moment although this could change.
We use it to generate pretty graphs for people to see.
We use it to help prioritise issues and features that need to be worked on.
We use sysDescr and sysObjectID to create unit tests and improve OS discovery
"},{"location":"General/Callback-Stats-and-Privacy/#how-do-i-enable-stats-submission","title":"How do I enable stats submission?","text":"
If you're happy with all of this - please consider switching the call back system on, you can do this within the About LibreNMS page within your control panel. In the Statistics section you will find a toggle switch to enable / disable the feature. If you've previously had it switched on and want to opt out and remove your data, click the 'Clear remote stats' button and on the next submission all the data you've sent us will be removed!
"},{"location":"General/Callback-Stats-and-Privacy/#questions","title":"Questions?","text":""},{"location":"General/Callback-Stats-and-Privacy/#how-often-is-data-submitted","title":"How often is data submitted?","text":"
The data is submitted once a day by daily.sh, which runs via cron. If you disable daily.sh, then opting in will not have any effect.
"},{"location":"General/Callback-Stats-and-Privacy/#where-can-i-see-the-data-i-submitted","title":"Where can I see the data I submitted?","text":"
You can't see your raw data, but we collate all of the data together and provide a dynamic site so you can see the results of all contributed stats here
"},{"location":"General/Callback-Stats-and-Privacy/#i-want-my-data-removed","title":"I want my data removed.","text":"
That's easy, simply press 'Clear remote stats' in the About LibreNMS page of your control panel, the next time the call back script is run it will remove all the data we have.
"},{"location":"General/Callback-Stats-and-Privacy/#i-clicked-the-clear-remote-stats-button-by-accident","title":"I clicked the 'Clear remote stats' button by accident.","text":"
No problem, before daily.sh runs again - just opt back in, all of your existing data will stay.
Hopefully this answers the questions you might have on why and what we are doing here, if not, please pop into our discord server or community forum and ask any questions you like.
Bump phpseclib/phpseclib from 3.0.21 to 3.0.34 (#15600) - dependabot
"},{"location":"General/Changelog/#old-changelogs","title":"Old Changelogs","text":""},{"location":"General/Releases/","title":"Choosing a release","text":"
We try to ensure that breaking changes aren't introduced by utilising various automated code testing, syntax testing and unit testing, along with manual code review. However, bugs can and do get introduced, as does major refactoring to improve the quality of the code base.
We have two branches available for you to use. The default is the master branch.
Our master branch is our dev branch; it is actively committed to, and it's not uncommon for multiple commits to be merged in daily. As such, sometimes changes will be introduced which cause unintended issues. If this happens we are usually quick to fix or revert those changes.
We appreciate everyone that runs this branch, as you are in essence secondary testers to the automation and manual testing that is done during the merge stages.
You can configure your install (this is the default) to use this branch by setting lnms config:set update_channel master and ensuring you switch to the master branch with:
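Assuming the default install path, the branch switch can be sketched as:

```shell
cd /opt/librenms
git checkout master
```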
With this in mind, we provide a monthly stable release, which is released on or around the last Sunday of the month. Code pull requests (aside from bug fixes) are stopped in the days leading up to the release to ensure that we have a clean working branch at that point.
The changelog is also updated and will reference the release number and date so you can see what changes have been made since the last release.
To switch to using stable branches you can set lnms config:set update_channel release
This will pause updates until the next stable release, at that time LibreNMS will update to the stable release and continue to only update to stable releases. Downgrading is not supported on LibreNMS and will likely cause bugs.
Like any good software project, we take security seriously. However, bugs do make it into the software, along with those inherited from the history of the code base. It's how we deal with identified vulnerabilities that shows we take things seriously.
"},{"location":"General/Security/#securing-your-install","title":"Securing your install","text":"
As with any system of this nature, we highly recommend that you restrict access to the install via a firewall or VPN.
Once you have enabled HTTPS for your install, you should set SESSION_SECURE_COOKIE=true in your .env file. This will require cookies to be transferred over a secure protocol and help prevent any MitM (man-in-the-middle) attacks against them.
When using a reverse proxy, you may restrict the hosts allowed to forward headers to LibreNMS. By default this allows all proxies, due to legacy reasons.
Set APP_TRUSTED_PROXIES in your .env to an empty string, or to the addresses of the proxies allowed to forward headers.
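Put together, a hardened .env might contain lines like these (the proxy address is an example; substitute your own):

```
# /opt/librenms/.env -- example hardening entries
SESSION_SECURE_COOKIE=true
APP_TRUSTED_PROXIES=192.0.2.10
```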
Like anyone, we appreciate the work people put in to find flaws in software and welcome anyone to do so with LibreNMS, this will lead to better quality and more secure software for everyone.
If you think you've found a vulnerability and want to discuss it with some of the core team, you can contact us on Discord and we will endeavour to get back to you as quickly as we can; this is usually within 24 hours.
We are happy to attribute credit to the findings, but we ask that we're given a chance to patch any vulnerability before public disclosure so that our users can update as soon as a fix is available.
"},{"location":"General/Updating/","title":"Updating an Install","text":"
By default, LibreNMS is set to automatically update. If you have disabled this feature then you can perform a manual update.
LibreNMS performs updates on a daily basis by default. This can be disabled in the WebUI Global Settings under System -> Updates, or by using lnms:
Warning
You should never remove daily.sh from the cronjob! In addition to updating, it performs database cleanup and other maintenance processes.
settings/system/updates
lnms config:set update false\n
"},{"location":"General/Welcome-to-Observium-users/","title":"Welcome to Observium users","text":"
LibreNMS is a fork of Observium. The reason for the fork has nothing to do with Observium's move to community vs. paid versions. It is simply that we have different priorities and values to the Observium development team. We decided to fork (reluctantly) because we like using Observium, but we want to collaborate on a community-based project with like-minded IT professionals. See README.md and the references there for more information about the kind of community we're trying to promote.
LibreNMS was forked from the last GPL-licensed version of Observium.
Thanks to one of our users, Dan Brown, who has written a migration script, you can easily move your Observium install over to LibreNMS. This also takes care of moving from one CPU architecture to another. Give it a try :)
How LibreNMS will be different from Observium:
We will have an inclusive community, where it's OK to ask stupid questions, and OK to ask for things that aren't on the roadmap. If you'd like to see something added, add or comment on the relevant issue in our Community forum.
Development decisions will be community-driven. We want to make software that fulfills its users' needs.
There are no plans for a paid version, and we don't anticipate this ever changing.
There are no current plans for paid support, but this may be added later if there is sufficient demand.
We use git for version control and GitHub for hosting to make it as easy and painless as possible to create forked or private versions.
Reasons why you might want to use Observium instead of LibreNMS:
You have a financial investment in Observium and aren't concerned about community contributions.
You don't like the GNU General Public License, version 3 or the philosophy of Free Software/copyleft in general.
Reasons why you might want to use LibreNMS instead of Observium:
You want to work with others on the project, knowing that your investment of time and effort will not be wasted.
You want to add and experiment with features that are not a priority for the Observium developers. See CONTRIBUTING for more details.
You want to make use of the additional features LibreNMS can offer.
All images can be downloaded from GitHub. The tags follow the main LibreNMS repo. When a new LibreNMS release is available we will push new images out running that version. Please do note that if you download an older release with a view to running that specific version, you will need to disable updates lnms config:set update false.
If you are using the VirtualBox image, then to access your newly imported VM, these ports are forwarded from your machine to the VM: 8080 for the WebUI and 2023 for SSH. Remember to edit or remove them if you change the VM network configuration (and you should).
If you would like to help with these images, whether by adding additional features or changing the default software/settings, you can do so on GitHub.
"},{"location":"Installation/Install-LibreNMS/","title":"Install LibreNMS","text":""},{"location":"Installation/Install-LibreNMS/#prepare-linux-server","title":"Prepare Linux Server","text":"
You should have an installed Linux server running one of the supported OS. Make sure you select your server's OS in the tabbed options below. Choice of web server is your preference, NGINX is recommended.
Connect to the server command line and follow the instructions below.
Note
These instructions assume you are the root user. If you are not, prepend sudo to the shell commands (the ones that aren't at mysql> prompts) or temporarily become a user with root privileges with sudo -s or sudo -i.
Please note the minimum supported PHP version is 8.1
su - librenms\n./scripts/composer_wrapper.php install --no-dev\nexit\n
Sometimes when there is a proxy used to gain internet access, the above script may fail. The workaround is to install the composer package manually. For a global installation:
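One common approach (the download URL and target path are illustrative; verify against getcomposer.org) is to fetch the composer phar directly and place it in your PATH:

```shell
# fetch the composer phar and install it globally (illustrative)
wget https://getcomposer.org/composer-stable.phar -O /usr/bin/composer
chmod +x /usr/bin/composer
```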
See https://php.net/manual/en/timezones.php for a list of supported timezones. Valid examples are: \"America/New_York\", \"Australia/Brisbane\", \"Etc/UTC\". Ensure date.timezone is set in php.ini to your preferred time zone.
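For example, in php.ini (the exact path varies by distro and PHP version, e.g. /etc/php/8.2/fpm/php.ini or /etc/php.ini):

```ini
; set your preferred timezone; Etc/UTC is just an example
date.timezone = Etc/UTC
```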
NOTE: Change the 'password' below to something secure.
CREATE DATABASE librenms CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;\nCREATE USER 'librenms'@'localhost' IDENTIFIED BY 'password';\nGRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'localhost';\nexit\n
Change listen to a unique path that must match your web server's config (fastcgi_pass for NGINX, SetHandler for Apache):
listen = /run/php-fpm-librenms.sock\n
If there are no other PHP web applications on this server, you may remove www.conf to save some resources. Feel free to tune the performance settings in librenms.conf to meet your needs.
"},{"location":"Installation/Install-LibreNMS/#configure-web-server","title":"Configure Web Server","text":"Ubuntu 24.04Ubuntu 22.04Ubuntu 20.04CentOS 8Debian 12 NGINX
vi /etc/nginx/conf.d/librenms.conf\n
Add the following config, edit server_name as required:
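The full config block is not reproduced in this extract; a minimal sketch in line with the official install docs (assuming LibreNMS lives in /opt/librenms and the php-fpm socket configured earlier) looks roughly like this — verify against the current docs for your OS:

```nginx
server {
    listen      80;
    server_name librenms.example.com;
    root        /opt/librenms/html;
    index       index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ [^/]\.php(/|$) {
        fastcgi_pass unix:/run/php-fpm-librenms.sock;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi.conf;
    }
    location ~ /\.(?!well-known).* {
        deny all;
    }
}
```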
NOTE: If this is the only site you are hosting on this server (it should be :)) then you will need to disable the default site. rm -f /etc/httpd/conf.d/welcome.conf
semanage fcontext -a -t httpd_sys_content_t '/opt/librenms/html(/.*)?'\nsemanage fcontext -a -t httpd_sys_rw_content_t '/opt/librenms/(rrd|storage)(/.*)?'\nsemanage fcontext -a -t httpd_log_t \"/opt/librenms/logs(/.*)?\"\nsemanage fcontext -a -t httpd_cache_t '/opt/librenms/cache(/.*)?'\nsemanage fcontext -a -t bin_t '/opt/librenms/librenms-service.py'\nrestorecon -RFvv /opt/librenms\nsetsebool -P httpd_can_sendmail=1\nsetsebool -P httpd_execmem 1\nchcon -t httpd_sys_rw_content_t /opt/librenms/.env\n
Allow fping
Create the file http_fping.tt with the following contents. You can create this file anywhere, as it is a throw-away file. The last step in this install procedure will install the module in the proper location.
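The file contents are not reproduced in this extract; the SELinux module commonly documented for this purpose looks like the sketch below (verify against the current install docs before compiling and installing it with checkmodule/semodule_package/semodule):

```
module http_fping 1.0;

require {
    type httpd_t;
    class capability net_raw;
    class rawip_socket { getopt create setopt write read };
}

#============= httpd_t ==============
allow httpd_t self:capability net_raw;
allow httpd_t self:rawip_socket { getopt create setopt write read };
```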
NOTE: Keep in mind that cron, by default, only uses a very limited set of environment variables, so you may need to configure proxy variables for the cron invocation. Alternatively, the proxy settings can be added in config.php, which will be created in the upcoming steps. Review the following URL after you have finished the LibreNMS install steps: https://docs.librenms.org//Support/Configuration/#proxy-support
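As a sketch, proxy variables can be set at the top of the cron file so they apply to the cron jobs below them (proxy host and port are examples):

```shell
# at the top of /etc/cron.d/librenms (illustrative)
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
```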
"},{"location":"Installation/Install-LibreNMS/#enable-the-scheduler","title":"Enable the scheduler","text":"
LibreNMS keeps logs in /opt/librenms/logs. Over time these can become large and be rotated out. To rotate out the old logs you can use the provided logrotate config file:
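A sketch of installing the shipped logrotate config (the misc/ path assumes a default /opt/librenms install):

```shell
cp /opt/librenms/misc/librenms.logrotate /etc/logrotate.d/librenms
```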
Now head to the web installer and follow the on-screen instructions.
http://librenms.example.com/install
The web installer might prompt you to create a config.php file in your LibreNMS install location manually, copying the content displayed on-screen into the file. If you have to do this, please remember to set the permissions on config.php after copying the on-screen contents into it. Run:
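Assuming the default install path and the librenms user, the permissions fix looks like this (the chmod value is a common choice, not mandated by the docs extract):

```shell
# give the librenms user ownership of the manually created config
chown librenms:librenms /opt/librenms/config.php
chmod 664 /opt/librenms/config.php
```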
That's it! You should now be able to log in to http://librenms.example.com/. Please note that we have not covered HTTPS setup in this example, so your LibreNMS install is not secure by default. Please do not expose it to the public Internet unless you have configured HTTPS and taken appropriate web server hardening steps.
"},{"location":"Installation/Install-LibreNMS/#add-the-first-device","title":"Add the first device","text":"
We now suggest that you add localhost as your first device from within the WebUI.
We hope you enjoy using LibreNMS. If you do, it would be great if you would consider opting into the stats system we have, please see this page on what it is and how to enable it.
If you would like to help make LibreNMS better there are many ways to help. You can also back LibreNMS on Open Collective.
"},{"location":"Installation/Migrating-from-Observium/","title":"Migrating from Observium","text":"
A LibreNMS user, Dan, has kindly provided full details and scripts to be able to migrate from Observium to LibreNMS.
We have mirrored the scripts he's provided, with consent; these are available in the scripts/Migration folder of your installation.
There are two versions of the scripts available for you to download:
- One converts the RRDs to XML and then back to RRD files when they hit the destination. This is a requirement if you are moving from x86 to x64.
- Assuming you're moving between servers on the same architecture, we can skip that step and just SCP the original RRD files.
For everything to work as originally intended, you'll need four files. Put all four files on both servers; the scripts default to /tmp/:
nodelist.txt - this file contains the list of hosts you would like to move. These must match exactly the hostnames Observium uses
mkdir.sh - this script creates the necessary directories on your LibreNMS server
destwork.sh - depending on the version you choose, this script will add the device to LibreNMS and possibly convert from XML to RRD
convert.sh - the main script we'll be calling. All of the magic happens here.
Feel free to crack open the scripts and modify them to suit you. Each file has a handful of variables you'll need to set for your conversion. They should be self-explanatory, but please leave a comment if you have trouble.
All four files have been placed in the tmp directory of both servers.
I would strongly suggest you start with just one or two hosts and see how things work. For me, 10 standard sized devices took about 20 minutes with the RRD to XML conversion. Every environment will be different, so start slow and work your way up to full automation.
First thing we will want to do is exchange SSH keys so that we can automate the login process used by the scripts. Perform these steps on your Observium server:
ssh-keygen -t rsa
Accept the defaults and enter a passphrase if you wish. Then:
ssh-copy-id librenms
Where librenms is the hostname or IP of your destination server.
The nodelist.txt file contains a list of hosts we want to migrate from Observium. These names must match the names of the RRD folders on Observium. You can get those names by running the following:
ls /opt/observium/rrd/
Also important: the nodelist.txt file must be on both your Observium and LibreNMS servers. Once you have your list, edit nodelist.txt with nano:
nano /tmp/nodelist.txt
And replace the dummy data with the hosts you are converting. CTRL+X and then Y to save your modifications. Make the same changes on the LibreNMS server.
Now that we have nodelist.txt set up correctly, it is time to set the variables in all three shell scripts. We are going to start with convert.sh. Edit it with nano:
nano /tmp/convert.sh
and change the variables to suit your environment. Here is a quick list of them:
DEST - This should be the IP or hostname of your LibreNMS server
L_RRDPATH - This signifies the location of the LibreNMS RRD directory. The default value is the default install location
O_RRDPATH - Location of the Observium RRD directory. The default value is the default install location
MKDIR - Location of the mkdir.sh script
DESTSCRIPT - Location of the destwork.sh script
NODELIST - Location of the nodelist.txt file
Next, edit the destwork.sh script:
nano /tmp/destwork.sh
"},{"location":"Support/","title":"How to get Help","text":"
We now have support for polling data at intervals to fit your needs.
Please be aware of the following:
If you just want faster up/down alerts, Fast Ping is a much easier path to that goal.
You must also change your cron entry for poller-wrapper.py for this to work (if you change from the default 300 seconds).
Your polling MUST complete in the time you configure for the heartbeat step value. See /poller in your WebUI for your current value.
This will only affect RRD files created from the moment you change your settings.
This change will affect all data storage mechanisms such as MySQL, RRD and InfluxDB. If you decrease the values then please be aware of the increase in space use for MySQL and InfluxDB.
It's highly recommended to configure some performance optimizations. Keep in mind that all your devices will write all their graphs to disk every minute, and every device has many graphs. The most important optimization is probably the RRDCached configuration, which can save a lot of write IOPS.
To make the changes, please navigate to /settings/poller/rrdtool/ within your WebUI. Select RRDTool Setup and then update the two values for step and heartbeat intervals:
Step is how often you want to insert data; if you change to 1 minute polling then this should be 60.
Heartbeat is how long to wait for data before registering a null value, e.g. 120 seconds.
We provide a basic script to convert the default rrd files we generate to utilise your configured step and heartbeat values. Please do ensure that you backup your RRD files before running this just in case. The script runs on a per device basis or all devices at once.
The rrd files must be accessible from the server you run this script from.
./scripts/rrdstep.php
This will provide the help information. To run it for localhost just run:
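The exact invocation was not captured in this extract; assuming a -h host selector as shown in the script's help output, running it for localhost would look like:

```shell
./scripts/rrdstep.php -h localhost   # the -h flag is an assumption; check the help output
```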
Using the web interface, go to Devices and click Add Device. Enter the details required for the device that you want to add and then click 'Add Host'. For example, if your device is configured to use the community my_company with SNMP v2c, you would enter that community and select v2c. SNMP Port defaults to 161.
By default, Hostname will be used for polling data. If you want to poll device data via a specific IP address (e.g. a management IP), fill out the optional Overwrite IP field with its IP address.
Using the command line via SSH, you can add a new device by changing to the directory of your LibreNMS install and typing the following (be sure to put in the correct details).
Please note that if the community contains special characters such as $ then you will need to wrap it in '. I.e: 'Pa$$w0rd'.
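For example, adding a host over SNMP v2c from the CLI might look like this (hostname and community are placeholders; check lnms device:add --help for the full flag list):

```shell
lnms device:add --v2c -c 'my_company' device.example.com
```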
"},{"location":"Support/Adding-a-Device/#ping-only-device","title":"Ping Only Device","text":"
You can add ping only devices into LibreNMS through the WebUI or CLI. When adding the device switch the SNMP button to \"off\". Device will be added into LibreNMS as Ping Only Device and will show ICMP Response Graph.
Hostname: IP address or DNS name.
Hardware: Optional you can type in whatever you like.
OS: Optional this will add the Device's OS Icon.
Via CLI this is done with ./lnms device:add [-P|--ping-only] yourhostname
A How-to video can be found here: How to add ping only devices
"},{"location":"Support/Adding-a-Device/#automatic-discovery-and-api","title":"Automatic Discovery and API","text":"
If you would like to add devices automatically then you will probably want to read the Auto-discovery Setup guide.
You may also want to add devices programmatically, if so, take a look at our API documentation
This script provides CLI access to the \"delete port\" function of the WebUI. This might come in handy when trying to clean up old ports after large changes within the network or when hacking on the poller/discovery functions.
LibreNMS Port purge tool\n-p port_id Purge single port by its port-id\n-f file Purge a list of ports, read port-ids from _file_, one on each line\n A filename of - means reading from STDIN.\n
"},{"location":"Support/CLI-Tools/#querying-port-ids-from-the-database","title":"Querying port IDs from the database","text":"
One simple way to obtain port IDs is by querying the SQL database.
If you wanted to query all deleted ports from the database, you could do this with the following query:
echo 'SELECT port_id, hostname, ifDescr FROM ports, devices WHERE devices.device_id = ports.device_id AND deleted = 1' | mysql -h your_DB_server -u your_DB_user -p --skip-column-names your_DB_name\n
When you are sure that the list of ports is correct and you want to delete all of them, you can write the list into a file and call purge-ports.php with that file as input:
echo 'SELECT port_id FROM ports, devices WHERE devices.device_id = ports.device_id AND deleted = 1' | mysql -h your_DB_server -u your_DB_user -p --skip-column-names your_DB_name > ports_to_delete\n./purge-ports.php -f ports_to_delete\n
As the number of devices in your LibreNMS install grows, so will things such as the RRD files and the MySQL database containing event logs, syslogs, performance data, etc. Your LibreNMS install could become quite large, so it becomes necessary to clean up old entries. With the cleanup options, you stay in control.
These options rely on daily.sh running from cron as per the installation instructions.
These options will ensure data within LibreNMS over X days old is automatically purged. You can alter these individually, values are in days.
NOTE: Please be aware that rrd_purge is NOT set by default. This option will remove any RRD files that have not been updated for the set amount of days automatically - only enable this if you are comfortable with that happening. (All active RRD files are updated every polling period.)
The config is stored in two places:
Database: This applies to all pollers and can be set with either lnms config:set or in the Web UI. Database config takes precedence over config.php.
config.php: This applies to the local poller only. Configs set here will be disabled in the Web UI to prevent unexpected behaviour.
The documentation has not been updated to reflect using lnms config:set to set config items, but it will work for all settings. Not all settings have been defined in LibreNMS, but they can still be set with the --ignore-checks option. Without that option, input is checked for correctness; note that this check does not make it impossible to set bad values. Please report missing settings.
lnms config:get will fetch the current config settings (composite of database, config.php, and defaults). lnms config:set will set the config setting in the database. Calling lnms config:set on a setting with no value will reset it to the default value.
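For example, using the login_message setting mentioned later in this document:

```shell
lnms config:get login_message                           # read the effective value
lnms config:set login_message "Authorised users only"   # store in the database
lnms config:set login_message                           # no value: reset to the default
```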
If you set up bash completion, you can use tab completion to find config settings.
"},{"location":"Support/Configuration/#getting-a-list-of-all-current-values","title":"Getting a list of all current values","text":"
To get a complete list of all the current values, you can use the command lnms config:get --dump. The output may not be desirable, so you can use the jq package to pretty print it. Then it would be lnms config:get --dump | jq.
This feature is primarily for docker images and other automation. When installing LibreNMS for the first time with a new database you can place yaml key value files in database/seeders/config to pre-populate the config database.
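As a hypothetical illustration (the file name is an assumption; the keys follow the usual lnms config setting names):

```yaml
# database/seeders/config/example.yaml (hypothetical file name)
update_channel: release
login_message: Authorised users only.
```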
A lot of these are self-explanatory, so no further information is provided. Any extension that has a dedicated documentation page will be linked to rather than having the config provided here.
timeout (fping parameter -t): Amount of time that fping waits for a response to its first request (in milliseconds). See note below
count (fping parameter -c): Number of request packets to send to each target.
interval (fping parameter -p): Time in milliseconds that fping waits between successive packets to an individual target.
tos (fping parameter -O): Set the type of service flag (TOS). The value can be in either decimal or hexadecimal (0xh) format. Can be used to ensure that ping packets are queued according to QoS mechanisms in the network. The table is accessible on the TOS Wikipedia page.
NOTE: Setting a higher timeout value than the interval value can lead to slowing down poller. Example:
timeout: 3000
count: 3
interval: 500
In this example, the interval will be overwritten by the timeout value of 3000 ms (3 seconds). As we send three ICMP packets (count: 3), each one is delayed by 3 seconds, which results in fping taking more than 6 seconds to return results.
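To avoid that pitfall, keep timeout at or below interval. For example (the fping_options key names are an assumption; check your settings tree):

```shell
lnms config:set fping_options.timeout 500    # wait at most 500 ms for the first reply
lnms config:set fping_options.count 3
lnms config:set fping_options.interval 500   # >= timeout, so later packets aren't delayed
```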
You can disable the fping/ICMP check that determines whether a device is up, either globally or per device. We don't advise disabling it unless you know the impact: at worst, if you have a large number of devices down, the poller may no longer complete within 5 minutes due to waiting for SNMP to time out.
Globally disable fping / icmp check:
lnms config:set icmp_check false\n
If you would like to do this on a per device basis then you can do so under Device -> Edit -> Misc -> Disable ICMP Test? On
You can override a large number of visual elements by creating your own CSS stylesheet and referencing it here; place any custom CSS files into html/css/custom so they will be ignored by auto updates. You can specify as many CSS files as you like; the order they appear in your config is the order they are loaded in the browser.
You can override the default logo with yours; place any custom image files into html/images/custom so they will be ignored by auto updates.
lnms config:set page_refresh 300\n
Set how often pages are refreshed in seconds. The default is every 5 minutes. Some pages don't refresh at all by design.
lnms config:set front_page default\n
You can create your own front page by adding a blade file in resources/views/overview/custom/ and setting front_page to its name. For example, if you create resources/views/overview/custom/foobar.blade.php, set front_page to foobar.
webui/dashboard
lnms config:set webui.default_dashboard_id 0\n
Allows the specification of a global default dashboard page for any user who has not set one in their user preferences. Should be set to the dashboard_id of an existing dashboard that is shared or shared(read). Otherwise, the system will automatically create an empty dashboard called Default for each user on their first login.
lnms config:set login_message \"Unauthorised access or use shall render the user liable to criminal and/or civil prosecution.\"\n
This is the default message on the login page displayed to users.
lnms config:set public_status true\n
If this is set to true, then an overview of devices and their status will be shown on the login page.
lnms config:set show_locations true # Enable Locations on menu\nlnms config:set show_locations_dropdown true # Enable Locations dropdown on menu\nlnms config:set show_services false # Disable Services on menu\nlnms config:set int_customers true # Enable Customer Port Parsing\nlnms config:set summary_errors false # Show Errored ports in summary boxes on the dashboard\nlnms config:set customers_descr '[\"cust\"]' # The description to look for in ifDescr. Can have multiple '[\"cust\",\"cid\"]'\nlnms config:set transit_descr '[\"transit\"]' # Add custom transit descriptions (array)\nlnms config:set peering_descr '[\"peering\"]' # Add custom peering descriptions (array)\nlnms config:set core_descr '[\"core\"]' # Add custom core descriptions (array)\nlnms config:set custom_descr '[\"This is Custom\"]' # Add custom interface descriptions (array)\nlnms config:set int_transit true # Enable Transit Types\nlnms config:set int_peering true # Enable Peering Types\nlnms config:set int_core true # Enable Core Port Types\nlnms config:set int_l2tp false # Disable L2TP Port Types\n
Enable / disable certain menus from being shown in the WebUI.
You are able to adjust the number and time frames of the quick select time options for graphs and the mini graphs shown per row.
This is a simple template to control the display of device names by default. You can override this setting per-device.
You may enter any free-form text including one or more of the following template replacements:
Template Replacement:
{{ $hostname }} - The hostname or IP of the device that was set when added (default)
{{ $sysName_fallback }} - The hostname, or the sysName if the hostname is an IP
{{ $sysName }} - The SNMP sysName of the device; falls back to hostname/IP if missing
{{ $ip }} - The actual polled IP of the device; will not display a hostname
For example, {{ $sysName_fallback }} ({{ $ip }}) will display something like server (192.168.1.1)
Interface types that aren't graphed in the WebUI. The default array contains more items, please see misc/config_definitions.json for the full list.
lnms config:set enable_clear_discovery true\n
Administrators are able to clear the last discovered time of a device which will force a full discovery run within the configured 5 minute cron window.
lnms config:set enable_footer true\n
Disable the footer of the WebUI by setting enable_footer to 0.
You can enable the old style network map (only available for individual devices with links discovered via xDP) by setting:
lnms config:set gui.network-map.style old\n
lnms config:set percentile_value 90\n
Show the Xth percentile in the graph instead of the default 95th percentile.
webui/graph
lnms config:set shorthost_target_length 15\n
The target maximum hostname length when applying the shorthost() function. You can increase this if you want to try to fit more of the hostname in graph titles. The default value is 12. However, a very long value can break graph generation.
You can enable dynamic graphs within the WebUI under Global Settings -> Webui Settings -> Graph Settings.
Graphs will be movable/scalable without reloading the page:
You can enable stacked graphs instead of the default inverted graphs. Enabling them is possible via webui Global Settings -> Webui Settings -> Graph settings -> Use stacked graphs
The following setting controls how hosts are added. If a host is added by IP address, it is checked to ensure the IP is not already present; if it is, the host is not added. If a host is added by hostname, this check is not performed. If the setting is true, hostnames are resolved and the check is performed as well. This helps prevent accidental duplicate hosts.
lnms config:set addhost_alwayscheckip false # true - check for duplicate ips even when adding host by name.\n # false- only check when adding host by ip.\n
By default we allow hosts to be added with duplicate sysName's, you can disable this with the following config:
discovery/general
lnms config:set allow_duplicate_sysName false\n
"},{"location":"Support/Configuration/#global-poller-and-discovery-modules","title":"Global poller and discovery modules","text":"
Enable or disable discovery or poller modules.
This setting has an order of precedence Device > OS > Global. So if the module is set at a more specific level, it will override the less specific settings.
What type of mail transport to use for delivering emails. Valid options for email_backend are mail, sendmail or smtp. The varying options after that are to support the different transports.
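As a sketch for the SMTP transport (host and port are examples; check the alerting docs for the full list of mail settings):

```shell
lnms config:set email_backend smtp
lnms config:set email_smtp_host smtp.example.com
lnms config:set email_smtp_port 587
```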
Rancid configuration, rancid_configs is an array containing all of the locations of your rancid files. Setting rancid_ignorecomments will disable showing lines that start with #
Specify the location of the collectd rrd files. Note that the location in config.php should be consistent with the location set in /etc/collectd.conf and etc/collectd.d/rrdtool.conf
Specify the location of the collectd unix socket. Using a socket allows the collectd graphs to be flushed to disk before being drawn. Be sure that your web server has permissions to write to this socket.
Next it will attempt to look up the sysLocation with a map engine, provided you have configured one under $config['geoloc']['engine']. The information has to be accurate or no result is returned. When performing the lookup it will ignore any information inside parentheses, allowing you to add details that would otherwise interfere with the lookup.
Example:
1100 Congress Ave, Austin, TX 78701 (3rd floor)\nGeocoding lookup is:\n1100 Congress Ave, Austin, TX 78701\n
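The stripping behaviour can be imitated in shell to sanity-check what will actually be geocoded (the sed expression is an illustration, not LibreNMS's own code):

```shell
# remove any parenthesised suffix, as the geocoder lookup does
echo '1100 Congress Ave, Austin, TX 78701 (3rd floor)' | sed 's/ *([^)]*)//'
# prints: 1100 Congress Ave, Austin, TX 78701
```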
If you just want to set GPS coordinates on a location, you should visit Devices > Geo Locations > All Locations and edit the coordinates there.
Exact Matching:
lnms config:set location_map '{\"Under the Sink\": \"Under The Sink, The Office, London, UK\"}'\n
Regex Matching:
lnms config:set location_map_regex '{\"/Sink/\": \"Under The Sink, The Office, London, UK\"}'\n
Regex Match Substitution:
lnms config:set location_map_regex_sub '{\"/Sink/\": \"Under The Sink, The Office, London, UK [lat, long]\"}'\n
If you have an SNMP sysLocation of \"Rack10,Rm-314,Sink\", Regex Match Substitution yields \"Rack10,Rm-314,Under The Sink, The Office, London, UK [lat, long]\". This allows you to keep the sysLocation string short and keeps Rack/Room/Building information intact after the substitution.
The above are examples, these will rewrite device snmp locations so you don't need to configure full location within snmp.
"},{"location":"Support/Configuration/#interfaces-to-be-ignored","title":"Interfaces to be ignored","text":"
Interfaces can be automatically ignored during discovery by modifying bad_if* entries in a default array, unsetting a default array and customizing it, or creating an OS specific array. The preferred method for ignoring interfaces is to use an OS specific array. The default arrays can be found in misc/config_definitions.json. OS specific definitions (includes/definitions/_specific_os_.yaml) can contain bad_if* arrays, but should only be modified via pull-request as manipulation of the definition files will block updating:
good_if is matched against the ifDescr value. A bad_if value can also appear in good_if, which stops that port from being ignored; i.e. if bad_if and good_if both contain FastEthernet, then ports with this value in ifDescr will be valid.
"},{"location":"Support/Configuration/#interfaces-to-be-rewritten","title":"Interfaces to be rewritten","text":"
Entries defined in rewrite_if are being replaced completely. Entries defined in rewrite_if_regexp only replace the match. Matches are compared case-insensitive.
"},{"location":"Support/Configuration/#entity-sensors-to-be-ignored","title":"Entity sensors to be ignored","text":"
Some devices register bogus sensors as they are returned via SNMP but either don't exist or just don't return data. This allows you to ignore those based on the descr field in the database. You can either ignore globally or on a per os basis.
lnms config:set bad_entity_sensor_regex.+ '/Physical id [0-9]+/'\nlnms config:set os.ios.bad_entity_sensor_regex '[\"/Physical id [0-9]+/\"]'\n
Vendors may give some limit values (or thresholds) for the discovered sensors. By default, when no such value is given, both high and low limit values are guessed, based on the value measured during the initial discovery.
When it is preferred to have no high and/or low limit values at all if these are not provided by the vendor, the guess method can be disabled:
lnms config:set sensors.guess_limits false\n
"},{"location":"Support/Configuration/#ignoring-health-sensors","title":"Ignoring Health Sensors","text":"
It is possible to filter some sensors from the configuration:
Enable this to switch on support for libvirt along with libvirt_protocols to indicate how you connect to libvirt. You also need to:
Generate a non-password-protected ssh key for use by LibreNMS, as the user which runs polling & discovery (usually librenms).
On each VM host you wish to monitor:
Configure public key authentication from your LibreNMS server/poller by adding the librenms public key to ~root/.ssh/authorized_keys.
(xen+ssh only) Enable libvirtd to gather data from xend by setting (xend-unix-server yes) in /etc/xen/xend-config.sxp and restarting xend and libvirtd.
To test your setup, run virsh -c qemu+ssh://vmhost/system list or virsh -c xen+ssh://vmhost list as your librenms polling user.
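The key setup steps above can be sketched as follows (the home directory path and the host name vmhost are assumptions; adjust for your environment):

```shell
# As the polling user (usually librenms), create a passwordless key
sudo -u librenms ssh-keygen -t ed25519 -N '' -f /home/librenms/.ssh/id_ed25519

# Authorize that key for root on each VM host you wish to monitor
sudo -u librenms ssh-copy-id root@vmhost

# Verify: this should list the guests without prompting for a password
sudo -u librenms virsh -c qemu+ssh://vmhost/system list
```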
LibreNMS has a standard for device sensors; they are split into categories. This doc helps users understand device sensors in general. If you need help with developing sensors for a device, please see the Contributing + Developing section.
The High and Low values of these sensors can be edited in the Web UI by going to device settings -> Health, where you can set your own custom High and Low values. A list of these sensors can be found here Link
Note Some values are defined by the manufacturers and others are auto-calculated when you add the device into LibreNMS. Keep in mind every environment is different and may require user input.
The High and Low values of some wireless sensors can be edited in the Web UI by going to device settings -> Wireless Sensors, where you can set your own custom High and Low values. A list of these sensors can be found here Link
Note Some values are defined by the manufacturers and others are auto-calculated when you add the device into LibreNMS. Keep in mind every environment is different and may require user input.
These alert rules can be found inside the Alert Rules Collection. The alert rules below are the default alert rules, there are more device-specific alert rules in the alerts collection.
Sensor Over Limit Alert Rule: Will alert on any sensor value that is over the limit.
Sensor Under Limit Alert Rule: Will alert on any sensor value that is under the limit.
Remember you can set these limits inside device settings in the Web UI.
State Sensor Critical: Will alert on any state that returns critical = 2
State Sensor Warning: Will alert on any state that returns warning = 1
Wireless Sensor Over Limit Alert Rule: Will alert on sensors that are listed in device settings under Wireless.
Wireless Sensor Under Limit Alert Rule: Will alert on sensors that are listed in device settings under Wireless.
You can use this feature to run Debug on Discovery, Poller, SNMP, Alerts. This output information could be helpful for you in troubleshooting a device or when requesting help.
This feature can be found by going to the device that you are troubleshooting in the webui, clicking on the settings icon menu on far right and selecting Capture.
-h <device id> | <device hostname wildcard> Poll single device\n-h odd Poll odd numbered devices (same as -i 2 -n 0)\n-h even Poll even numbered devices (same as -i 2 -n 1)\n-h all Poll all devices\n-h new Poll all devices that have not had a discovery run before\n--os <os_name> Poll devices only with specified operating system\n--type <type> Poll devices only with specified type\n-i <instances> -n <number> Poll as instance <number> of <instances>\n Instances start at 0. 0-3 for -n 4\n\nDebugging and testing options:\n-d Enable debugging output\n-v Enable verbose debugging output\n-m Specify module(s) to be run. Comma separate modules, submodules may be added with /\n
-h Use this to specify a device via either id or hostname (including wildcard using *). You can also specify odd and even. all will run discovery against all devices, whilst new will run it only against those devices that have recently been added or have been selected for rediscovery.
-i This can be used to stagger the discovery process.
-d Enables debugging output (verbose output but with most sensitive data masked) so that you can see what is happening during a discovery run. This includes things like rrd updates, SQL queries and response from snmp.
-v Enables verbose debugging output with all data intact.
-m This enables you to specify the module you want to run for discovery.
We have a discovery-wrapper.py script which is based on poller-wrapper.py by Job Snijders. This script is currently the default.
If you need to debug the output of discovery-wrapper.py then you can add -d to the end of the command - it is NOT recommended to do this in cron.
You also may use -m to pass a list of comma-separated modules. Please refer to Command options of discovery.php. Example: /opt/librenms/discovery-wrapper.py 1 -m bgp-peers
If you want to switch back to discovery.php then you can replace:
These are the default discovery config items. You can globally disable a module by setting it to 0. If you just want to disable it for one device then you can do this within the WebUI -> Device -> Settings -> Modules.
"},{"location":"Support/Discovery%20Support/#os-based-discovery-config","title":"OS based Discovery config","text":"
You can enable or disable modules for a specific OS by using lnms config:set. OS-based settings take precedence over global ones; device-based settings take precedence over all others.
Discovery performance can be improved by deactivating all modules that are not supported by the specific OS.
E.g. to deactivate spanning tree but activate discovery-arp module for linux OS
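A minimal sketch of such per-OS settings, assuming the discovery_modules keys shown here exist for the linux OS definition:

```shell
# Deactivate spanning tree discovery for the linux OS
lnms config:set os.linux.discovery_modules.stp false

# Activate ARP-based auto discovery for the linux OS
lnms config:set os.linux.discovery_modules.discovery-arp true
```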
os: OS detection. This module will pick up the OS of the device.
ports: This module will detect all ports on a device excluding ones configured to be ignored by config options.
ports-stack: Same as ports except for stacks.
xdsl: Module to collect more metrics for xDSL interfaces.
entity-physical: Module to pick up the device's hardware inventory.
processors: Processor support for devices.
mempools: Memory detection support for devices.
cisco-vrf-lite: VRF-Lite detection and support.
ipv4-addresses: IPv4 Address detection
ipv6-addresses: IPv6 Address detection
route: This module will load the routing table of the device. The default route limit is 1000 (configurable with lnms config:set routes.max_number 1000), with history data.
sensors: Sensor detection such as Temperature, Humidity, Voltages + More
storage: Storage detection for hard disks
hr-device: Processor and Memory support via HOST-RESOURCES-MIB.
discovery-protocols: Auto discovery module for xDP, OSPF and BGP.
arp-table: Detection of the ARP table for the device.
fdb-table: Detection of the Forwarding DataBase table for the device, with history data.
discovery-arp: Auto discovery via ARP.
junose-atm-vp: Juniper ATM support.
bgp-peers: BGP detection and support.
vlans: VLAN detection and support.
cisco-mac-accounting: MAC Address account support.
cisco-pw: Pseudowire detection and support.
vrf: VRF detection and support.
cisco-cef: CEF detection and support.
slas: SLA detection and support.
vminfo: Detection of VM guests for VMware ESXi and libvirt
To provide debugging output you will need to run the discovery process with the -d flag. You can do this either against all modules, single or multiple modules:
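For example, a debug run against all modules, a single module, or multiple modules might look like this (HOSTNAME is a placeholder):

```shell
# All discovery modules for one device, with debug output
./discovery.php -h HOSTNAME -d

# Only the ports module
./discovery.php -h HOSTNAME -d -m ports

# Multiple comma-separated modules
./discovery.php -h HOSTNAME -d -m ports,bgp-peers
```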
Using -d shouldn't output much sensitive information; -v will, so it is advisable to sanitise the output before pasting it anywhere, as the debug output will contain SNMP details amongst other items, including port descriptions.
The information in this document comes directly from users; it's a place for people to share their setups so you have an idea of what may be required for your install.
To obtain the device, port and sensor counts you can run:
select count(*) from devices;\nselect count(*) from ports where `deleted` = 0;\nselect count(*) from sensors where `sensor_deleted` = 0;\n
LibreNMS MySQL Type Virtual Virtual OS CentOS 7 CentOS 7 CPU 2 Sockets, 4 Cores 1 Socket, 2 Cores Memory 2GB 2GB Disk Type Raid 1, SSD Raid 1, SSD Disk Space 18GB 30GB Devices 20 - Ports 133 - Health sensors 47 - Load < 0.1 < 0.1"},{"location":"Support/Example-Hardware-Setup/#vente-privee","title":"Vente-Priv\u00e9e","text":"
NOC
LibreNMS MariaDB Type Dell R430 Dell R430 OS Debian 7 (dotdeb) Debian 7 (dotdeb) CPU 2 Sockets, 14 Cores 1 Socket, 2 Cores Memory 256GB 256GB Disk Type Raid 10, SSD Raid 10, SSD Disk Space 1TB 1TB Devices 1028 - Ports 26745 - Health sensors 6238 - Load < 0.5 < 0.5"},{"location":"Support/Example-Hardware-Setup/#kkrumm","title":"KKrumm","text":"
Home
LibreNMS MySQL Type VM Same Server OS CentOS 7 CPU 2 Sockets, 4 Cores Memory 4GB Disk Type Raid 10, SAS Drives Disk Space 40 GB Devices 12 Ports 130 Health sensors 44 Load < 2.5"},{"location":"Support/Example-Hardware-Setup/#kkrumm_1","title":"KKrumm","text":"
Work
LibreNMS MySQL Type HP Proliantdl380gen8 Same Server OS CentOS 7 CPU 2 Sockets, 24 Cores Memory 32GB Disk Type Raid 10, SAS Drives Disk Space 250 GB Devices 390 Ports 16167 Health sensors 3223 Load < 14.5"},{"location":"Support/Example-Hardware-Setup/#cppmonkeykodapa85","title":"CppMonkey(KodApa85)","text":"
Home
LibreNMS MariaDB Type i5-4690K Same Workstation OS Ubuntu 18.04.2 CPU 4 Cores Memory 16GB Disk Type Hybrid SATA Disk Space 2 TB Devices 14 Ports 0 Health sensors 70 Load < 0.5"},{"location":"Support/Example-Hardware-Setup/#cppmonkeykodapa85_1","title":"CppMonkey(KodApa85)","text":"
Dev
Running in Ganeti
LibreNMS MariaDB Type VM Same VM OS CentOS 7.5 CPU 2 Cores Memory 4GB Disk Type M.2 Disk Space 40 GB Devices 38 Ports 1583 Health sensors 884 Load < 1.0"},{"location":"Support/Example-Hardware-Setup/#cppmonkeykodapa85_2","title":"CppMonkey(KodApa85)","text":"
Work NOC
Running in Ganeti Cluster with 2x Dell PER730xd - 64GB, Dual E5-2660 v3
LibreNMS MariaDB Type VM VM OS Debian Stretch Debian Stretch CPU 4 Cores 2 Cores Memory 8GB 4GB Disk Type Raid 6, SAS Drives Disk Space 100 GB 40GB Devices 179 Ports 14495 Health sensors 2329 Load < 2.5 < 1.5"},{"location":"Support/Example-Hardware-Setup/#lazydk","title":"LaZyDK","text":"
Home
LibreNMS MariaDB Type VM - QNAP TS-453 Pro Same Server OS Ubuntu 16.04 CPU 1 vCore Memory 2GB Disk Type Raid 1, SATA Drives Disk Space 10 GB Devices 26 Ports 228 Health sensors 117 Load < 0.92"},{"location":"Support/Example-Hardware-Setup/#sirmaple","title":"SirMaple","text":"
Home
LibreNMS MariaDB Type VM Same Server OS Debian 11 CPU 4 vCore Memory 4GB Disk Type Raid 1, SSD Disk Space 50 GB Devices 41 Ports 317 Health sensors 243 Load < 3.15"},{"location":"Support/Example-Hardware-Setup/#vvelox","title":"VVelox","text":"
Home / Dev
LibreNMS MariaDB Type Supermicro X7SPA-HF Same Server OS FreeBSD 12-STABLE CPU Intel Atom D525 Memory 4GB Disk Type Raid 1, SATA Disk Space 1TB Devices 17 Ports 174 Health sensors 76 Load < 3"},{"location":"Support/Example-Hardware-Setup/#sourcedoctor","title":"SourceDoctor","text":"
Home / Dev
Running in VMWare Workstation Pro
LibreNMS MariaDB Type VM Same Server OS Debian Buster CPU 2 vCore Memory 2GB Disk Type Raid 5, SSD Disk Space 20GB Devices 35 Ports 245 Health sensors 101 Load < 1"},{"location":"Support/Example-Hardware-Setup/#lazyb0nes","title":"lazyb0nes","text":"
Lab
LibreNMS MariaDB Type VM Same Server OS RHEL 7.7 CPU 32 cores Memory 64GB Disk Type Flash San Array Disk Space 400GB Devices 670 Ports 25678 Health sensors 2457 Load 10.92"},{"location":"Support/Example-Hardware-Setup/#dagb","title":"dagb","text":"
Work
Running in VMware.
LibreNMS MariaDB Type Virtual Same Server OS CentOS 7 CPU 12 Cores Xeon 6130 Memory 8GB Disk Type SAN (SSD) Disk Space 26GB/72GB/7GB (logs/RRDs/db) Devices 650 Ports 34300 Health sensors 10500 Load 5.5 (45%)"},{"location":"Support/FAQ/","title":"FAQ","text":""},{"location":"Support/FAQ/#getting-started","title":"Getting started","text":""},{"location":"Support/FAQ/#how-do-i-install-librenms","title":"How do I install LibreNMS?","text":"
This is currently well documented within the doc folder of the installation files.
Please see the following doc
"},{"location":"Support/FAQ/#how-do-i-add-a-device","title":"How do I add a device?","text":"
You have two options for adding a new device into LibreNMS.
1: Using the command line via ssh you can add a new device by changing to the directory of your LibreNMS install and typing:
lnms device:add [hostname or ip]\n
To see all options run: lnms device:add -h
Please note that if the community contains special characters such as $ then you will need to wrap it in single quotes, e.g. 'Pa$$w0rd'.
2: Using the web interface, go to Devices and then Add Device. Enter the details required for the device that you want to add and then click 'Add Host'.
"},{"location":"Support/FAQ/#how-do-i-get-help","title":"How do I get help?","text":"
Getting Help
"},{"location":"Support/FAQ/#what-are-the-supported-oses-for-installing-librenms-on","title":"What are the supported OSes for installing LibreNMS on?","text":"
Supported is quite a strong word :) The 'officially' supported distros are:
Ubuntu / Debian
Red Hat / CentOS
Gentoo
However, we will always aim to help wherever possible, so if you are running a distro that isn't one of the above, give it a try anyway; if you need help, jump on the discord server.
"},{"location":"Support/FAQ/#do-you-have-a-demo-available","title":"Do you have a demo available?","text":"
We do indeed, you can find access to the demo here
"},{"location":"Support/FAQ/#support","title":"Support","text":""},{"location":"Support/FAQ/#how-does-librenms-use-mibs","title":"How does LibreNMS use MIBs?","text":"
LibreNMS does not parse MIBs to discover sensors for devices. LibreNMS uses static discovery definitions written in YAML or PHP. Therefore, updating a MIB alone will not improve OS support, the definitions must be updated. LibreNMS only uses MIBs to make OIDs easier to read.
"},{"location":"Support/FAQ/#why-do-i-get-blank-pages-sometimes-in-the-webui","title":"Why do I get blank pages sometimes in the WebUI?","text":"
You can enable debug information by setting APP_DEBUG=true in your .env. (Do not leave this enabled, it could leak private data)
If the page you are trying to load has a substantial amount of data in it then it could be that the php memory limit needs to be increased in config.php.
"},{"location":"Support/FAQ/#why-do-i-not-see-any-graphs","title":"Why do I not see any graphs?","text":"
The easiest way to check if all is well is to run ./validate.php as librenms from within your install directory. This should give you info on why things aren't working.
One other reason could be a restricted snmpd.conf file or snmp view which limits the data sent back. If you use net-snmp then we suggest using the included snmpd.conf file.
"},{"location":"Support/FAQ/#how-do-i-debug-pages-not-loading-correctly","title":"How do I debug pages not loading correctly?","text":"
A debug system is in place which enables you to see the output from php errors, warnings and notices along with the MySQL queries that have been run for that page.
You can enable debug information by setting APP_DEBUG=true in your .env. (Do not leave this enabled, it could leak private data) To see additional information, run ./scripts/composer_wrapper.php install, to install additional debug tools. This will add a debug bar at the bottom of every page that will show you detailed debug information.
"},{"location":"Support/FAQ/#how-do-i-debug-the-discovery-process","title":"How do I debug the discovery process?","text":"
Please see the Discovery Support document for further details.
"},{"location":"Support/FAQ/#how-do-i-debug-the-poller-process","title":"How do I debug the poller process?","text":"
Please see the Poller Support document for further details.
"},{"location":"Support/FAQ/#why-do-i-get-a-lot-apache-or-rrdtool-zombies-in-my-process-list","title":"Why do I get a lot apache or rrdtool zombies in my process list?","text":"
If this is related to your web service for LibreNMS then this has been tracked down to an issue within php which the developers aren't fixing. We have implemented a work around which means you shouldn't be seeing this. If you are, please report this in issue 443.
"},{"location":"Support/FAQ/#why-do-i-see-traffic-spikes-in-my-graphs","title":"Why do I see traffic spikes in my graphs?","text":"
This occurs either when a counter resets or the device sends back bogus data making it look like a counter reset. We have enabled support for setting a maximum value for rrd files for ports.
Before this all rrd files were set to 100G max values, now you can enable support to limit this to the actual port speed.
rrdtool tune will change the max value when the interface speed is detected as being changed (min value will be set for anything 10M or over) or when you run the included script (./scripts/tune_port.php) - see RRDTune doc
SNMP ifInOctets and ifOutOctets are counters, which means they start at 0 (at device boot) and count up from there. LibreNMS records the value every 5 minutes and uses the difference between the previous value and the current value to calculate rate. (Also, this value resets to 0 when it hits the max value)
Now, when the value is not recorded for a while, RRD (our time series storage) does not record a 0, it records the last value; otherwise, there would be even worse problems. Then finally we get the current ifIn/OutOctets value and record that. Now, it appears as though all of the traffic since it stopped getting values has occurred in the last 5-minute interval.
So whenever you see spikes like this, it means we have not received data from the device for several polling intervals. The cause can vary quite a bit: bad snmp implementations, intermittent network connectivity, broken poller, and more.
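As an illustration of the rate calculation described above (the numbers are made up), a 32-bit octet counter that wraps between two polls is handled like this:

```shell
#!/bin/sh
# Illustrative values: a 32-bit counter read twice, 300 seconds apart
prev=4294967000
curr=500
interval=300

if [ "$curr" -ge "$prev" ]; then
    delta=$((curr - prev))
else
    # Counter wrapped at 2^32; add the missing span before the wrap
    delta=$((4294967296 - prev + curr))
fi

rate=$((delta / interval))
echo "delta=${delta} bytes, rate=${rate} B/s"   # prints: delta=796 bytes, rate=2 B/s
```

A missed poll makes `delta` cover several intervals' worth of traffic while `interval` stays at 300, which is exactly what produces the spike.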
"},{"location":"Support/FAQ/#why-do-i-see-gaps-in-my-graphs","title":"Why do I see gaps in my graphs?","text":"
This is most commonly due to the poller not being able to complete its run within 300 seconds. Check which devices are causing this by going to /poll-log/ within the Web interface.
When you find the device(s) which are taking the longest, look at the polling module graph under Graphs -> Poller -> Poller Modules Performance. Take a look at which modules are taking the longest and disable unused modules.
If you poll a large number of devices / ports then it's recommended to run a local recursive dns server such as pdns-recursor.
Running RRDCached is also highly advised in larger installs but has benefits no matter the size.
"},{"location":"Support/FAQ/#how-do-i-change-the-ip-hostname-of-a-device","title":"How do I change the IP / hostname of a device?","text":"
There is a host rename tool called renamehost.php in your librenms root directory. When renaming, you are also changing the IP / hostname the device is monitored at.
Usage:
./renamehost.php <old hostname> <new hostname>\n
You can also rename a device in the Web UI by going to the device, then clicking settings Icon -> Edit.
"},{"location":"Support/FAQ/#my-device-doesnt-finish-polling-within-300-seconds","title":"My device doesn't finish polling within 300 seconds","text":"
We have a few things you can try:
Disable unnecessary polling modules under edit device.
Set a max repeater value within the snmp settings for a device. What to set this to is tricky; you really should run an snmpbulkwalk with -Cr10 through -Cr50 to see what works best. 50 is usually a good choice if the device can cope.
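To compare repeater sizes, you can time a bulkwalk of a large table at each setting and pick the fastest one the device handles reliably (HOSTNAME and COMMUNITY are placeholders):

```shell
# Compare wall-clock time per max-repeaters value
time snmpbulkwalk -v2c -c COMMUNITY -Cr10 HOSTNAME ifTable >/dev/null
time snmpbulkwalk -v2c -c COMMUNITY -Cr25 HOSTNAME ifTable >/dev/null
time snmpbulkwalk -v2c -c COMMUNITY -Cr50 HOSTNAME ifTable >/dev/null
```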
"},{"location":"Support/FAQ/#things-arent-working-correctly","title":"Things aren't working correctly?","text":"
Run ./validate.php as librenms from within your install.
Re-run ./validate.php once you've resolved any issues raised.
You have an odd issue - we'd suggest you join our discord server to discuss.
"},{"location":"Support/FAQ/#what-do-the-values-mean-in-my-graphs","title":"What do the values mean in my graphs?","text":"
The values you see are reported as metric values. Thanks to a post on Reddit here are those values:
10^-18 a - atto\n10^-15 f - femto\n10^-12 p - pico\n10^-9 n - nano\n10^-6 u - micro\n10^-3 m - milli\n0 (no unit)\n10^3 k - kilo\n10^6 M - mega\n10^9 G - giga\n10^12 T - tera\n10^15 P - peta\n
"},{"location":"Support/FAQ/#why-does-a-device-show-as-a-warning","title":"Why does a device show as a warning?","text":"
This is indicating that the device has rebooted within the last 24 hours (by default). If you want to adjust this threshold then you can do so by setting $config['uptime_warning'] = '86400'; in config.php. The value must be in seconds.
"},{"location":"Support/FAQ/#why-do-i-not-see-all-interfaces-in-the-overall-traffic-graph-for-a-device","title":"Why do I not see all interfaces in the Overall traffic graph for a device?","text":"
By default numerous interface types and interface descriptions are excluded from this graph. The excluded defaults are:
"},{"location":"Support/FAQ/#how-do-i-migrate-my-librenms-install-to-another-server","title":"How do I migrate my LibreNMS install to another server?","text":"
If you are moving from one CPU architecture to another then you will need to dump the rrd files and re-create them. If you are in this scenario then you can use Dan Brown's migration scripts.
If you are just moving to another server with the same CPU architecture then the following steps should be all that's needed:
Install LibreNMS as per our normal documentation; you don't need to run through the web installer or building the sql schema.
Stop cron by commenting out all lines in /etc/cron.d/librenms
Dump the MySQL database librenms from your old server (mysqldump librenms -u root -p > librenms.sql)...
and import it into your new server (mysql -u root -p librenms < librenms.sql).
Copy the rrd/ folder to the new server.
Copy the .env and config.php files to the new server.
Check for modified files (eg specific os, ...) with git status and migrate them.
Ensure ownership of the copied files and folders (substitute your user if necessary) - chown -R librenms:librenms /opt/librenms
Delete old pollers on the GUI (gear icon --> Pollers --> Pollers)
Validate your installation (/opt/librenms/validate.php)
Re-enable cron by uncommenting all lines in /etc/cron.d/librenms
"},{"location":"Support/FAQ/#why-is-my-edgerouter-device-not-detected","title":"Why is my EdgeRouter device not detected?","text":"
If you have service snmp description set in your config, this will be why; please remove it. For some reason Ubnt have decided that setting this value should override the returned sysDescr value, which breaks our detection.
If you don't have that set then this may be then due to an update of EdgeOS or a new device type, please create an issue.
"},{"location":"Support/FAQ/#why-are-some-of-my-disks-not-showing","title":"Why are some of my disks not showing?","text":"
If you are monitoring a linux server then net-snmp doesn't always expose all disks via hrStorage (HOST-RESOURCES-MIB). We have additional support which will retrieve disks via dskTable (UCD-SNMP-MIB). To expose these disks you need to add additional config to your snmpd.conf file. For example, to expose /dev/sda1 which may be mounted as /storage you can specify:
disk /dev/sda1
Or
disk /storage
Restart snmpd and LibreNMS should populate the additional disk after a fresh discovery.
"},{"location":"Support/FAQ/#why-are-my-disks-reporting-an-incorrect-size","title":"Why are my disks reporting an incorrect size?","text":"
There is a known issue with net-snmp which causes it to report incorrect disk size and disk usage when the size of the disk (or raid) is larger than 16TB. A workaround has been implemented, but it is not active on CentOS 6.8 by default because it breaks official SNMP specs and as such could cause unexpected behaviour in other SNMP tools. You can activate the workaround by adding to /etc/snmp/snmpd.conf:
realStorageUnits 0
"},{"location":"Support/FAQ/#what-does-mean-ignore-alert-tag-on-device-component-service-and-port","title":"What does mean \\\"ignore alert tag\\\" on device, component, service and port?","text":"
Tag a device, component, service or port to ignore alerts. Alert checks will still run; however, the ignore tag can be read in alert rules. For example, on a device, if the devices.ignore = 0 or macros.device = 1 condition is set and the ignore alert tag is on, the alert rule won't match. The alert rule is ignored.
"},{"location":"Support/FAQ/#how-do-i-clean-up-alerts-from-my-switches-and-routers-about-ports-being-down-or-changing-speed","title":"How do I clean up alerts from my switches and routers about ports being down or changing speed","text":"
Some properties used for alerting (ending in _prev) are only updated when a change is detected, not every time the poller runs. This means that if you make a permanent change to your network, such as removing a device, performing a major firmware upgrade, or downgrading a WAN connection, you may be stuck with some unresolvable alerts.
If a port will be permanently down, it's best practice to configure it as administratively down on the device to prevent malicious access. You can then only run alerts on ports with ifAdminStatus = up. Otherwise, you'll need to reset the device port state history.
On the device generating alerts, use the cog button to go to the edit device page. At the top of the device settings pane is a button labelled Reset Port State - this will clear the historic state for all ports on that device, allowing any active alerts to clear.
"},{"location":"Support/FAQ/#why-cant-normal-and-global-view-users-see-oxidized","title":"Why can't Normal and Global View users see Oxidized?","text":"
Configs can often contain sensitive data. Because of that only global admins can see configs.
"},{"location":"Support/FAQ/#what-is-the-demo-user-for","title":"What is the Demo User for?","text":"
Demo users have full access, except that they can't add/edit users, delete devices, or change passwords.
"},{"location":"Support/FAQ/#why-does-modifying-default-alert-template-fail","title":"Why does modifying 'Default Alert Template' fail?","text":"
This template's entry could be missing in the database. Please run this from the LibreNMS directory:
"},{"location":"Support/FAQ/#why-would-alert-un-mute-itself","title":"Why would alert un-mute itself?","text":"
If an alert un-mutes itself, it most likely means the alert cleared and was then triggered again. Please review the eventlog; it will show this.
"},{"location":"Support/FAQ/#how-do-i-change-the-device-type","title":"How do I change the Device Type?","text":"
You can change the Device Type by going to the device you would like to change, then click on the Gear Icon -> Edit. If you would like to define custom types, we suggest using Device Groups. They will be listed in the menu similarly to device types.
"},{"location":"Support/FAQ/#editing-large-device-groups-gives-error-messages","title":"Editing large device groups gives error messages","text":"
If the device group contains a large number of devices, editing it from the UI might cause errors on the form even when all the data seems correct. This is caused by PHP's max_input_vars variable. You should be able to confirm that this is the case by inspecting PHP's error logs.
With the basic installation on Ubuntu 22.04 LTS with Nginx and PHP 8.1 FPM this value can be tuned by editing the file /etc/php/8.1/fpm/php.ini and adjusting the value of max_input_vars to be at least the size of the large group. In larger installations a value such as 10000 should suffice.
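A sketch of that edit, demonstrated on a throwaway copy (on a real system you would run the sed line against /etc/php/8.1/fpm/php.ini and then restart the php8.1-fpm service):

```shell
# Demonstrate the edit on a temp file standing in for php.ini
ini=$(mktemp)
printf ';max_input_vars = 1000\n' > "$ini"

# Uncomment the directive (if commented) and raise it to 10000
sed -i 's/^;\{0,1\}max_input_vars.*/max_input_vars = 10000/' "$ini"

cat "$ini"   # prints: max_input_vars = 10000
```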
"},{"location":"Support/FAQ/#where-do-i-update-my-database-credentials","title":"Where do I update my database credentials?","text":"
If you've changed your database credentials then you will need to update LibreNMS with those new details. Please edit .env
"},{"location":"Support/FAQ/#my-reverse-proxy-is-not-working","title":"My reverse proxy is not working","text":"
Make sure your proxy is passing the proper variables. At a minimum: X-Forwarded-For and X-Forwarded-Proto (X-Forwarded-Port if needed)
You also need to Set the proxy or proxies as trusted
If you are using a subdirectory on the reverse proxy and not on the actual web server, you may need to set APP_URL and $config['base_url'].
"},{"location":"Support/FAQ/#my-alerts-arent-being-delivered-on-time","title":"My alerts aren't being delivered on time","text":"
If you're running MySQL/MariaDB on a separate machine or container make sure the timezone is set properly on both the LibreNMS and MySQL/MariaDB instance. Alerts will be delivered according to MySQL/MariaDB's time, so a mismatch between the two can cause alerts to be delivered late if LibreNMS is on a timezone later than MySQL/MariaDB.
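One quick way to compare the two clocks is to print the current time as each side sees it; the timestamps and zones should match (the database credentials below are placeholders):

```shell
# Time as PHP (and thus LibreNMS) sees it
php -r 'echo date("Y-m-d H:i:s T"), PHP_EOL;'

# Time and timezone as MySQL/MariaDB sees it
mysql -u librenms -p -e 'SELECT NOW(), @@global.time_zone, @@session.time_zone;'
```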
You should probably have a look in the documentation concerning the new template syntax. Since version 1.42, syntax changed, and you basically need to convert your templates to this new syntax (including the titles).
"},{"location":"Support/FAQ/#how-do-i-use-trend-prediction-in-graphs","title":"How do I use trend prediction in graphs","text":"
As of Ver. 1.55 a new feature has been added where you can view a simple linear prediction in port graphs.
It doesn't work on non-port graphs or consolidated graphs at the time this FAQ entry was written.
To view a prediction:
Click on any port graph of any network device
Select a From date to your liking (not earlier than the device was actually added to LNMS), and then select a future date in the To field.
Click update
You should now see a linear prediction line on the graph.
"},{"location":"Support/FAQ/#how-do-i-move-only-the-db-to-another-server","title":"How do I move only the DB to another server?","text":"
There is already a reference for moving your whole LNMS installation to another server, but the following steps will help you split up an \"All-in-one\" installation into one LibreNMS installation with a separate database install. Note: This section assumes you have a MySQL/MariaDB instance
Stop the apache and mysql services in your LibreNMS installation.
Edit out all the cron entries in /etc/cron.d/librenms.
Dump your librenms database on your current install by issuing mysqldump librenms -u root -p > librenms.sql.
Stop and disable the MySQL server on your current install.
On your new server make sure you create a new database with the standard install command, no need to add a user for localhost though.
Copy this over to your new database server and import it with mysql -u root -p librenms < librenms.sql.
Log in to mysql and add permissions with the following commands:
GRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'IP_OF_YOUR_LNMS_SERVER' IDENTIFIED BY 'PASSWORD' WITH GRANT OPTION;\nGRANT ALL PRIVILEGES ON librenms.* TO 'librenms'@'FQDN_OF_YOUR_LNMS_SERVER' IDENTIFIED BY 'PASSWORD' WITH GRANT OPTION;\nFLUSH PRIVILEGES;\nexit;\n
Enable and restart MySQL server.
Edit your config.php file to point the install to the new database server location.
Very important: On your LibreNMS server, inside your install directory is a .env file; in it you need to edit the DB_HOST parameter to point to your new server location.
After all this is done, enable all the cron entries again and start apache.
"},{"location":"Support/FAQ/#what-are-the-optional-requirements-message-when-i-add-snmpv3-devices","title":"What are the \"optional requirements message\" when I add SNMPv3 devices?","text":"
When you add a device via the WebUI you may see a little message stating \"Optional requirements are not met so some options are disabled\". Do not panic. This simply means your system does not contain openssl >= 1.1 and net-snmp >= 5.8, which are the minimum specifications needed to be able to use SHA-224|256|384|512 as auth algorithms. For crypto algorithms AES-192, AES-256 you need net-snmp compiled with --enable-blumenthal-aes.
"},{"location":"Support/FAQ/#developing","title":"Developing","text":""},{"location":"Support/FAQ/#how-do-i-add-support-for-a-new-os","title":"How do I add support for a new OS?","text":"
Please see Supporting a new OS if you are adding all the support yourself, i.e. writing all of the supporting code. If you are only able to supply supporting info, and would like the help of others to write up the code, please follow the below steps.
"},{"location":"Support/FAQ/#what-information-do-you-need-to-add-a-new-os","title":"What information do you need to add a new OS?","text":"
Please open a feature request in the community forum and provide the output of Discovery, Poller, and Snmpwalk as separate non-expiring https://p.libren.ms/ links:
Preferably use the command line to obtain this information, especially if snmpwalk produces a large amount of data. Replace the relevant information in these commands, such as HOSTNAME and COMMUNITY. Use snmpwalk instead of snmpbulkwalk for v1 devices.
These commands will automatically upload the data to LibreNMS servers.
You can use the links provided by these commands within the community post.
If possible, please also provide what the OS name should be if it doesn't already exist, as well as any useful links (MIBs from the vendor, logo, etc.).
"},{"location":"Support/FAQ/#what-can-i-do-to-help","title":"What can I do to help?","text":"
Thanks for asking! Sometimes it's not quite so obvious, and everyone can contribute something different. So here are some ways you can help LibreNMS improve.
Code. This is a big thing. We want this community to grow by the software developing and evolving to cater for users needs. The biggest area that people can help make this happen is by providing code support. This doesn't necessarily mean contributing code for discovering a new device:
Web UI, a new look and feel has been adopted but we are not finished by any stretch of the imagination. Make suggestions, find and fix bugs, update the design / layout.
Poller / Discovery code. Improving it (we think a lot can be done to speed things up), adding new device support and updating old ones.
The LibreNMS main website is hosted on GitHub like the main repo, and we accept user contributions here as well :)
Hardware. We don't physically need it but if we are to add device support, it's made a whole lot easier with access to the kit via SNMP.
If you've got MIBs, they are handy as well :)
If you know the vendor and can get permission to use logos that's also great.
Bugs. Found one? We want to know about it. Most bugs are fixed after being spotted and reported by someone. I'd love to say we are amazing developers and will fix all bugs before you spot them, but that's just not true.
Feature requests. Can't code / won't code. No worries, chuck a feature request into our community forum with enough detail and someone will take a look. A lot of the time this might be what interests someone, they need the same feature or they just have time. Please be patient, everyone who contributes does so in their own time.
Documentation. Documentation can always be improved and every little bit helps. Not all features are currently documented or documented well, there's spelling mistakes etc. It's very easy to submit updates through the GitHub website, no git experience needed.
Be nice. This is the foundation of this project and we expect everyone to be nice. People will fall out, people will disagree, but please do so in a respectful way.
Ask questions. Sometimes just by asking questions you prompt deeper conversations that can lead us to somewhere amazing so please never be afraid to ask a question.
"},{"location":"Support/FAQ/#how-can-i-test-another-users-branch","title":"How can I test another users branch?","text":"
LibreNMS can be and is developed by anyone, which means someone may be working on a new feature or support for a device that you want. It can be helpful for others to test these new features, and Git makes this easy.
cd /opt/librenms\n
Firstly ensure that your current branch is in good state:
git status\n
If you see nothing to commit, working directory clean then let's go for it :)
Let's say that you want to test a user's (f0o) new development branch (issue-1337); then you can do the following:
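As a sketch, fetching and checking out that branch typically looks like this; the GitHub fork URL is an assumption based on the user and branch names above:

```shell
# Add the user's fork as a remote and fetch their branches
git remote add f0o https://github.com/f0o/librenms.git
git fetch f0o

# Check out the development branch locally for testing
git checkout -b issue-1337 f0o/issue-1337

# When done testing, switch back to your original branch
git checkout master
```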
With so many configuration possibilities, it's not uncommon for mistakes to be made when manually editing config.php. It's also impossible to validate user input in config.php when you're just using a text editor :)
So, to try and help with some of the general issues people come across we've put together a simple validation tool which at present will:
Validates config.php from a PHP perspective, including whitespace where it shouldn't be.
Connects to your MySQL server to verify credentials.
Checks if you are running the older alerting system.
Checks your RRD directory setup if not running rrdcached.
Checks disk space where /opt/librenms is installed.
Checks the location of fping.
Tests whether MySQL strict mode is enabled.
Tests for files not owned by the librenms user (if configured).
Optionally you can also pass -m and a module name for that to be tested. Current modules are:
mail - This will validate your mail transport configuration.
dist-poller - This will test your distributed poller configuration.
rrdcheck - This will test your rrd files to see if they are unreadable or corrupted (source of broken graphs).
You can run validate.php as root by executing ./validate.php within your install directory.
The output will provide you either a clean bill of health or a list of things you need to fix:
OK - This is a good thing, you can skip over these :)
WARN - You probably want to check this out.
FAIL - This is going to need your attention!
"},{"location":"Support/Install%20Validation/#validate-from-the-webui","title":"Validate from the WebUI","text":"
You can validate your LibreNMS install from the WebUI, using the nav bar and clicking on the little Gear Icon -> Validate Config.
After MySQL has been running for 24 hours, it's advisable to run MySQL Tuner, which will make suggestions on things you can change specific to your setup.
One recommendation we can make is that you set the following in my.cnf under a [mysqld] group:
innodb_flush_log_at_trx_commit = 0\n
You can also set this to 2. You could lose up to 1 second of MySQL data in the event that MySQL or your server crashes, but it makes an amazing difference in IO use.
Review the graph of poller module time taken under gear > pollers > performance to see which modules are consuming poller time. This data is shown per device under device > graphs > poller.
Disable polling (and discovery) modules that you do not need. You can do this globally in config.php like:
Disable OSPF polling
poller/poller_modules
lnms config:set poller_modules.ospf false\n
You can disable modules globally then re-enable the module per device or the opposite way. For a list of modules please see Poller modules
"},{"location":"Support/Performance/#snmp-max-repeaters","title":"SNMP Max Repeaters","text":"
We have support for SNMP Max repeaters which can be handy on devices where we poll a lot of ports or bgp sessions for instance and where snmpwalk or snmpbulkwalk is used. This needs to be enabled on a per device basis under edit device -> snmp -> Max repeaters.
You can also set this globally with the config option $config['snmp']['max_repeaters'] = X;.
It's advisable to test the time taken to snmpwalk IF-MIB or something similar to work out what the best value is. To do this, run the following but replace -REPEATERS- with varying numbers from 10 up to around 50. You will also need to set the correct SNMP version, hostname and community string:
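For example, a timing run might look like this; HOSTNAME and COMMUNITY are placeholders, and -Cr sets the max-repeaters value for the walk:

```shell
# Time an IF-MIB walk with different max-repeaters values and compare
time snmpbulkwalk -v2c -c COMMUNITY -Cr10 HOSTNAME IF-MIB::ifTable > /dev/null
time snmpbulkwalk -v2c -c COMMUNITY -Cr30 HOSTNAME IF-MIB::ifTable > /dev/null
time snmpbulkwalk -v2c -c COMMUNITY -Cr50 HOSTNAME IF-MIB::ifTable > /dev/null
```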
NOTE: Do not go blindly setting this value as you can impact polling negatively.
"},{"location":"Support/Performance/#snmp-max-oids","title":"SNMP Max OIDs","text":"
For sensors polling we now do bulk snmp gets to speed things up. By default this is ten but you can overwrite this per device under edit device -> snmp -> Max OIDs.
You can also set this globally with the config option $config['snmp']['max_oid'] = X;.
NOTE: It is advisable to monitor sensor polling when you change this to ensure you don't set the value too high.
If your devices are slow to respond then you will need to increase the timeout value and potentially the interval value. However, if your network is stable, you can increase poller performance by dropping the count value to 1 and/or the timeout value to 200 or 300 milliseconds:
This means that we no longer delay each ICMP packet sent (we send 3 in total by default) by 0.5 seconds. With only 1 ICMP packet being sent, we will receive a response quicker. The defaults mean it will take at least 1 second for a response no matter how quickly the ICMP packet is returned.
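As a hedged sketch, assuming the fping settings live under the fping_options config group (the exact option names here are assumptions based on the values described above), the change could be applied like this:

```shell
# Send a single ICMP packet per poll instead of the default 3 (assumed option name)
lnms config:set fping_options.count 1

# Lower the per-packet timeout to 300 ms (assumed option name)
lnms config:set fping_options.timeout 300
```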
poller-wrapper.py defaults to using 16 threads, which isn't necessarily optimal. A general rule of thumb is 2 threads per core, but we suggest you play around with lowering/increasing the number until you find the optimal value. Keep in mind that this doesn't always help; it depends on your system and CPU, so be careful. This can be changed in the cron job for LibreNMS, usually in /etc/cron.d/librenms, by changing the "16".
Please also see Dispatcher Service"},{"location":"Support/Performance/#recursive-dns","title":"Recursive DNS","text":"
If your install uses hostnames for devices and you have quite a lot of them, it's advisable to set up a local recursive DNS instance on the LibreNMS server. Something like pdns-recursor can be used; then configure /etc/resolv.conf to use 127.0.0.1 for queries.
"},{"location":"Support/Performance/#per-port-polling-experimental","title":"Per port polling - experimental","text":"
By default the polling ports module will walk ifXEntry plus some items from ifEntry regardless of the port's state. So if a port is marked as deleted (because you don't want to see it) or is disabled, we still collect data for it. For the most part this is fine, as the walks are quite quick. However, for devices with a lot of ports where a good percentage of those are either deleted or disabled, this approach isn't optimal. To counter this you can enable 'selected port polling' per device within the edit device -> misc section, or by globally enabling it (not recommended): $config['polling']['selected_ports'] = true;. The global option is truly not recommended, as it has been proven to affect the CPU usage of your poller negatively. You can also set it for a specific OS: $config['os']['ios']['polling']['selected_ports'] = true;.
Running ./scripts/collect-port-polling.php will poll your devices with both full and selective polling, display a table with the difference and optionally enable or disable selected ports polling for devices which would benefit from a change. Note that it doesn't continuously re-evaluate this, it will only be updated when the script is run. There are a number of options:
-h <device id> | <device hostname wildcard> Poll single device or wildcard hostname\n-e <percentage> Enable/disable selected ports polling for devices which would benefit <percentage> from a change\n
If you want to run this script to have it set selected port polling on devices where a change of 10% or more is evaluated, run it with ./scripts/collect-port-polling.php -e 10. But note: it will not blindly use only the 10%. There is a second condition that the change has to be more than one second in polling time."},{"location":"Support/Performance/#web-interface","title":"Web interface","text":""},{"location":"Support/Performance/#http2","title":"HTTP/2","text":"
If you are running https then you should enable http/2 support in whatever web server you use:
For Nginx (1.9.5 and above) change listen 443 ssl; to listen 443 ssl http2; in the Virtualhost config.
For Apache (2.4.17 and above) set Protocols h2 http/1.1 in the Virtualhost config.
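For Nginx, the change amounts to one word on the existing listener; in this hypothetical fragment the server name is a placeholder:

```nginx
server {
    # "http2" added to the existing ssl listener
    listen 443 ssl http2;
    server_name librenms.example.com;
}
```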
A lot of performance can be gained from setting up php-opcache correctly.
Note: Memory based caching with PHP cli will increase memory usage and slow things down. File based caching is not as fast as memory based and is more likely to have stale cache issues.
Some distributions allow separate cli, mod_php and php-fpm configurations, we can use this to set the optimal config.
"},{"location":"Support/Performance/#for-web-servers-using-mod_php-and-php-fpm","title":"For web servers using mod_php and php-fpm","text":"
Update your web PHP opcache.ini. Possible locations: /etc/php/8.1/fpm/conf.d/opcache.ini, /etc/php.d/opcache.ini, or /etc/php/conf.d/opcache.ini.
Create a cache directory that is writable by the librenms user first: sudo mkdir -p /tmp/cache && sudo chmod 775 /tmp/cache && sudo chown -R librenms /tmp/cache
Update your PHP opcache.ini. Possible locations: /etc/php/8.1/cli/conf.d/opcache.ini, /etc/php.d/opcache.ini, or /etc/php/conf.d/opcache.ini.
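A minimal sketch of such an opcache.ini, assuming the /tmp/cache directory created above is used as the CLI file cache (tune values to your environment):

```ini
; Enable opcache, including for the CLI
opcache.enable=1
opcache.enable_cli=1

; Back the CLI cache with the file cache directory created earlier
opcache.file_cache=/tmp/cache
```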
If you are having caching issues, you can clear the file based opcache with rm -rf /tmp/cache.
Debian 12 users: be aware that the current stable PHP 8.2 release (8.2.7) creates segmentation faults when opcache uses the file cache. The issue should be this one: https://github.com/php/php-src/issues/10914. Using Sury packages or disabling the file cache solves the issue.
Description:\n Poll data from device(s) as defined by discovery\n\nUsage:\n device:poll [options] [--] <device spec>\n\nArguments:\n device spec Device spec to poll: device_id, hostname, wildcard (*), odd, even, all\n\nOptions:\n -m, --modules=MODULES Specify single module to be run. Comma separate modules, submodules may be added with /\n -x, --no-data Do not update datastores (RRD, InfluxDB, etc)\n -h, --help Display help for the given command. When no command is given display help for the list command\n -q, --quiet Do not output any message\n -V, --version Display this application version\n --ansi|--no-ansi Force (or disable --no-ansi) ANSI output\n -n, --no-interaction Do not ask any interactive question\n --env[=ENV] The environment the command should run under\n -v|vv|vvv, --verbose Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug\n
These are the default poller config items. You can globally disable a module by setting it to 0. If you just want to disable it for one device then you can do this within the WebUI Device -> Edit -> Modules.
"},{"location":"Support/Poller%20Support/#os-based-poller-config","title":"OS based Poller config","text":"
You can enable or disable modules for a specific OS by adding the corresponding line to config.php. OS-based settings take precedence over global ones; device-based settings take precedence over all others.
Poller performance can be improved by deactivating all modules that are not supported by a specific OS.
E.g. to deactivate the spanning tree module but activate the unix-agent module for the linux OS:
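The original example isn't reproduced in this extract, but following the lnms config:set style used earlier in this document and the os-based config layout, it would look something like this (the exact module keys are assumptions):

```shell
# Disable the spanning tree module for all devices detected as "linux"
lnms config:set os.linux.poller_modules.stp false

# Enable the unix-agent module for the same OS
lnms config:set os.linux.poller_modules.unix-agent true
```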
To provide debugging output you will need to run the poller process with the -vv flag. You can do this either against all modules, single or multiple modules:
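For example, using the device:poll command shown above (HOSTNAME and the module names are placeholders):

```shell
# Debug all modules for a device
lnms device:poll HOSTNAME -vv

# Debug a single module
lnms device:poll HOSTNAME -vv -m ports

# Debug multiple modules (comma separated)
lnms device:poll HOSTNAME -vv -m ports,processors
```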
Using -vv shouldn't output much sensitive information; -vvv will, so it is advisable to sanitise that output before pasting it anywhere, as the debug output will contain SNMP details among other items, including port descriptions.
The output will contain:
DB Updates
RRD Updates
SNMP Response
"},{"location":"Support/Remote-Monitoring-VPN/","title":"Remote monitoring using tinc VPN","text":"
This article describes how to use tinc to connect several remote sites and their subnets to your central monitoring server. This will let you connect to devices on remote private IP ranges through one gateway on each site, routing them securely back to your LibreNMS installation.
"},{"location":"Support/Remote-Monitoring-VPN/#configuring-the-monitoring-server","title":"Configuring the monitoring server","text":"
tinc should be available on nearly all Linux distributions via package management. If you are running something different, just take a look at tinc's homepage to find an appropriate version for your operating system: https://www.tinc-vpn.org/download/
I am going to describe the setup for Debian-based systems, but there are virtually no differences for e.g. CentOS or similar.
First make sure your firewall accepts connections on port 655 UDP and TCP.
Then install tinc via apt-get install tinc.
Create the following directory structure to hold all your configuration files: mkdir -p /etc/tinc/myvpn/hosts. "myvpn" is your VPN network's name and can be chosen freely.
Create your main configuration file: vim /etc/tinc/myvpn/tinc.conf
Name = monitoring\nAddressFamily = ipv4\nDevice = /dev/net/tun\n
Next we need network up- and down scripts to define a few network settings for inside our VPN: vim /etc/tinc/myvpn/tinc-up
#!/bin/sh\nifconfig $INTERFACE 10.6.1.1 netmask 255.255.255.0\nip route add 10.6.1.0/24 dev $INTERFACE\nip route add 10.0.0.0/22 dev $INTERFACE\nip route add 10.100.0.0/22 dev $INTERFACE\nip route add 10.200.0.0/22 dev $INTERFACE\n
In this example we have 10.6.1.1 as the VPN IP address for the monitoring server on a /24 subnet. $INTERFACE will be automatically substituted with the VPN interface's name (derived from "myvpn" in this case). Then we have a route for the VPN subnet, so we can reach other sites via their VPN addresses. The last 3 lines designate the remote subnets: in this example I want to reach devices on three different remote private /22 subnets and monitor devices on them from this server, so I set up routes to each of those remote sites in my tinc-up script.
The tinc-down script is relatively simple, as it just takes down the VPN interface, which should get rid of the routes as well: vim /etc/tinc/myvpn/tinc-down
#!/bin/sh\nifconfig $INTERFACE down\n
Make sure your scripts can be executed: chmod +x /etc/tinc/myvpn/tinc-*
As a last step we need a host configuration file. This should be named the same as the \"Name\" you defined in tinc.conf: vim /etc/tinc/myvpn/hosts/monitoring
Subnet = 10.6.1.1/32\n
On the monitoring server we just fill in the subnet and do not define its external IP address, to make sure it listens on all available external interfaces.
It's time to use tinc to create our key-pair: tincd -n myvpn -K
Now the file /etc/tinc/myvpn/hosts/monitoring should have an RSA public key appended to it and your private key should reside in /etc/tinc/myvpn/rsa_key.priv.
To make sure that the connection will be restored after each reboot, you can add your VPN name to /etc/tinc/nets.boot.
Now you can start tinc with tincd -n myvpn and it will listen for your remote sites to connect to it.
"},{"location":"Support/Remote-Monitoring-VPN/#remote-site-configuration","title":"Remote site configuration","text":"
Essentially the same steps as for your central monitoring server apply to all remote gateway devices. These can be routers, or just any computer or VM running on the remote subnet that is able to reach the internet and forward IP packets.
Create main configuration: vim /etc/tinc/myvpn/tinc.conf
Name = remote1\nAddressFamily = ipv4\nDevice = /dev/net/tun\nConnectTo = monitoring\n
Create up script: vim /etc/tinc/myvpn/tinc-up
#!/bin/sh\nifconfig $INTERFACE 10.6.1.2 netmask 255.255.255.0\nip route add 10.6.1.2/32 dev $INTERFACE\n
Create down script: vim /etc/tinc/myvpn/tinc-down
#!/bin/sh\nifconfig $INTERFACE down\n
Make executable: chmod +x /etc/tinc/myvpn/tinc*
Create device configuration: vim /etc/tinc/myvpn/hosts/remote1
Address = 198.51.100.2\nSubnet = 10.0.0.0/22\n
This defines the device IP address outside of the VPN and the subnet it will expose.
Copy over the monitoring server's host configuration (including the embedded public key) and add its external IP address: vim /etc/tinc/myvpn/hosts/monitoring
Address = 203.0.113.6\nSubnet = 10.6.1.1/32\n\n-----BEGIN RSA PUBLIC KEY-----\nVeDyaqhKd4o2Fz...\n
Generate this device's keys: tincd -n myvpn -K
Copy over this device's host file, including the embedded public key, to your monitoring server.
Add the name of the VPN to /etc/tinc/nets.boot if you want to autostart the connection upon reboot.
Start tinc: tincd -n myvpn
These steps can basically be repeated for every remote site, just choosing different names and other internal IP addresses. In my case I connected 3 remote sites running behind Ubiquiti EdgeRouters. Since those devices let me install software through Debian's package management, it was very easy to set up. Just create the necessary configuration files and network scripts on each device, and distribute the host configurations including the public keys to each device that will actively connect back.
Now you can add all devices you want to monitor in LibreNMS using their internal IP address on the remote subnets or using some form of name resolution. I opted to declare the most important devices in my /etc/hosts file on the monitoring server.
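For example, hypothetical /etc/hosts entries on the monitoring server for the three remote /22 subnets above might look like this (the hostnames are invented):

```
10.0.0.1    remote1-router
10.100.0.1  remote2-router
10.200.0.1  remote3-router
```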
As an added bonus, tinc is a mesh VPN, so in theory you could specify several "ConnectTo" entries on each device and they should hold connections even if one network path goes down.
# SNMPv2c\n\nsnmp-server community <YOUR-COMMUNITY> RO\nsnmp-server contact <YOUR-CONTACT>\nsnmp-server location <YOUR-LOCATION>\n\n# SNMPv3\n\nsnmp-server group <GROUP-NAME> v3 priv\nsnmp-server user <USER-NAME> <GROUP-NAME> v3 auth sha <AUTH-PASSWORD> priv aes 128 <PRIV-PASSWORD>\nsnmp-server contact <YOUR-CONTACT>\nsnmp-server location <YOUR-LOCATION>\n\n# Note: The following is also required if using SNMPv3 and you want to populate the FDB table, STP info and others.\n\nsnmp-server group <GROUP-NAME> v3 priv context vlan- match prefix\n
Note: If the device is unable to find the SNMP user, reboot the ASA. Once rebooted, continue the steps as normal.
Upgrade to the latest available manufacturer firmware that applies to your hardware revision; refer to the release notes. For devices which can use the Lx releases, do install LD.
After rebooting the card (safe for connected load), configure Network, System and Access Control. Save config for each step.
Configure SNMP. The device defaults to both SNMP v1 and v3 enabled, with default credentials; disable what you do not need. SNMP v3 works, but uses MD5/DES, so you may have to add another section to your SNMP credentials table in LibreNMS. Save.
In some cases of advanced routing you may need to explicitly set the source IP address from which the SNMP daemon will reply: /snmp set src-address=<SELF_IP_ADDRESS>
Note that you need to allow SNMP on the required interfaces. To do that, create an "Interface Mgmt" network profile for standard interfaces, and allow SNMP under "Device > Management > Management Interface Settings" for the out-of-band management interface.
One may also configure SNMP from the command line, which is useful when you need to configure more than one firewall for SNMP monitoring. Log into the firewall(s) via ssh, and perform these commands for basic SNMPv3 configuration:
username@devicename> configure\nusername@devicename# set deviceconfig system service disable-snmp no\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 views pa view iso oid 1.3.6.1\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 views pa view iso option include\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 views pa view iso mask 0xf0\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 users authpriv authpwd YOUR_AUTH_SECRET\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 users authpriv privpwd YOUR_PRIV_SECRET\nusername@devicename# set deviceconfig system snmp-setting access-setting version v3 users authpriv view pa\nusername@devicename# set deviceconfig system snmp-setting snmp-system location \"Yourcity, Yourcountry [60.4,5.31]\"\nusername@devicename# set deviceconfig system snmp-setting snmp-system contact noc@your.org\nusername@devicename# commit\nusername@devicename# exit\n
If you use the HTTP interface: 1. Access the legacy web admin page and log in 2. Go to System > Advanced Configuration 3. Go to the sub-tab \"SNMP\" > \"Community\" 4. Click \"Add Community Group\" 5. Enter your SNMP community, IP address and click submit 6. Go to System > Summary 7. Go to the sub-tab \"Description\" 8. Enter your System Name, System Location and System Contact 9. Click submit 10. Click \"Save Configuration\"
Log on to your ESX server by means of ssh. You may have to enable the ssh service in the GUI first. From the CLI, execute the following commands:
esxcli system snmp set --authentication SHA1\nesxcli system snmp set --privacy AES128\nesxcli system snmp hash --auth-hash YOUR_AUTH_SECRET --priv-hash YOUR_PRIV_SECRET --raw-secret\n
esxcli system snmp set --users <username>/f3d8982fc28e8d1346c26eee49eb2c4a5950c934/0596ab30b315576a4e9f7d7bde65bf49b749e335/priv\nesxcli system snmp set -L \"Yourcity, Yourcountry [60.4,5.3]\"\nesxcli system snmp set -C noc@your.org\nesxcli system snmp set --enable true\n
Note: In case of SNMP timeouts, disable the firewall with esxcli network firewall set --enabled false. If SNMP timeouts still occur with the firewall disabled, migrate VMs if needed and reboot the ESXi host.
Replace your snmpd.conf file with the example below and set an appropriate community string in place of "RANDOMSTRINGGOESHERE".
vi /etc/snmp/snmpd.conf\n
# Change RANDOMSTRINGGOESHERE to your preferred SNMP community string\ncom2sec readonly default RANDOMSTRINGGOESHERE\n\ngroup MyROGroup v2c readonly\nview all included .1 80\naccess MyROGroup \"\" any noauth exact all none none\n\nsyslocation Rack, Room, Building, City, Country [GPSX,Y]\nsyscontact Your Name <your@email.address>\n\n#Distro Detection\nextend distro /usr/bin/distro\n#Hardware Detection (uncomment to enable)\n#extend hardware '/bin/cat /sys/devices/virtual/dmi/id/product_name'\n#extend manufacturer '/bin/cat /sys/devices/virtual/dmi/id/sys_vendor'\n#extend serial '/bin/cat /sys/devices/virtual/dmi/id/product_serial'\n
NOTE: On some systems snmpd runs as its own user, which means it can't read /sys/devices/virtual/dmi/id/product_serial, which is mode 0400. One solution is to include @reboot chmod 444 /sys/devices/virtual/dmi/id/product_serial in the crontab for root or equivalent.
Non-x86 or non-SMBIOS-based systems, such as ARM-based Raspberry Pi units, should query device tree locations for this metadata, for example:
extend hardware '/bin/cat /sys/firmware/devicetree/base/model'\nextend serial '/bin/cat /sys/firmware/devicetree/base/serial-number'\n
The LibreNMS server includes a copy of this example here:
/opt/librenms/snmpd.conf.example\n
The binary /usr/bin/distro must be copied from the original source repository:
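As a sketch, assuming the script still lives at snmp/distro in the librenms-agent repository (verify the path before use):

```shell
# Fetch the distro script from the librenms-agent repo and make it executable
curl -o /usr/bin/distro https://raw.githubusercontent.com/librenms/librenms-agent/master/snmp/distro
chmod +x /usr/bin/distro
```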
Make sure the agent listens to all interfaces by adding the following line inside snmpd.conf:
agentAddress udp:161,udp6:161\n
This line tells the agent to listen for connections on all IPv4 and IPv6 interfaces.
Uncomment and change the following line to give read access to the username created above (rouser is what LibreNMS uses):
#rouser authPrivUser priv\n
Change the following details inside snmpd.conf
syslocation Rack, Room, Building, City, Country [GPSX,Y]\nsyscontact Your Name <your@email.address>\n
Save and exit the file
"},{"location":"Support/SNMP-Configuration-Examples/#restart-the-snmpd-service","title":"Restart the snmpd service","text":""},{"location":"Support/SNMP-Configuration-Examples/#centos-6-red-hat-6","title":"CentOS 6 / Red hat 6","text":"
service snmpd restart\n
"},{"location":"Support/SNMP-Configuration-Examples/#centos-7-red-hat-7","title":"CentOS 7 / Red hat 7","text":"
"},{"location":"Support/SNMP-Configuration-Examples/#arch-linux-snmpd-v2","title":"Arch Linux (snmpd v2)","text":"
Install the SNMP package: pacman -S net-snmp
Create the SNMP folder: mkdir /etc/snmp/
Set the community: echo rocommunity read_only_community_string >> /etc/snmp/snmpd.conf
Set the contact: echo syscontact Firstname Lastname >> /etc/snmp/snmpd.conf
Set the location: echo syslocation L69 4RX >> /etc/snmp/snmpd.conf
Enable startup: systemctl enable snmpd.service
Start snmpd: systemctl restart snmpd.service
"},{"location":"Support/SNMP-Configuration-Examples/#windows-server-2008-r2","title":"Windows Server 2008 R2","text":"
Log in to your Windows Server 2008 R2
Start \"Server Manager\" under \"Administrative Tools\"
Click \"Features\" and then click \"Add Feature\"
Check (if not checked) \"SNMP Service\", click \"Next\" until \"Install\"
Start \"Services\" under \"Administrative Tools\"
Edit \"SNMP Service\" properties
Go to the security tab
In \"Accepted community name\" click \"Add\" to add your community string and permission
In \"Accept SNMP packets from these hosts\" click \"Add\" and add your LibreNMS server IP address
Validate change by clicking \"Apply\"
"},{"location":"Support/SNMP-Configuration-Examples/#windows-server-2012-r2-and-newer","title":"Windows Server 2012 R2 and newer","text":""},{"location":"Support/SNMP-Configuration-Examples/#gui","title":"GUI","text":"
Log in to your Windows Server 2012 R2 or newer
Start \"Server Manager\" under \"Administrative Tools\"
Click \"Manage\" and then \"Add Roles and Features\"
Continue by pressing \"Next\" to the \"Features\" menu
Install (if not installed) \"SNMP Service\"
Start \"Services\" under \"Administrative Tools\"
Edit \"SNMP Service\" properties
Go to the security tab
In \"Accepted community name\" click \"Add\" to add your community string and permission
In \"Accept SNMP packets from these hosts\" click \"Add\" and add your LibreNMS server IP address
#Allow read-access with the following SNMP Community String:\nrocommunity public\n\n# all other settings are optional but recommended.\n\n# Location of the device\nsyslocation data centre A\n\n# Human Contact for the device\nsyscontact SysAdmin\n\n# System Name of the device\nsysName SystemName\n\n# the system OID for this device. This is optional but recommended,\n# to identify this as a MAC OS system.\nsysobjectid 1.3.6.1.4.1.8072.3.2.16\n
To use Wireless Sensors on AsuswrtMerlin, an agent of sorts is required. The purpose of the agent is to execute on the client (AsuswrtMerlin) side, to ensure that the needed Wireless Sensor information is returned for SNMP queries (from LibreNMS).
Two items are required on the AsuswrtMerlin side: scripts to generate the necessary information (for SNMP replies), and an SNMP extend configuration update (to return that information for the expected query).
1: Install the scripts:
Copy the scripts from librenms-agent/snmp/Openwrt, preferably into /etc/librenms on AsuswrtMerlin (and add this directory to /etc/sysupgrade.conf so it survives firmware updates).
The only file that needs to be edited is wlInterfaces.txt, which maps each wireless interface to the desired display name in LibreNMS. For example,
wlan0,wl-2.4G\nwlan1,wl-5.0G\n
2: Update the AsuswrtMerlin SNMP configuration, adding extend support for the Wireless Sensor queries:
vi /etc/config/snmpd, adding the following entries (assuming the scripts are installed in /etc/librenms, and are executable), and update the network interfaces as needed to match the hardware,
config extend\n option name interfaces\n option prog \"/bin/cat /etc/librenms/wlInterfaces.txt\"\nconfig extend\n option name clients-wlan0\n option prog \"/etc/librenms/wlClients.sh wlan0\"\nconfig extend\n option name clients-wlan1\n option prog \"/etc/librenms/wlClients.sh wlan1\"\nconfig extend\n option name clients-wlan\n option prog \"/etc/librenms/wlClients.sh\"\nconfig extend\n option name frequency-wlan0\n option prog \"/etc/librenms/wlFrequency.sh wlan0\"\nconfig extend\n option name frequency-wlan1\n option prog \"/etc/librenms/wlFrequency.sh wlan1\"\nconfig extend\n option name rate-tx-wlan0-min\n option prog \"/etc/librenms/wlRate.sh wlan0 tx min\"\nconfig extend\n option name rate-tx-wlan0-avg\n option prog \"/etc/librenms/wlRate.sh wlan0 tx avg\"\nconfig extend\n option name rate-tx-wlan0-max\n option prog \"/etc/librenms/wlRate.sh wlan0 tx max\"\nconfig extend\n option name rate-tx-wlan1-min\n option prog \"/etc/librenms/wlRate.sh wlan1 tx min\"\nconfig extend\n option name rate-tx-wlan1-avg\n option prog \"/etc/librenms/wlRate.sh wlan1 tx avg\"\nconfig extend\n option name rate-tx-wlan1-max\n option prog \"/etc/librenms/wlRate.sh wlan1 tx max\"\nconfig extend\n option name rate-rx-wlan0-min\n option prog \"/etc/librenms/wlRate.sh wlan0 rx min\"\nconfig extend\n option name rate-rx-wlan0-avg\n option prog \"/etc/librenms/wlRate.sh wlan0 rx avg\"\nconfig extend\n option name rate-rx-wlan0-max\n option prog \"/etc/librenms/wlRate.sh wlan0 rx max\"\nconfig extend\n option name rate-rx-wlan1-min\n option prog \"/etc/librenms/wlRate.sh wlan1 rx min\"\nconfig extend\n option name rate-rx-wlan1-avg\n option prog \"/etc/librenms/wlRate.sh wlan1 rx avg\"\nconfig extend\n option name rate-rx-wlan1-max\n option prog \"/etc/librenms/wlRate.sh wlan1 rx max\"\nconfig extend\n option name noise-floor-wlan0\n option prog \"/etc/librenms/wlNoiseFloor.sh wlan0\"\nconfig extend\n option name noise-floor-wlan1\n option prog \"/etc/librenms/wlNoiseFloor.sh 
wlan1\"\nconfig extend\n option name snr-wlan0-min\n option prog \"/etc/librenms/wlSNR.sh wlan0 min\"\nconfig extend\n option name snr-wlan0-avg\n option prog \"/etc/librenms/wlSNR.sh wlan0 avg\"\nconfig extend\n option name snr-wlan0-max\n option prog \"/etc/librenms/wlSNR.sh wlan0 max\"\nconfig extend\n option name snr-wlan1-min\n option prog \"/etc/librenms/wlSNR.sh wlan1 min\"\nconfig extend\n option name snr-wlan1-avg\n option prog \"/etc/librenms/wlSNR.sh wlan1 avg\"\nconfig extend\n option name snr-wlan1-max\n option prog \"/etc/librenms/wlSNR.sh wlan1 max\"\n
NOTE, any of the scripts above can be tested simply by running the corresponding command.
NOTE, to check the output data from any of these extensions, on the LibreNMS machine, run (for example),
snmpwalk -v 2c -c public -Osqnv <openwrt-host> 'NET-SNMP-EXTEND-MIB::nsExtendOutputFull.\"frequency-wlan0\"'
NOTE, on the LibreNMS machine, ensure that snmp-mibs-downloader is installed.
NOTE, on the AsuswrtMerlin machine, ensure that distro is installed (i.e. that the OS is correctly detected!).
3: Restart the snmp service on AsuswrtMerlin:
service snmpd restart
And then wait for discovery and polling on LibreNMS!
The pCOWeb card interfaces the pCO system to Ethernet-based networks and HVAC management protocols such as SNMP. The problem with this card is that the implementation depends on the final manufacturer of the HVAC (Heating, Ventilation and Air Conditioning) system rather than on a standard defined by Carel. Each pCOweb card therefore has a different configuration and needs a different MIB depending on the manufacturer's implementation.
The main problem is that LibreNMS will, by default, discover this card as pCOweb and not as your real manufacturer's device. A workaround exists, but it is independent of LibreNMS: you first need to configure your pCOWeb card through its admin interface.
"},{"location":"Support/Device-Notes/Carel-pCOweb-Devices/#accessing-the-pcoweb-card","title":"Accessing the pCOWeb card","text":"
Log on to the configuration page of the pCOWeb card. The pCOWeb interface is not always reachable at the IP address directly, but rather under a subdirectory. If you can't reach the configuration page directly, try <ip address>/config. The default username and password are admin/fadmin. Modern browsers may require you to enter these credentials 2 or 3 times.
"},{"location":"Support/Device-Notes/Carel-pCOweb-Devices/#configuring-the-pcoweb-card-snmp-for-librenms","title":"Configuring the pCOweb card SNMP for LibreNMS","text":"
First you need to configure your SNMP card using the admin interface. An SNMP tab in the configuration menu lets you choose a System OID and an Enterprise OID. This is a little tricky, but based on this information we defined a \"standard\" for all implementations of Carel products in LibreNMS.
The base Carel OID is 1.3.6.1.4.1.9839. To this OID we append the final manufacturer's Enterprise OID. You can find all enterprise OIDs by following this link. This allows us to create specific support for the device: LibreNMS uses this value to detect which HVAC device is connected to the pCOWeb card.
Example for the Rittal IT Chiller that uses a pCOweb card:
Base Carel OID : 1.3.6.1.4.1.9839
Rittal (the manufacturer) base enterprise OID : 2606
Adding value to identify this device in LibreNMS : 1
Complete System OID for a Rittal Chiller using a Carel pCOweb card: 1.3.6.1.4.1.9839.2606.1
Use 9839 as Enterprise OID
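The OID construction above is plain concatenation of the three parts. As a minimal shell sketch using the Rittal values from this example:

```shell
# Build the System OID for a pCOweb-attached device from its parts.
# Values below are the Rittal IT Chiller example from this page.
CAREL_BASE="1.3.6.1.4.1.9839"   # base Carel OID
VENDOR_ENT="2606"               # Rittal's IANA enterprise number
DEVICE_ID="1"                   # value chosen to identify this device

SYSTEM_OID="${CAREL_BASE}.${VENDOR_ENT}.${DEVICE_ID}"
echo "$SYSTEM_OID"              # prints 1.3.6.1.4.1.9839.2606.1
```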
The way this works is that the pCOWeb card pretends to be another device: it simply inserts the \"Enterprise OID\" in place of the vendor ID in the OID.
The table below lists the values needed for devices that are already supported. LibreNMS is ready for these devices; you only need to configure your pCOweb card with the corresponding System OID and Enterprise OID:
Manufacturer, Description, System OID, Enterprise OID:
Rittal IT Chiller: System OID 1.3.6.1.4.1.9839.2606.1, Enterprise OID 9839
Rittal LCP DX 3311: System OID 1.3.6.1.4.1.9839.2606.3311, Enterprise OID 9839.2606
"},{"location":"Support/Device-Notes/Carel-pCOweb-Devices/#unsupported-devices","title":"Unsupported devices","text":"
After constructing the correct System OID for your SNMP card, you can start a new LibreNMS OS implementation and use this OID as the sysObjectID in the YAML definition file.
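As a hedged sketch of what such a definition might look like (the os name, text, and file name below are hypothetical placeholders, not a shipped definition), using the constructed Rittal example OID:

```yaml
# includes/definitions/examplehvac.yaml -- hypothetical example file
os: examplehvac
text: 'Example HVAC (Carel pCOweb)'
type: appliance
discovery:
    - sysObjectID:
        - .1.3.6.1.4.1.9839.2606.1
```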
To gather Port IP info & routing info for Fortigates, disable the append-index feature. This feature appends VDOM to the index, breaking standard MIBs.
config system snmp sysinfo\n set append-index disable\nend\n
To use Wireless Sensors on Openwrt, an agent of sorts is required. The purpose of the agent is to execute on the client (Openwrt) side, to ensure that the needed Wireless Sensor information is returned for SNMP queries (from LibreNMS).
Two items are required on the Openwrt side - scripts to generate the necessary information (for SNMP replies), and an SNMP extend configuration update (to return the information vs. the expected query).
1: Install the scripts:
Copy the scripts from the librenms-agent repository, preferably into /etc/librenms on Openwrt (and add this directory to /etc/sysupgrade.conf so it survives firmware upgrades):
The only file that needs to be edited is wlInterfaces.txt, which maps each wireless interface to the desired display name in LibreNMS. For example,
wlan0,wl-2.4G\nwlan1,wl-5.0G\n
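To sanity-check the mapping file before enabling the extends, a quick parse can be run (the /tmp path here is purely for illustration; on Openwrt the file lives in /etc/librenms):

```shell
# Write a sample mapping (illustration only; use your real
# /etc/librenms/wlInterfaces.txt on the device).
printf 'wlan0,wl-2.4G\nwlan1,wl-5.0G\n' > /tmp/wlInterfaces.txt

# Print each wireless interface with the display name LibreNMS will use.
awk -F, '{ printf "%s -> %s\n", $1, $2 }' /tmp/wlInterfaces.txt
# prints:
#   wlan0 -> wl-2.4G
#   wlan1 -> wl-5.0G
```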
2: Update the Openwrt SNMP configuration, adding extend support for the OS detection and the Wireless Sensor queries:
vi /etc/config/snmpd, adding the following entries (assuming the scripts are installed in /etc/librenms, and are executable), and update the network interfaces as needed to match the hardware,
config extend\n option name distro\n option prog '/etc/librenms/distro'\nconfig extend\n option name hardware\n option prog '/bin/cat'\n option args '/sys/firmware/devicetree/base/model'\nconfig extend\n option name interfaces\n option prog \"/bin/cat /etc/librenms/wlInterfaces.txt\"\nconfig extend\n option name clients-wlan0\n option prog \"/etc/librenms/wlClients.sh wlan0\"\nconfig extend\n option name clients-wlan1\n option prog \"/etc/librenms/wlClients.sh wlan1\"\nconfig extend\n option name clients-wlan\n option prog \"/etc/librenms/wlClients.sh\"\nconfig extend\n option name frequency-wlan0\n option prog \"/etc/librenms/wlFrequency.sh wlan0\"\nconfig extend\n option name frequency-wlan1\n option prog \"/etc/librenms/wlFrequency.sh wlan1\"\nconfig extend\n option name rate-tx-wlan0-min\n option prog \"/etc/librenms/wlRate.sh wlan0 tx min\"\nconfig extend\n option name rate-tx-wlan0-avg\n option prog \"/etc/librenms/wlRate.sh wlan0 tx avg\"\nconfig extend\n option name rate-tx-wlan0-max\n option prog \"/etc/librenms/wlRate.sh wlan0 tx max\"\nconfig extend\n option name rate-tx-wlan1-min\n option prog \"/etc/librenms/wlRate.sh wlan1 tx min\"\nconfig extend\n option name rate-tx-wlan1-avg\n option prog \"/etc/librenms/wlRate.sh wlan1 tx avg\"\nconfig extend\n option name rate-tx-wlan1-max\n option prog \"/etc/librenms/wlRate.sh wlan1 tx max\"\nconfig extend\n option name rate-rx-wlan0-min\n option prog \"/etc/librenms/wlRate.sh wlan0 rx min\"\nconfig extend\n option name rate-rx-wlan0-avg\n option prog \"/etc/librenms/wlRate.sh wlan0 rx avg\"\nconfig extend\n option name rate-rx-wlan0-max\n option prog \"/etc/librenms/wlRate.sh wlan0 rx max\"\nconfig extend\n option name rate-rx-wlan1-min\n option prog \"/etc/librenms/wlRate.sh wlan1 rx min\"\nconfig extend\n option name rate-rx-wlan1-avg\n option prog \"/etc/librenms/wlRate.sh wlan1 rx avg\"\nconfig extend\n option name rate-rx-wlan1-max\n option prog \"/etc/librenms/wlRate.sh wlan1 rx max\"\nconfig extend\n 
option name noise-floor-wlan0\n option prog \"/etc/librenms/wlNoiseFloor.sh wlan0\"\nconfig extend\n option name noise-floor-wlan1\n option prog \"/etc/librenms/wlNoiseFloor.sh wlan1\"\nconfig extend\n option name snr-wlan0-min\n option prog \"/etc/librenms/wlSNR.sh wlan0 min\"\nconfig extend\n option name snr-wlan0-avg\n option prog \"/etc/librenms/wlSNR.sh wlan0 avg\"\nconfig extend\n option name snr-wlan0-max\n option prog \"/etc/librenms/wlSNR.sh wlan0 max\"\nconfig extend\n option name snr-wlan1-min\n option prog \"/etc/librenms/wlSNR.sh wlan1 min\"\nconfig extend\n option name snr-wlan1-avg\n option prog \"/etc/librenms/wlSNR.sh wlan1 avg\"\nconfig extend\n option name snr-wlan1-max\n option prog \"/etc/librenms/wlSNR.sh wlan1 max\"\n
NOTE, any of the scripts above can be tested simply by running the corresponding command.
NOTE, to check the output data from any of these extensions, on the LibreNMS machine, run (for example),
snmpwalk -v 2c -c public -Osqnv <openwrt-host> 'NET-SNMP-EXTEND-MIB::nsExtendOutputFull.\"frequency-wlan0\"'
NOTE, on the LibreNMS machine, ensure that snmp-mibs-downloader is installed.
NOTE, on the Openwrt machine, ensure that distro is installed (i.e. that the OS is correctly detected!).
3: Restart the snmp service on Openwrt:
service snmpd restart
And then wait for discovery and polling on LibreNMS!
This agent script allows LibreNMS to run a script on a Mikrotik / RouterOS device to gather VLAN information from both /interface/vlan/ and /interface/bridge/vlan/.
Go to https://github.com/librenms/librenms-agent/tree/master/snmp/Routeros
Copy and paste the contents of the LNMS_vlans.scr file into a script on each RouterOS device. Name this script LNMS_vlans. (This is NOT the same as creating a txt file and importing it into the Files section of the device.)
If you're unsure how to create the script: download the LNMS_vlans.scr file, rename it to remove the .scr extension, and copy the file onto all the Mikrotik devices you want to monitor.
Open a terminal / CLI on each Mikrotik device and run:
{ :global txtContent [/file get LNMS_vlans contents]; /system/script/add name=LNMS_vlans owner=admin policy=ftp,reboot,read,write,policy,test,password,sniff,sensitive,romon source=$txtContent ;}
This imports the contents of that file into a script named LNMS_vlans.
Enable an SNMP community that has both READ and WRITE capabilities. This is important; otherwise, LibreNMS will not be able to run the above script. It is recommended to use SNMP v3 for this.
Discover / force rediscover your Mikrotik devices. After discovery completes, the VLANs menu should appear within LibreNMS for the device.
"},{"location":"Support/Device-Notes/Routeros/#important-note","title":"*** IMPORTANT NOTE ***","text":"
It is strongly recommended that the SNMP service only be reachable from a very limited set of IP addresses that LibreNMS and related systems connect from (usually a /32 address for each), because the write permission could allow an attack on the device (such as dropping all firewall filters or changing the admin credentials).
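As a sketch of such a restriction on RouterOS (the community name lnms-rw and the poller address 192.0.2.10 are placeholders, not values from this guide):

```
# RouterOS CLI sketch: enable SNMP and allow the read-write community
# only from the LibreNMS poller's /32 address.
/snmp set enabled=yes
/snmp community add name=lnms-rw write-access=yes addresses=192.0.2.10/32
```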
"},{"location":"Support/Device-Notes/Routeros/#theory-of-operation","title":"Theory of operation:","text":"
The Mikrotik VLAN discovery plugin uses the ability of RouterOS to \"fire up\" a script through SNMP.
First, LibreNMS checks for the existence of the script; if it is present, it will start the LNMS_vlans script.
The script will gather information from:
- /interface/bridge/vlan for tagged ports inside the bridge
- /interface/bridge/vlan for currently untagged ports inside the bridge
- /interface/bridge/port for port PVIDs (untagged) inside the bridge
- /interface/vlan for vlan interfaces
After the information is gathered, it is transmitted to LibreNMS over SNMP.
The record format is: type,vlanId,ifName
For example, T,254,ether1 is translated to Tagged vlan 254 on port ether1,
and U,100,wlan2 is translated to Untagged vlan 100 on port wlan2.
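As a trivial, hypothetical illustration of this record format (not part of the shipped script), the lines can be decoded with a short awk filter:

```shell
# Decode type,vlanId,ifName records as emitted by the LNMS_vlans script.
# Sample input lines match the examples in the text above.
printf 'T,254,ether1\nU,100,wlan2\n' | awk -F, '{
    kind = ($1 == "T") ? "Tagged" : "Untagged"
    printf "%s vlan %s on port %s\n", kind, $2, $3
}'
# prints:
#   Tagged vlan 254 on port ether1
#   Untagged vlan 100 on port wlan2
```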