Bringing data to the mortals again with Prometheus and its sidekick Telegraf - Fuga Cloud

There I am again, talking about server monitors. However, I personally think this one is a lot more exciting. Prometheus is a server monitoring tool that collects data about the hosts it monitors, allowing you to spot trends and interesting data passing by. And I don’t mean collecting data in a Facebook way, but in a way that’s actually beneficial to both customer and provider. It collects data about disk usage, CPU usage and all the stuff that interests us techies. Recently, we hired a mathematician at work. Being the wizard that he is, he dove head first into using data to predict future growth, trends and other neat stuff, using techniques I won’t pretend to fully understand. So, he set up a server running Prometheus to collect data about our hosts. To do his work properly, however, he needed to collect data over a longer term, as Prometheus only saves data for a short time span. Because I recently set up a couple of databases for other uses, I figured I’d help him out on this one. Turns out it’s not as simple as setting up a MySQL schema.


I’m doing this from my Ubuntu 17.10 laptop, and an Ubuntu 16.04 server. Your setup might differ, which is why I tried to use generic options such as curl where possible. I’ve also included the documentation links for anything you need to install, so you should be able to follow this guide even if you’re on an esoteric operating system. However, if you spot any glaring mistakes, or run into issues yourself, please feel free to mail me. I hope you enjoy reading this blog and find it useful, or at least entertaining.

“Big Data”

I’m sure we’re all tired of this buzzword, but it’s the best term I could think of when describing the amount of data Prometheus exports. Think about it: a couple thousand hosts, and a new entry every X seconds. So when I told ops I was going to catch all this data in an instance of Postgres, they just started snickering. Another problem I discovered when trying to link Postgres to Prometheus is the adapters Prometheus uses. A lot of these are small “hip and cool” projects on GitHub, with half a page of documentation at best. Sometimes these had an installation guide, in which commands could be found that did not make sense to me or my system. Another large Dutch company seemed to have the right idea: they just wrote their own adapter. But I did not have the luxury of time, so I went looking for a more “out of the box” solution.

My mathematically inclined friend sent me the solution over Slack, and in this blog, I’d love to share it with you.


Originally I wrote this blog post with the idea in mind that you already had an instance of Prometheus running with Grafana or your favorite interface. This was quite a strong assumption. And as developers, we all know the result of assumptions.

Start off by updating your package manager’s repositories. On Ubuntu I did that via

$ sudo apt update && sudo apt dist-upgrade -y

A neat idea I saw in another tutorial is to create users for Prometheus and node exporters, to isolate ownership. Shout out to Marko Mudrinić for the original blog post on how to configure Prometheus.

$ sudo useradd --no-create-home --shell /bin/false prometheus
$ sudo useradd --no-create-home --shell /bin/false node_exporter

Next up, we’re going to download Prometheus and all its exporters, so it does more than just collect data about itself. If you have a GUI around, just download it off their site here

If you don’t, we can download it from the project’s GitHub:

$ curl -LO

After that, let’s unpack it:

$ tar xvf prometheus-2.0.0.linux-amd64.tar.gz

I’d like to say I’ve been a good boy who’s been checking his sha256 sums with every download, but that wouldn’t necessarily be true.
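For those more diligent than me, verifying a download is quick. Here’s a sketch using a stand-in file; for the real thing, take the published hash from the Prometheus releases page and point it at the tarball you just downloaded:

```shell
# sha256sum -c reads lines of "<hash>  <filename>" and verifies each file.
# We demonstrate with a stand-in file, since the real published hash
# belongs next to the real tarball.
printf 'demo contents\n' > demo.tar.gz
sha256sum demo.tar.gz > sha256sums.txt   # record the hash
sha256sum -c sha256sums.txt              # prints "demo.tar.gz: OK"
```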

When your PC is done unpacking the tar, make some directories for the configuration files and data:

$ sudo mkdir /etc/prometheus
$ sudo mkdir /var/lib/prometheus
$ sudo chown prometheus:prometheus /etc/prometheus
$ sudo chown prometheus:prometheus /var/lib/prometheus

Time to create a config. This is where you can go ahead and create your own configuration, but here’s an example of what one could look like:

sudo vim /etc/prometheus/prometheus.yml

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']

Let’s turn it on:
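One caveat: the start command below assumes systemd already knows about a prometheus service. If you installed by hand like we did, you’ll need a unit file first. A minimal sketch, assuming you copied the unpacked prometheus binary to /usr/local/bin and are using the directories we created above:

```ini
# /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/

[Install]
WantedBy=multi-user.target
```

After saving it, run sudo systemctl daemon-reload so systemd picks it up.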

sudo systemctl start prometheus

So, now we’ve got Prometheus gathering data, but we want to view that data. That’s where Grafana comes in.

Find the latest version at:

Or use

sudo apt-get install -y adduser libfontconfig
sudo dpkg -i grafana_4.6.2_amd64.deb

My fellow EU citizens can see a problem here. “s3-us-west”? Unfortunately, I haven’t found a mirror closer to us, so during the download I went off and got myself some coffee.

After you’re done waiting, let’s turn on Grafana:

sudo service grafana-server start

You should now be able to reach your Grafana instance on port 3000.

Setting up InfluxDB

The first thing that needed to happen was naturally installing the database itself. As I recently ran into a lot of trouble due to outdated packages on my Ubuntu servers, you’ll see me build packages by hand a lot. This doesn’t mean you can’t use apt-get like you’re used to; I just went a bit overboard with the building by hand. Anyhow, first things first.

$ curl -sL | sudo apt-key add -
$ source /etc/lsb-release
$ echo "deb${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

$ sudo apt-get update && sudo apt-get install influxdb

I always like to check whether the installation was successful after running these commands. Do so by using:

$ influxd

If everything looks like it should, run the following command to start InfluxDB in the background:

$ sudo systemctl start influxdb

Since this has been Ubuntu and systemd specific, I’ll drop a link to InfluxDB’s documentation.

Golang and the path to success

Most of Prometheus is written in Go, which means that most of its adapters and other useful tools are also written in Go. Therefore, a Go compiler is useful to have on hand. Due to Debian’s conservative approach to its packages, the packages in its repos are kinda outdated. Running “install golang” got me version 1.6, where 1.9.2 is the most recent stable version. So, we build that by hand.

$ wget
$ sudo tar -xvf go1.9.2.linux-amd64.tar.gz
$ sudo mv go /usr/local

Next up, Go requires you to set some paths. So, we set those as well. Because I’m lazy in server-side operations, I’ve set the GOPATH (where your projects are supposed to be) to ~/go. Feel absolutely free to change this though.

$ export GOROOT=/usr/local/go
$ export GOPATH=$HOME/go
$ export PATH=$PATH:$GOROOT/bin
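Note that plain exports only last for the current session. Assuming a bash-style shell, you can make them stick by appending them to your profile:

```shell
# Append the Go environment to ~/.profile so it survives a re-login.
# (Assumes a bash-style shell; adjust the file name for yours.)
cat >> ~/.profile <<'EOF'
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin
EOF
```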

Just to make sure everything is working correctly, run

$ go version

to verify that you actually got 1.9.2.

Taking in data with Telegraf

Telegraf is what allows our Prometheus instance to talk with our DB. To build it from source, follow these steps:

$ go get -d
$ cd $GOPATH/src/
$ make

This gives you an executable named telegraf. We’re gonna build a config by doing:

$ ./telegraf config > telegraf.conf

Feel free to edit this config to your heart’s content, changing it for your specific needs. Telegraf is very customisable, with a ton of cool plugins, like sysstat, which collects detailed data about the host’s OS, CPU and whatnot. However, I could not possibly fit all this amazing content into this blog, so I’d recommend the project’s GitHub, where you can have a look at all the great plugins.

For our example, we have to uncomment the “inputs.prometheus” lines and the “outputs.influxdb” lines. This is also where you configure the settings for each of these input and output plugins.


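As a sketch, assuming Prometheus and InfluxDB run on the same host with their default ports (the database name is my own choice here; Telegraf will create it if it has permission), the uncommented sections end up looking something like this:

```toml
# Write collected metrics into a local InfluxDB (default port 8086).
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"

# Scrape the Prometheus server's own /metrics endpoint (default port 9090).
[[inputs.prometheus]]
  urls = ["http://localhost:9090/metrics"]
```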

Playing with data.

So, why did we go through all the trouble of setting this up? By setting up Prometheus, we could start gathering data about our hosts. This allows us to see short-term behavior and weirdness going on, which might need fixing. Grafana consumes this data and allows us to visualise it with graphs. Then, we set up InfluxDB to save this data for the long term. This allows us to spot long-term trends in the usage of our systems and, if you’re awesome at math like my coworker, make predictions about the future. We want to use his predictions to aid in the planning of future inventory, for example when we need to add more servers. This is just one example of the many things that aggregating data about our hosting will allow us to do. I hope you enjoyed reading this blog, and find a use for Grafana, InfluxDB or Prometheus.
