Installing the Prometheus server
Our first task is to get the Prometheus server up and running so that we can start serving real data. Prometheus is a powerful open source time-series database and monitoring system originally developed at SoundCloud. Following Kubernetes, it became the second project to graduate from the Cloud Native Computing Foundation's incubation process. Grafana has partnered with the Prometheus maintainers to include the Prometheus data source as a first-class data source plugin.
Installing Prometheus from Docker
We're going to start up Prometheus from Docker Compose and point it to a local configuration file. First, let's create the following configuration file and save it to our local ch4/prometheus directory as prometheus.yml:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
It is beyond the scope of this book to give fully detailed information on the Prometheus configuration file format; to find out more, go to https://prometheus.io/docs/prometheus/latest/configuration/configuration. This relatively simple configuration file is designed to do a couple of things:
- Establish a default scrape interval. This determines how often Prometheus will scrape, or pull, data from a metrics endpoint; in this case, every 15 seconds.
- Set up the configuration for a job called prometheus that will scrape itself every 5 seconds. The target server is located at localhost:9090. You can sanity-check the finished file with the promtool command shown after this list.
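Prometheus ships with a small utility, promtool, that can validate a configuration file before the server ever loads it. Here is a minimal sketch, assuming you run it from the ch4 directory and that the promtool binary is bundled in the prom/prometheus image (it is in current releases):

> docker run --rm \
    -v "${PWD}/prometheus:/etc/prometheus" \
    --entrypoint /bin/promtool \
    prom/prometheus check config /etc/prometheus/prometheus.yml

promtool prints a SUCCESS line if the file parses cleanly; any YAML or scrape configuration error is reported here, which is much easier to debug than a container that exits at startup.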
Next, create a docker-compose.yml file (this file can also be downloaded from this book's GitHub repository):
version: '3'
services:
  grafana:
    image: "grafana/grafana:${GRAF_TAG-latest}"
    ports:
      - "3000:3000"
    volumes:
      - "${PWD-.}/grafana:/var/lib/grafana"
  prometheus:
    image: "prom/prometheus:${PROM_TAG-latest}"
    ports:
      - "9090:9090"
    volumes:
      - "${PWD-.}/prometheus:/etc/prometheus"
The preceding Docker Compose file does the following:
- Starts up a Grafana container and exposes its default port, 3000.
- Maps the $PWD/grafana local directory to /var/lib/grafana in the grafana container so that Grafana's internal database survives container restarts.
- Starts up a Prometheus container and exposes its default port, 9090.
- Maps the $PWD/prometheus local directory to /etc/prometheus in the prometheus container. This is so that we can manage the Prometheus configuration file from outside the container. $PWD is a shell variable containing the current working directory. (A quick way to validate the Compose file is shown after this list.)
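Before starting the stack, Docker Compose can validate the file and print it with every ${...} variable resolved. A quick check, run from the directory containing docker-compose.yml:

> docker-compose config

Syntax and indentation mistakes are reported here rather than at startup, and the resolved output shows exactly which image tags and volume paths will be used.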
Start up both containers with the following command:
> docker-compose up -d
The docker-compose command starts both containers on their own network so that the Grafana and Prometheus containers can contact each other. If it succeeds, you should see output lines similar to the following:
Starting ch4_prometheus_1 ... done
Starting ch4_grafana_1 ... done
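You can also check that both containers stayed up:

> docker-compose ps

Both services should show a State of Up; if one shows Exit, inspect its output with docker-compose logs prometheus or docker-compose logs grafana.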
To confirm that Prometheus is running correctly, open a web browser and go to http://localhost:9090/targets. You should see a screen similar to the following screenshot:
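You can perform the same check from the command line; Prometheus exposes a simple health endpoint and a JSON API for target status. A quick sketch, assuming the default 9090 port mapping from our docker-compose.yml:

> curl http://localhost:9090/-/healthy
> curl -s http://localhost:9090/api/v1/targets

The first request returns a short healthy message once the server is ready; the second returns each scrape target and its health as JSON, the same information rendered on the /targets page.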
Now that we have the Grafana and Prometheus servers running, let's move on to creating a Prometheus data source.
Configuring the Prometheus data source
From our docker-compose.yml file, we know that the Prometheus server host will be localhost and the port is 9090 (and, from prometheus.yml, that our scrape interval is 5 seconds). So, let's configure a new Prometheus data source:
- From the left sidebar, go to Configuration | Data Sources.
- Add a new Prometheus data source and fill in the following information:
- Name: Prometheus
- URL: http://localhost:9090
- Access: Browser. With Browser access, it is your web browser, not the Grafana server, that contacts http://localhost:9090; this works because our Compose file publishes port 9090 on the host.
- Click on Save & Test.
If everything worked correctly, you should now have a new data source, as in the following screenshot:
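If you prefer to script this step, Grafana's HTTP API can create the same data source. A sketch, assuming Grafana's default admin:admin credentials (you are prompted to change these on first login); in the API, access: direct corresponds to the Browser access mode:

> curl -s -u admin:admin \
    -H "Content-Type: application/json" \
    -X POST http://localhost:3000/api/datasources \
    -d '{"name": "Prometheus", "type": "prometheus", "url": "http://localhost:9090", "access": "direct"}'

Grafana replies with a JSON confirmation that the data source was added.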
Now that we have a working data source, let's take a look at the data we're capturing in Prometheus.