Hello,
Can someone help me with the following problem? I have deployed a Prometheus + Node Exporter + cAdvisor + Grafana stack with Portainer to monitor a fresh OMV6 install and its running containers, following two YouTube tutorials:
The problem is that the graph that is supposed to show memory usage and memory cached for running containers displays 0 B. I have searched the forum and the internet, but I'm still stuck. In the Grafana web GUI I have imported dashboards 14282 (image below) and 1860.
I have added the port mapping 9100 to the node_exporter section, since it was not accessible through the browser at first. Here is my own cooked-up docker-compose file:
version: '3'

volumes:
  prometheus-data:
    driver: local
  grafana-data:
    driver: local

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - /etc/prometheus:/etc/prometheus
      - prometheus-data:/prometheus
    restart: unless-stopped
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"

  grafana:
    image: grafana/grafana-oss:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    restart: unless-stopped

  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    ports:
      - "9100:9100"
    command:
      - '--path.rootfs=/host'
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    devices:
      - /dev/kmsg
    restart: unless-stopped
Here is my /etc/prometheus/prometheus.yml file:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  # external_labels:
  #   monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.1.100:9090']

  # Job for node_exporter
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['192.168.1.100:9100']

  # Job for cadvisor
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['192.168.1.100:8080']
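In case it helps anyone narrow this down: the dashboard panels should, as far as I can tell, be built on cAdvisor's container metrics. To check whether Prometheus is actually receiving non-zero values (as opposed to the dashboard just not matching them), queries like these can be pasted into the Prometheus expression browser at 192.168.1.100:9090 (metric and label names as cAdvisor exports them for Docker, if I'm reading its docs right):

```
container_memory_usage_bytes{name="qbittorrent"}
container_memory_cache{name="qbittorrent"}
```

If these return 0 in Prometheus itself, the problem is on the cAdvisor/cgroup side rather than in Grafana.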
All containers are running and accessible through the browser. When I browse to Node Exporter at 192.168.1.100:9100, the following is listed (amongst many others):
# HELP node_memory_Cached_bytes Memory information field Cached_bytes.
# TYPE node_memory_Cached_bytes gauge
node_memory_Cached_bytes 1.372033024e+10
So all the memory data, including memory cached, is being gathered, right?
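As a quick sanity check on that figure: the value in the exposition format is plain bytes in scientific notation, copied from the metrics page above, so it can be converted by hand (nothing OMV-specific here):

```python
# node_exporter sample value, in bytes (scientific notation),
# taken from the metrics output quoted above
cached_bytes = float("1.372033024e+10")

# convert to GiB (1 GiB = 2**30 bytes)
cached_gib = cached_bytes / 2**30
print(f"{cached_gib:.2f} GiB")  # roughly 12.78 GiB of page cache
```

That is a plausible amount of page cache for the host, so the host-level data looks fine; it is only the per-container values that come out as 0 B.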
When I browse to cAdvisor at 192.168.1.100:8080 and look at the ''Docker Containers'' section, everything is listed, and when I select qbittorrent a new page appears with the following metrics:
* On a side note: those who want to try this but already have a container using port 8080 (like qBittorrent) should change that container's port to something else like 8081, since I was not able to change port 8080 in the above stack successfully; it resulted in cAdvisor not being accessible in the browser.
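(If I understand Docker port mappings correctly, only the host side of the mapping needs to change; cAdvisor keeps listening on 8080 inside its container. So something like the following should, in principle, expose it on 8081 instead; an untested sketch, since the remap did not work for me above:)

```yaml
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    ports:
      # host port 8081 -> container port 8080;
      # cAdvisor itself still listens on 8080 internally
      - "8081:8080"
```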
Thanks in advance.