2021-06-25 Building a Prometheus + Grafana monitoring and alerting system on Docker for Windows


The setup has two parts:

Part 1: installing and using Docker on Windows, and installing Go (not covered here)

Part 2: building the Prometheus + Grafana monitoring and alerting system

Note: the ports in my screenshots may not match the steps as written, because some ports on my machine were already taken. If the same happens to you, just change the port mappings.


Here are the concrete steps.
Part 1: installing and using Docker on Windows

My machine runs Windows 10.

The Docker for Windows install steps may change between releases, so follow the guide here: https://www.runoob.com/docker/windows-docker-install.html. Hyper-V and WSL 2 (Windows Subsystem for Linux) are required; without them Docker for Windows will not start. For installing WSL 2 (without it, disk sharing does not work), see https://docs.microsoft.com/zh-cn/windows/wsl/install-win10


Once installed, work from Windows PowerShell. Note that from Git Bash you cannot exec into some containers, prometheus among them.

Part 2: building the Prometheus + Grafana monitoring and alerting system

An overview of the seven steps first, then the details.

  1. Create a prometheus working directory with a subdirectory for each component's config files — in this post: conf/, client/, and alertmanager/.

     Under conf/, create prometheus.yml and rules.yml.

  2. Set up Prometheus (Docker image)

  3. Set up a client_golang test client and node_exporter (Docker image) for an initial Prometheus test

  4. Set up pushgateway (Docker image) and connect it to Prometheus

  5. Set up Grafana (Docker image) and connect it to Prometheus

  6. Write a backend service that receives and handles alerts

  7. Set up Alertmanager (Docker image) and connect it to Prometheus

Detailed steps for 2-7 follow:

  2. Run the following in PowerShell.

Pull the latest Prometheus image:
docker pull prom/prometheus

Start a Prometheus container.
Simple version:
docker run -p 9090:9090 prom/prometheus

Full version:
docker run --name=prometheus -d -p 9090:9090 -v /yourpathto/promethues/conf/prometheus.yml:/etc/prometheus/prometheus.yml -v /yourpathto/promethues/conf/rules.yml:/etc/prometheus/rules.yml prom/prometheus --config.file=/etc/prometheus/prometheus.yml --web.enable-lifecycle

If you mount prometheus.yml you must also pass --config.file, and --web.enable-lifecycle is best kept last or it may cause an error.

A few handy Docker debugging commands:

Check whether the container really started — docker ps should list a container named prometheus:
docker ps

If it is not there, check the logs:
docker logs prometheus

If the container name conflicts, remove the unused container:
docker stop xx
docker rm xx
or rename the current or old container:
docker rename oldname newname

Exec into the container to inspect or debug:
docker exec -it prometheus sh

Test it: open localhost:9090 in a browser (I moved mine to 9093; if you mapped 9090 above, use 9090).

You can switch to the classic UI to browse the available metrics.

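Even before adding any exporters, you can sanity-check the expression box with Prometheus's self-scraped metrics, for example (metric names may vary slightly by version):

```promql
up                                         # 1 per target while its scrape succeeds
rate(prometheus_http_requests_total[1m])   # Prometheus's own HTTP request rate
```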
  3. Clone the client_golang test examples from GitHub, start prom/node-exporter, and add both to the Prometheus config so Prometheus can read the metrics they expose:
----------------------- build and start client_golang -----------------------
Open Git Bash and switch to the client directory from step 1, then run:

git clone https://github.com/prometheus/client_golang.git
cd client_golang/examples/random
go get -d
go build # produces random.exe

## start three instances
./random -listen-address=:8080
./random -listen-address=:8081
./random -listen-address=:8082

Now open http://localhost:8080/metrics, http://localhost:8081/metrics, and http://localhost:8082/metrics in a browser to see all of the collected samples.
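There is nothing exotic behind /metrics: it serves plain text. client_golang generates it for you; purely to illustrate, here is a minimal stdlib-only sketch of the exposition format (the metric name and value are made up for the example):

```go
package main

import "fmt"

// metricsText renders one gauge in the Prometheus text exposition format.
// client_golang produces this for real collectors; this sketch only shows
// what Prometheus actually scrapes from a /metrics endpoint.
func metricsText(name, instance string, value float64) string {
	return fmt.Sprintf("# TYPE %s gauge\n%s{instance=%q} %g\n", name, name, instance, value)
}

func main() {
	// A real exporter would serve this over HTTP, e.g. with net/http:
	//   http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
	//       fmt.Fprint(w, metricsText("dead_lift", "test", 200))
	//   })
	//   http.ListenAndServe(":8083", nil)
	fmt.Print(metricsText("dead_lift", "test", 200))
}
```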
----------------- start node-exporter ---------------------------
docker run -d --name=node-exporter -p 9100:9100 prom/node-exporter

----------------- add both exporters to prometheus.yml -----------------
Below is the complete prometheus.yml for this project; at this stage the new parts are the client-golang and client-node target groups under scrape_configs. Follow the steps in order, and mind YAML syntax: indent with spaces only.
# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).


# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets: ['192.168.31.223:9098']
      # - alertmanager:9093


# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - /etc/prometheus/rules.yml
  # - "first_rules.yml"
  # - "second_rules.yml"


# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'


    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.


    static_configs:
    - targets: ['localhost:9090']
      labels:
        group: 'promself'
        
    - targets: ['192.168.31.223:8080','192.168.31.223:8081'] # Prometheus runs in a container here, so the targets cannot be localhost; use your LAN IP, with no http:// prefix
      labels:
        group: 'client-golang'
        
    - targets: ['192.168.31.223:9100']
      labels:
        group: 'client-node'
  
  - job_name: 'pushgateway'
    static_configs:
    - targets: ['192.168.31.223:9099']

With that done, open Prometheus again: among the metrics you should find rpc_durations_seconds, exposed by client_golang, along with many node_* metrics:

Test it: in the Graph tab, pick rpc_durations_seconds as the metric, hit Execute, then switch to Graph.
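rpc_durations_seconds in the random example is a summary, so you can also select individual series by label; at the time of writing the example exposes service values uniform/normal/exponential and the usual quantiles:

```promql
rpc_durations_seconds{quantile="0.99"}
rpc_durations_seconds{service="exponential"}
```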


  4. Set up pushgateway (Docker image) and connect it to Prometheus
docker run -d -p 9091:9091 --name pushgateway prom/pushgateway

Test it (I remapped the port to 9099):



Now we can push data to the pushgateway. Prometheus provides SDKs for many languages, but the simplest way is the shell.
This curl pipeline does not work in PowerShell (curl there is an alias for Invoke-WebRequest), so do this step in Git Bash.

# push a single metric
echo "cqh_metric 100" | curl --data-binary @- http://192.168.31.223:9099/metrics/job/cqh
# push several metrics; push a few extra values now for use later
$ cat <<EOF | curl --data-binary \@-  http://192.168.31.223:9099/metrics/job/cqh/instance/test
> bench_press 110
> dead_lift 200
> deep_squat 80
> EOF
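From code, the same push is just an HTTP POST of that text body to /metrics/job/<job>/instance/<instance> on the pushgateway. A hedged Go sketch, with the host:port assumed to match the 9099 mapping used in this post:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

// pushURL builds the pushgateway path for a job/instance pair.
func pushURL(host, job, instance string) string {
	return fmt.Sprintf("http://%s/metrics/job/%s/instance/%s", host, job, instance)
}

func main() {
	// Same metrics as the curl heredoc above.
	body := "bench_press 110\ndead_lift 200\ndeep_squat 80\n"
	url := pushURL("192.168.31.223:9099", "cqh", "test")
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Post(url, "text/plain", strings.NewReader(body))
	if err != nil {
		fmt.Println("push failed (is the pushgateway up?):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```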

Next, add the pushgateway to prometheus.yml and restart the Prometheus container.

The addition at this step is the pushgateway job at the end of scrape_configs; the rest of prometheus.yml is unchanged from the full listing in step 3:

  - job_name: 'pushgateway'
    static_configs:
    - targets: ['192.168.31.223:9099']

docker restart prometheus

Test it: the pushgateway now appears under Status -> Targets.



Back in Graph (menu bar) you can see the metrics you just pushed.

  5. Set up Grafana (Docker image) and connect it to Prometheus
docker run -d -p 3000:3000 --name grafana grafana/grafana

Test it: the default login is admin/admin.



Add Prometheus as a data source; note that the LAN IP is needed here as well.


Build your own panel:

  6. Write a backend alert-handling service, in preparation for step 7.

Use go-gin to write a simple API that receives alerts and handles them; for this test it just prints them out.

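The post implements this with go-gin (shown only as a screenshot). As a stand-in, here is a stdlib-only sketch of the same idea — the route /send-alert and port 8888 match the webhook URL configured in alertmanager.yml below, and the payload structs keep only the Alertmanager webhook fields we print:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// Alert and WebhookMessage mirror the JSON Alertmanager POSTs to a webhook
// receiver, trimmed to the fields used here.
type Alert struct {
	Status      string            `json:"status"`
	Labels      map[string]string `json:"labels"`
	Annotations map[string]string `json:"annotations"`
}

type WebhookMessage struct {
	Status string  `json:"status"`
	Alerts []Alert `json:"alerts"`
}

func formatAlert(a Alert) string {
	return fmt.Sprintf("[%s] %s: %s", a.Status, a.Labels["alertname"], a.Annotations["summary"])
}

// sendAlert decodes the payload and just prints each alert, like the test
// service in the post.
func sendAlert(w http.ResponseWriter, r *http.Request) {
	var msg WebhookMessage
	if err := json.NewDecoder(r.Body).Decode(&msg); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	for _, a := range msg.Alerts {
		fmt.Println(formatAlert(a))
	}
}

func main() {
	// The real service registers the handler and listens on :8888:
	//   http.HandleFunc("/send-alert", sendAlert)
	//   http.ListenAndServe(":8888", nil)
	// To keep this sketch self-contained, feed one sample payload through
	// the handler instead of binding a port.
	payload := `{"status":"firing","alerts":[{"status":"firing","labels":{"alertname":"cqhtest"},"annotations":{"summary":"warning! lightweight baby!"}}]}`
	req := httptest.NewRequest("POST", "/send-alert", strings.NewReader(payload))
	sendAlert(httptest.NewRecorder(), req)
}
```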
  7. Set up Alertmanager (Docker image) and connect it to Prometheus.

In the alertmanager directory from step 1, create an alertmanager.yml with the following content:

global:
  resolve_timeout: 5m


route:
  group_by: ['cqh']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1m
  receiver: 'web.hook'
receivers:
- name: 'web.hook'
  webhook_configs:
  - url: 'http://192.168.31.223:8888/send-alert' # the go-gin API from step 6
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']

Start Alertmanager:

docker run -d -p 9093:9093 --name alertmanager -v /yourpathto/promethues/alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml prom/alertmanager

Test it:



Write promethues/conf/rules.yml with the content below. Take care with {{ $labels.instance }} — it is easy to mistype, so copy it verbatim. The rule means: if dead_lift stays above 150 kg for one minute, fire an alert.

groups:
  - name: cqh
    rules:
    - alert: cqhtest
      expr: dead_lift > 150
      for: 1m
      labels:
        status: warning
      annotations:
        summary: "{{ $labels.instance }}:warning! lightweight baby!"
        description: "{{ $labels.instance }}:warning! lightweight baby!"

Then add the Alertmanager configuration to prometheus.yml:

The relevant part at this step is the alerting block plus the rule_files entry; the rest of prometheus.yml is unchanged from the full listing in step 3:

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets: ['192.168.31.223:9098']

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - /etc/prometheus/rules.yml

Restart Prometheus: docker restart prometheus
Watch the data change in Grafana (click a panel to zoom in):


Push a dead_lift value greater than 150 (through the pushgateway, as in step 4).

About a minute later, the backend starts receiving alerts continuously:


Back on the Prometheus page, switch to the new UI; the alert also shows up under Alerts:


Phase two: so far the containers reach each other by IP, but a LAN IP is dynamic — switch networks and every config file needs editing. The fix is to create a Docker network (bridge) and attach the existing containers to it. Containers on the same Docker network can resolve each other by container name, and you address the container's own port rather than the port exposed on the physical host.

PS C:\vicky\test> docker network create monitor-net
PS C:\vicky\test> docker network connect monitor-net prod-prometheus
PS C:\vicky\test> docker network connect monitor-net  prod-prometheus-node-exporter
PS C:\vicky\test> docker network connect monitor-net prod-prometheus-pushgateway

In prometheus.yml, change the IPs to container-name:container-port.
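For example, with the container names from the docker network connect commands above and the containers' internal ports, the node-exporter and pushgateway targets would look something like this (the client_golang processes run on the host, not in containers, so those targets still need a host-reachable address):

```yaml
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']
      labels:
        group: 'promself'
    - targets: ['prod-prometheus-node-exporter:9100']
      labels:
        group: 'client-node'
  - job_name: 'pushgateway'
    static_configs:
    - targets: ['prod-prometheus-pushgateway:9091']
```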


Done.
