Visual dashboards let us observe a system's state, but nobody can watch them around the clock. We therefore need tooling that monitors the system continuously and notifies us when something goes wrong, keeping the system stable. Alerting is thus a critical part of metrics-based monitoring.
In Prometheus, alerting is handled by Alertmanager, which supports multiple notification channels so that alert messages reach the right people when problems occur.
Prometheus Server evaluates alerting rules and sends the resulting alerts to Alertmanager. Alertmanager is a standalone component that manages these alerts: it deduplicates, groups, and routes them to the correct receiver, supports silencing and inhibition, and delivers notifications via email, on-call systems, chat platforms, and so on.
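The deduplication and grouping steps can be sketched in a few lines of Python. This is a simplified illustration, not Alertmanager's actual implementation; the `group_by` argument mirrors the `group_by` route option shown in the configuration below.

```python
from collections import defaultdict

def group_alerts(alerts, group_by):
    """Group alerts by the values of the group_by labels, dropping alerts
    whose full label set is already present (a simplified model of
    Alertmanager's dedup + grouping, for illustration only)."""
    groups = defaultdict(list)
    for alert in alerts:
        key = tuple(alert["labels"].get(label, "") for label in group_by)
        if alert["labels"] not in [a["labels"] for a in groups[key]]:
            groups[key].append(alert)
    return dict(groups)

alerts = [
    {"labels": {"alertname": "HostCPUUsage", "instance": "node1"}},
    {"labels": {"alertname": "HostCPUUsage", "instance": "node1"}},  # duplicate
    {"labels": {"alertname": "HostCPUUsage", "instance": "node2"}},
]
groups = group_alerts(alerts, group_by=["alertname"])
# All three alerts share one alertname, so they land in one group;
# the duplicate is dropped, leaving two alerts in that group.
```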
```shell
cd /opt
mkdir alertmanager
cd alertmanager
touch alertmanager.yml
```
The configuration file has four main sections: `global` (shared defaults, here the SMTP settings and `resolve_timeout`), `templates` (paths to notification template files), `route` (how alerts are grouped and which receiver they go to), and `receivers` (the notification channels). The full file:
```yaml
global:
  resolve_timeout: 2m
  smtp_smarthost: 'smtp.qq.com:465'
  smtp_from: your@qq.com
  smtp_auth_username: your@qq.com
  smtp_auth_password: 'authorization-code'   # SMTP authorization code
templates:
  - /etc/alertmanager/template/*.tmpl
route:
  group_by:
    - alertname_wechat
  group_wait: 10s
  group_interval: 10s
  receiver: wechat
  repeat_interval: 1h
receivers:
  - name: wechat
    email_configs:
      - to: otheremail@outlook.com
        send_resolved: true
    wechat_configs:
      - corp_id: wechat_corp_id
        to_party: wechat_to_party
        agent_id: wechat_agent_id
        api_secret: wechat_apisecret
        send_resolved: true
```
This configuration pushes notifications to both email and WeChat Work (企业微信); Alertmanager ships with built-in WeChat Work support, documented at:
https://prometheus.io/docs/alerting/latest/configuration/#wechat_config
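Three timing knobs in the `route` block control when notifications go out: `group_wait` delays the first notification for a new group, `group_interval` paces notifications when an existing group changes, and `repeat_interval` re-sends unchanged groups. A rough, simplified model of the schedule for an unchanged group:

```python
def notification_times(first_alert_at, horizon, group_wait=10, repeat_interval=3600):
    """Times (in seconds) at which an unchanged alert group is notified,
    under a simplified model: first after group_wait, then every
    repeat_interval. (group_interval additionally paces notifications
    when the group's contents change; not modeled here.)"""
    times = [first_alert_at + group_wait]
    while times[-1] + repeat_interval <= horizon:
        times.append(times[-1] + repeat_interval)
    return times

# With the defaults above (group_wait: 10s, repeat_interval: 1h),
# an alert raised at t=0 and watched for 7500s is notified three times.
times = notification_times(first_alert_at=0, horizon=7500)
```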
```shell
cd /opt/alertmanager
mkdir template
cd template
touch wechat.tmpl
```
Edit the file to define the message template. Note that Go's time `Format` method takes the reference time `2006-01-02 15:04:05` as its layout string, not an arbitrary date:
```
{{ define "wechat.default.message" }}
{{ range $i, $alert := .Alerts }}
========Alert==========
Status: {{ .Status }}
Severity: {{ $alert.Labels.severity }}
Alert: {{ $alert.Labels.alertname }}
Application: {{ $alert.Annotations.summary }}
Host: {{ $alert.Labels.instance }}
Details: {{ $alert.Annotations.description }}
Threshold: {{ $alert.Annotations.value }}
Time: {{ $alert.StartsAt.Format "2006-01-02 15:04:05" }}
========end=============
{{ end }}
{{ end }}
```
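To preview what the rendered message will look like without a running Alertmanager, the template logic can be mimicked in plain Python. This is a hypothetical helper for local experimentation, not part of any Prometheus tooling:

```python
def render_wechat_message(status, alerts):
    """Mimic the wechat.tmpl template above for a quick local preview
    (the StartsAt/time line is omitted for brevity)."""
    lines = []
    for alert in alerts:
        lines += [
            "========Alert==========",
            f"Status: {status}",
            f"Severity: {alert['labels']['severity']}",
            f"Alert: {alert['labels']['alertname']}",
            f"Application: {alert['annotations']['summary']}",
            f"Host: {alert['labels']['instance']}",
            f"Details: {alert['annotations']['description']}",
            f"Threshold: {alert['annotations']['value']}",
            "========end=============",
        ]
    return "\n".join(lines)

preview = render_wechat_message("firing", [{
    "labels": {"severity": "Warning", "alertname": "HostCPUUsage", "instance": "node1"},
    "annotations": {"summary": "node", "description": "Check node1", "value": "0.9"},
}])
print(preview)
```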
```shell
docker run -d -p 9093:9093 --name StarCityAlertmanager \
  -v /opt/alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml \
  -v /opt/alertmanager/template:/etc/alertmanager/template \
  docker.io/prom/alertmanager:latest
```
Once the container is running, the Alertmanager web UI is available at http://host:9093.
You can simulate alerts by calling the Alertmanager API directly, which triggers alert notifications without involving Prometheus.
```shell
curl --location 'http://Host:9093/api/v2/alerts' \
  --header 'Content-Type: application/json' \
  --data '[
    {
      "labels": {
        "severity": "Warning",
        "alertname": "HighMemoryUsage",
        "instance": "instance1",
        "msgtype": "testing"
      },
      "annotations": {
        "summary": "node",
        "description": "Check instance 1",
        "value": "0.95"
      }
    },
    {
      "labels": {
        "severity": "Warning",
        "alertname": "HighCPUUsage",
        "instance": "instance2",
        "msgtype": "testing"
      },
      "annotations": {
        "summary": "node",
        "description": "Check instance 2",
        "value": "0.90"
      }
    }
  ]'
```
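The same request can be issued from Python's standard library. The payload builder below is a small hypothetical helper; the `send_alerts` call at the bottom is left commented out because it assumes an Alertmanager reachable at the given host:

```python
import json
from urllib import request

def build_alerts(specs):
    """Build the JSON body for POST /api/v2/alerts from (labels, annotations) pairs."""
    return json.dumps([{"labels": labels, "annotations": annotations}
                       for labels, annotations in specs])

def send_alerts(url, body):
    """POST the alerts to Alertmanager's v2 API (needs a reachable server)."""
    req = request.Request(url, data=body.encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req).status

body = build_alerts([
    ({"severity": "Warning", "alertname": "HighMemoryUsage",
      "instance": "instance1", "msgtype": "testing"},
     {"summary": "node", "description": "Check instance 1", "value": "0.95"}),
    ({"severity": "Warning", "alertname": "HighCPUUsage",
      "instance": "instance2", "msgtype": "testing"},
     {"summary": "node", "description": "Check instance 2", "value": "0.90"}),
])
# send_alerts("http://Host:9093/api/v2/alerts", body)  # replace Host with your server
```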
After sending, the alert notifications arrive in WeChat Work and by email.
Note: if everything is configured but WeChat Work never receives the test messages, you need to configure the trusted IP list ("可信IP", at the bottom of the self-built app's settings page in the WeChat Work admin console).
```shell
cd /opt/prometheus
touch rules.yml
```
Alert rule contents:
```yaml
groups:
  - name: example
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 10s
        labels:
          name: instance
          severity: Critical
        annotations:
          summary: '{{ $labels.appname }}'
          description: 'The service stops running.'
          value: '{{ $value }}%'
  - name: Host
    rules:
      - alert: HostMemoryUsage
        expr: >-
          (node_memory_MemTotal_bytes - (node_memory_MemFree_bytes +
          node_memory_Buffers_bytes + node_memory_Cached_bytes))
          / node_memory_MemTotal_bytes * 100 > 80
        for: 10s
        labels:
          name: Memory
          severity: Warning
        annotations:
          summary: '{{ $labels.appname }}'
          description: 'The instance memory usage exceeded 80%.'
          value: '{{ $value }}'
      - alert: HostCPUUsage
        expr: >-
          sum(avg without (cpu) (irate(node_cpu_seconds_total{mode!='idle'}[5m])))
          by (instance, appname) > 0.65
        for: 10s
        labels:
          name: CPU
          severity: Warning
        annotations:
          summary: '{{ $labels.appname }}'
          description: 'The CPU usage of the instance exceeded 65%.'
          value: '{{ $value }}'
      - alert: HostLoad
        expr: node_load5 > 4
        for: 10s
        labels:
          name: Load
          severity: Warning
        annotations:
          summary: '{{ $labels.appname }}'
          description: 'The 5-minute load of the instance exceeds the threshold.'
          value: '{{ $value }}'
      - alert: HostFilesystemUsage
        expr: 1 - (node_filesystem_free_bytes / node_filesystem_size_bytes) > 0.8
        for: 10s
        labels:
          name: Disk
          severity: Warning
        annotations:
          summary: '{{ $labels.appname }}'
          description: 'The instance partition [{{ $labels.mountpoint }}] is more than 80% used.'
          value: '{{ $value }}%'
      - alert: HostDiskio
        expr: irate(node_disk_writes_completed_total{job=~"Host"}[1m]) > 10
        for: 10s
        labels:
          name: Diskio
          severity: Warning
        annotations:
          summary: '{{ $labels.appname }}'
          description: 'The instance disk [{{ $labels.device }}] has a high average write IO load over 1 minute.'
          value: '{{ $value }}iops'
      - alert: Network_receive
        expr: >-
          irate(node_network_receive_bytes_total{device!~"lo|bond[0-9]|cbr[0-9]|veth.*|virbr.*|ovs-system"}[5m])
          / 1048576 > 3
        for: 10s
        labels:
          name: Network_receive
          severity: Warning
        annotations:
          summary: '{{ $labels.appname }}'
          description: 'The NIC [{{ $labels.device }}] average receive traffic exceeds 3 MB/s over 5 minutes.'
          value: '{{ $value }}MB/s'
      - alert: Network_transmit
        expr: >-
          irate(node_network_transmit_bytes_total{device!~"lo|bond[0-9]|cbr[0-9]|veth.*|virbr.*|ovs-system"}[5m])
          / 1048576 > 3
        for: 10s
        labels:
          name: Network_transmit
          severity: Warning
        annotations:
          summary: '{{ $labels.appname }}'
          description: 'The NIC [{{ $labels.device }}] average transmit traffic exceeds 3 MB/s over 5 minutes.'
          value: '{{ $value }}MB/s'
  - name: Container
    rules:
      - alert: ContainerCPUUsage
        expr: >-
          (sum by (name, instance)
          (rate(container_cpu_usage_seconds_total{image!=""}[5m])) * 100) > 60
        for: 10s
        labels:
          name: CPU
          severity: Warning
        annotations:
          summary: '{{ $labels.name }}'
          description: 'Container CPU usage over 60%.'
          value: '{{ $value }}%'
      - alert: ContainerMemUsage
        expr: container_memory_usage_bytes{name=~".+"} / 1048576 > 1024
        for: 10s
        labels:
          name: Memory
          severity: Warning
        annotations:
          summary: '{{ $labels.name }}'
          description: 'Container memory usage exceeds 1GB.'
          value: '{{ $value }}G'
```
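As a sanity check on the threshold arithmetic, the HostMemoryUsage expression can be mirrored in Python with made-up sample values (illustrative only; real values come from node_exporter):

```python
def memory_usage_percent(total, free, buffers, cached):
    """Mirror of the HostMemoryUsage PromQL expression:
    (MemTotal - (MemFree + Buffers + Cached)) / MemTotal * 100"""
    return (total - (free + buffers + cached)) / total * 100

# Hypothetical byte counts for one instance.
usage = memory_usage_percent(total=16_000, free=1_000, buffers=500, cached=1_500)
firing = usage > 80  # same threshold as the rule: 81.25% would fire
```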
Modify prometheus.yml to add the alerting and rule_files sections shown below, then restart Prometheus.
```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['Host:9093']
rule_files:
  - "rules.yml"
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - 'Prometheus Server Host:9090'
        labels:
          appname: Prometheus
  - job_name: node
    scrape_interval: 10s
    static_configs:
      - targets:
          - 'Metrics Host:9100'
        labels:
          appname: node
  - job_name: cadvisor
    static_configs:
      - targets:
          - 'Metrics Host:58080'
  - job_name: rabbitmq
    scrape_interval: 10s
    static_configs:
      - targets:
          - 'Metrics Host:9419'
        labels:
          appname: rabbitmq
```
Open the Prometheus web UI again and the alert rules appear. Each rule is in one of three states:
State | Description |
---|---|
Inactive | Not triggered; the metric is within the expected range. |
Pending | The alert condition is met, but for less than the configured duration (the `for` field in the rule). No notification is sent in this state. |
Firing | The alert condition is met and has persisted beyond the `for` duration. Alerts are sent to Alertmanager in this state. |
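The three states can be modeled by comparing how long a rule's condition has held against its `for` field; a simplified sketch:

```python
def alert_state(condition_true, true_for_seconds, for_seconds=10):
    """Classify a rule into the three states from the table above,
    based on how long its condition has held versus the `for` field."""
    if not condition_true:
        return "Inactive"
    if true_for_seconds < for_seconds:
        return "Pending"  # condition met, but not yet for the full duration
    return "Firing"       # condition held past `for`: sent to Alertmanager

# Condition false; true for 5s of a 10s `for`; true for 30s.
states = [alert_state(False, 0), alert_state(True, 5), alert_state(True, 30)]
```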
When the system meets a predefined alert condition for longer than the configured duration, the alert fires and is pushed to Alertmanager.
Here, the system's CPU usage was driven above the configured limit, and the CPU usage rule can be seen entering the Pending state in Prometheus.
Once the condition persists beyond the configured duration, the state changes to Firing and the alert is pushed to Alertmanager.
The pushed alerts are visible in the Alertmanager web UI.
The alert notifications also arrive in WeChat Work and the email inbox.
2023-02-23: hoping, once my skills have matured, to come back and see my own footsteps.