Kubernetes Monitoring with kube-prometheus: A Deployment Guide


There are many component combinations for monitoring a Kubernetes cluster. Before the Prometheus Operator appeared, Weave Scope and Heapster + cAdvisor were popular choices, but since Kubernetes 1.12 the usual pick has been prometheus-operator + Grafana. The Prometheus Operator by itself, however, no longer covers the full feature set; the complete solution is now kube-prometheus, which is the subject of this article.

Production environment information:

[root@worker01 manifests]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
[root@worker01 ~]# kubectl version --short
Client Version: v1.18.5
Server Version: v1.18.3
[root@worker01 ~]# docker -v
Docker version 19.03.12, build 48a66213fe

1. Configure node labels

[root@worker01 ~]# kubectl label node worker04 role=monitor-cicd
[root@worker01 ~]# kubectl label node worker05 role=monitor-cicd
[root@worker01 ~]# kubectl label node worker06 role=monitor-cicd

2. Configure taints

[root@worker01 ~]# kubectl taint node worker04 node=monitor:PreferNoSchedule
[root@worker01 ~]# kubectl taint node worker05 node=monitor:PreferNoSchedule
[root@worker01 ~]# kubectl taint node worker06 node=monitor:PreferNoSchedule

3. Check the labels and taints
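To verify the result of the two steps above (using the same node names and taint key):

[root@worker01 ~]# kubectl get nodes -l role=monitor-cicd --show-labels
[root@worker01 ~]# kubectl describe node worker04 | grep Taints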

I. Introduction to kube-prometheus

The kube-prometheus project lives at: https://github.com/coreos/kube-prometheus

This repository contains Kubernetes manifests, Grafana dashboards, and Prometheus rules, along with easy-to-use installation scripts.

Its components include:

- The Prometheus Operator
- Highly available Prometheus
- Highly available Alertmanager
- Prometheus node-exporter
- Prometheus Adapter for Kubernetes Metrics APIs
- kube-state-metrics
- Grafana

In the architecture diagram published by the Prometheus Operator project, the Operator is the core component: acting as a controller, it creates four CRD resource objects, Prometheus, ServiceMonitor, Alertmanager, and PrometheusRule, and then continuously watches and maintains the state of those four resources.

The Prometheus resource object represents the Prometheus server itself, while a ServiceMonitor is an abstraction over exporters; an exporter is a tool dedicated to exposing a metrics endpoint, and Prometheus pulls data through the metrics endpoints that ServiceMonitors point to. Likewise, the Alertmanager resource object corresponds to Alertmanager, and a PrometheusRule holds the alerting rule files consumed by Prometheus instances.

This way, deciding what to monitor in the cluster becomes a matter of manipulating Kubernetes resource objects directly. Both Service and ServiceMonitor are Kubernetes resources: a ServiceMonitor matches a class of Services via a labelSelector, and Prometheus in turn matches multiple ServiceMonitors via a labelSelector. For example:
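As a minimal illustration of this label matching (the names below are hypothetical and are not part of the kube-prometheus manifests), a Service exposing a metrics port is selected by a ServiceMonitor through its labelSelector:

apiVersion: v1
kind: Service
metadata:
  name: my-app               # hypothetical application Service
  namespace: default
  labels:
    app: my-app              # the label the ServiceMonitor selects on
spec:
  ports:
  - name: metrics            # named port that serves /metrics
    port: 8080
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: my-app            # match Services carrying this label
  namespaceSelector:
    matchNames:
    - default
  endpoints:
  - port: metrics            # scrape the Service port named "metrics"
    interval: 30s

The Prometheus resource deployed later in section 2.4 uses empty selectors ({}), so it picks up ServiceMonitors from all namespaces.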

II. Deploying kube-prometheus

The monitoring namespace is used by default.

1. Getting the source

We install from source; first fetch it locally:

[root@worker01 prom]# wget https://github.com/coreos/kube-prometheus/archive/master.zip

[root@worker01 prom]# unzip kube-prometheus-master.zip

2. Image sources, node selector, taint tolerations, and data persistence

Because of domestic network conditions or restricted internal networks, pulling from the original image registries is extremely slow, so we change the image addresses and adjust the image versions where appropriate, and also add nodeSelector and tolerations settings.

2.1 prometheus-operator adjustments

[root@worker01 prom]# cd kube-prometheus-master/manifests/setup/
[root@worker01 setup]# vim prometheus-operator-deployment.yaml
...
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --logtostderr=true
        - --config-reloader-image=jimmidyson/configmap-reload:v0.3.0
        - --prometheus-config-reloader=kubesphere/prometheus-config-reloader:v0.38.3
        image: bitnami/prometheus-operator:0.40.0
...
        image: kubesphere/kube-rbac-proxy:v0.4.1
        name: kube-rbac-proxy
...

[root@worker01 setup]# kubectl apply -f ./
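Before moving on to the main manifests, it helps to confirm that the CRDs are registered and that the Operator Pod is running (the exact CRD set depends on the kube-prometheus version):

[root@worker01 setup]# kubectl get crd | grep monitoring.coreos.com
[root@worker01 setup]# kubectl -n monitoring get pods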

2.2 Grafana adjustments

Note: data persistence requires a StorageClass to be created in advance (the open-source rook-ceph or NFS is recommended; not covered in this article).

For Grafana data persistence, create the PVC grafana-data:

[root@worker01 manifests]# vim grafana-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data
  namespace: monitoring
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: xsky-rbd
  volumeMode: Filesystem

Change the emptyDir volume to the PVC, and add a node selector, toleration, and security context:

[root@worker01 manifests]# vim grafana-deployment.yaml
...
        image: grafana/grafana:6.7.4
...
      nodeSelector:
        role: monitor-cicd
      tolerations:               # toleration policy
      - key: "node"              # key used when tainting the nodes
        operator: "Equal"        # operator
        value: "monitor"         # value to tolerate
        effect: PreferNoSchedule # toleration effect
...
      volumes:
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana-data
...
      securityContext:
        runAsGroup: 472
        runAsUser: 472
        fsGroup: 472

One point needs special attention here: if the following securityContext fields are not added,

      securityContext:
        runAsGroup: 472
        runAsUser: 472
        fsGroup: 472

Grafana will report errors at startup because its data directory is not writable.

This error only appears in Grafana 5.1 and later. It could be avoided by sticking with an earlier version, but actually fixing the problem is the point here.

The error in the log is clearly a permissions problem on the /var/lib/grafana directory, caused by the group ID change introduced in version 5.1. Once /var/lib/grafana is mounted from the PVC, the directory is no longer owned by the grafana user (472), so the securityContext above is added to put the mounted directory under the correct user and group.
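Once the whole stack has been deployed (section 2.8), the effect can be verified by checking the ownership of the mounted directory inside the Grafana Pod. This is just a sanity check and assumes the Deployment keeps the default app=grafana label:

[root@worker01 manifests]# kubectl -n monitoring get pods -l app=grafana
[root@worker01 manifests]# kubectl -n monitoring exec -it $(kubectl -n monitoring get pods -l app=grafana -o name | head -n1) -- ls -ld /var/lib/grafana
# the directory should now be group-owned by GID 472, making it writable for Grafana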

2.3 Alertmanager adjustments

Adjust the image, persistent storage, node selector, and toleration:

[root@worker01 manifests]# more alertmanager-alertmanager.yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  labels:
    alertmanager: main
  name: main
  namespace: monitoring
spec:
  image: prom/alertmanager:v0.21.0
  nodeSelector:
    role: monitor-cicd
  tolerations:               # toleration policy
  - key: "node"              # key used when tainting the nodes
    operator: "Equal"        # operator
    value: "monitor"         # value to tolerate
    effect: PreferNoSchedule # toleration effect
  replicas: 3
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: alertmanager-main
  version: v0.21.0
  storage:
    volumeClaimTemplate:
      metadata:
        name: alertmanager-main-db
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: xsky-rbd
        resources:
          requests:
            storage: 10Gi

2.4 Prometheus persistence, node selector, toleration, and image adjustments

Because Prometheus persistence in kube-prometheus is controlled by the CRD, we modify the prometheus-prometheus.yaml file (editing the StatefulSet after deployment will not take effect):

[root@worker01 manifests]# vim prometheus-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  image: prom/prometheus:v2.19.2
  nodeSelector:
    role: monitor-cicd
  tolerations:               # toleration policy
  - key: "node"              # key used when tainting the nodes
    operator: "Equal"        # operator
    value: "monitor"         # value to tolerate
    effect: PreferNoSchedule # toleration effect
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  replicas: 2
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.19.2
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: xsky-rbd
        resources:
          requests:
            storage: 20Gi

Note: since Prometheus is a stateful application, a volumeClaimTemplate has to be used. Once the resource is created, two PVCs, prometheus-k8s-db-prometheus-k8s-0 and prometheus-k8s-db-prometheus-k8s-1, are generated automatically.

2.5 kube-state-metrics adjustments

[root@worker01 manifests]# vim kube-state-metrics-deployment.yaml
...
        image: bitnami/kube-state-metrics:1.9.5
...
        image: kubesphere/kube-rbac-proxy:v0.4.1
...
        image: kubesphere/kube-rbac-proxy:v0.4.1
...
      nodeSelector:
        role: monitor-cicd
      tolerations:               # toleration policy
      - key: "node"              # key used when tainting the nodes
        operator: "Equal"        # operator
        value: "monitor"         # value to tolerate
        effect: PreferNoSchedule # toleration effect
...

2.6 node-exporter adjustments

[root@worker01 manifests]# vim node-exporter-daemonset.yaml
...
        image: prom/node-exporter:v0.18.1
...
        image: kubesphere/kube-rbac-proxy:v0.4.1
...

2.7 prometheus-adapter adjustments

[root@worker01 manifests]# vim prometheus-adapter-deployment.yaml
...
        image: directxman12/k8s-prometheus-adapter:v0.7.0
...
      nodeSelector:
        role: monitor-cicd
      tolerations:               # toleration policy
      - key: "node"              # key used when tainting the nodes
        operator: "Equal"        # operator
        value: "monitor"         # value to tolerate
        effect: PreferNoSchedule # toleration effect
...

2.8 Deploy the services

[root@worker01 manifests]# kubectl apply -f .
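After applying, watch the components come up and confirm that the PVCs from the earlier steps (Grafana, Alertmanager, and Prometheus) are bound; the exact output will vary with your cluster:

[root@worker01 manifests]# kubectl -n monitoring get pods -o wide
[root@worker01 manifests]# kubectl -n monitoring get pvc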

2.9 Adjust the Prometheus data retention period

Change it to 30 days; the default keeps only 24h of data, which does not meet our requirements.

[root@worker01 manifests]# kubectl edit statefulsets.apps prometheus-k8s -n monitoring
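Keep in mind that this StatefulSet is managed by the Operator, so a more durable approach (a sketch, assuming the operator version in use supports the field) is to set the retention in the Prometheus custom resource and re-apply it:

# prometheus-prometheus.yaml (excerpt)
spec:
  retention: 30d    # keep metrics for 30 days instead of the 24h default

[root@worker01 manifests]# kubectl apply -f prometheus-prometheus.yaml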

III. Exposing services with Ingress

Make sure internal DNS resolves these names, or use /etc/hosts entries:

172.10.175.14   alert.k8s.domain

172.10.175.15   prom.k8s.domain

172.10.175.15   grafana.k8s.domain

3.1 Exposing Alertmanager

[root@worker01 manifests]# vim alert-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alertmanager-ingress
  namespace: monitoring
spec:
  rules:
  - host: alert.k8s.domain
    http:
      paths:
      - backend:
          serviceName: alertmanager-main
          servicePort: 9093
[root@worker01 manifests]# kubectl apply -f alert-ingress.yaml

Open alert.k8s.domain in a browser.

3.2 Exposing Grafana

[root@worker01 manifests]# vim grafana-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
spec:
  rules:
  - host: grafana.k8s.domain
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
[root@worker01 manifests]# kubectl apply -f grafana-ingress.yaml

Open grafana.k8s.domain in a browser; the default username and password are admin/admin.

A number of dashboards are bundled by default.

3.3 Exposing Prometheus

[root@worker01 ~]# vim prometheus-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
spec:
  rules:
  - host: prom.k8s.domain
    http:
      paths:
      - backend:
          serviceName: prometheus-k8s
          servicePort: 9090
[root@worker01 ~]# kubectl apply -f prometheus-ingress.yaml

Open prom.k8s.domain in a browser.
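If any of the three hostnames does not come up, first confirm that the Ingress objects exist and have been picked up by your ingress controller:

[root@worker01 manifests]# kubectl -n monitoring get ingress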

However, on the targets page two targets show no data: kube-scheduler and kube-controller-manager.

This is because they are not deployed inside the cluster (the cluster here was set up with Rancher, and kube-scheduler and kube-controller-manager run as Docker containers).

Whether the components run inside or outside the cluster, the monitoring flow is basically the same; the only difference is that when defining the Service, we have to define its Endpoints ourselves.

3.4 Monitoring kube-controller-manager

Configure the Service and Endpoints:

[root@worker01 manifests]# vim prometheus-KubeControllerManagerService.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-controller-manager
  namespace: kube-system
  labels:
    k8s-app: kube-controller-manager
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http-metrics
    port: 10252
    targetPort: 10252
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-controller-manager
  namespace: kube-system
  labels:
    k8s-app: kube-controller-manager
subsets:
- addresses:
  - ip: 172.10.175.14
  - ip: 172.10.175.15
  - ip: 172.10.175.16
  ports:
  - name: http-metrics
    port: 10252
    protocol: TCP

[root@worker01 manifests]# kubectl apply -f prometheus-KubeControllerManagerService.yaml

After a short while, check the targets page and Grafana; the graphs are now populated.
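If the kube-controller-manager target stays down, it can help to confirm that the metrics port is reachable from a node. This is only a troubleshooting sketch and assumes the insecure metrics port 10252 is still enabled on these hosts:

[root@worker01 manifests]# curl -s http://172.10.175.14:10252/metrics | head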

3.5 Monitoring kube-scheduler

[root@worker01 manifests]# vim prometheus-KubeSchedulerService.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-scheduler
  namespace: kube-system
  labels:
    k8s-app: kube-scheduler
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http-metrics
    port: 10251
    targetPort: 10251
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-scheduler
  namespace: kube-system
  labels:
    k8s-app: kube-scheduler
subsets:
- addresses:
  - ip: 172.10.175.14
  - ip: 172.10.175.15
  - ip: 172.10.175.16
  ports:
  - name: http-metrics
    port: 10251
    protocol: TCP

[root@worker01 manifests]# kubectl apply -f prometheus-KubeSchedulerService.yaml

Refresh after a moment and this one shows up correctly as well.

IV. Monitoring an external etcd cluster

At this point the etcd cluster is still unmonitored (here it, too, runs as Docker containers). So how do we monitor an external etcd cluster? In three steps:

1. Create a ServiceMonitor object so that Prometheus adds the monitoring target.
2. Associate the ServiceMonitor with a Service object that fronts the metrics endpoint.
3. Make sure that Service can actually reach the metrics data.

1. Create the etcd certificate Secret

etcd clusters generally have HTTPS client-certificate authentication enabled for security, so for Prometheus to reach the etcd metrics, the corresponding certificates have to be provided for verification.

Locate the etcd certificate paths, then store the required certificates in the cluster as a Secret object.
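On a Rancher-provisioned node where etcd runs in Docker, the certificate paths can usually be read straight off the running container. This is a hedged example; the container name and flag set may differ in your environment:

[root@worker01 ~]# docker ps --filter name=etcd
[root@worker01 ~]# docker inspect etcd | grep -E 'cert-file|key-file|trusted-ca-file'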

[root@worker01 ssl]# cd /etc/kubernetes/ssl
[root@worker01 ssl]# kubectl -n monitoring create secret generic etcd-certs \
    --from-file=kube-ca.pem \
    --from-file=kube-etcd-172-10-175-14.pem \
    --from-file=kube-etcd-172-10-175-14-key.pem
secret/etcd-certs created

Then reference the etcd-certs object created above in the Prometheus resource object; updating the Prometheus resource directly is enough: vim prometheus-prometheus.yaml

That is, append the following at the end of the file (it still sits under spec):

  secrets:
  - etcd-certs
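Re-apply the manifest so the Operator reconciles the running Prometheus StatefulSet and mounts the Secret:

[root@worker01 manifests]# kubectl apply -f prometheus-prometheus.yaml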

Check the mounted etcd certificate files inside the Prometheus Pod:

[root@worker01 manifests]# kubectl -n monitoring exec -it prometheus-k8s-0 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulting container name to prometheus.
Use 'kubectl describe pod/prometheus-k8s-0 -n monitoring' to see all of the containers in this pod.
/prometheus $ ls /etc/prometheus/secrets/etcd-certs
kube-ca.pem                     kube-etcd-172-10-175-14.pem
kube-etcd-172-10-175-14-key.pem

The certificates Prometheus needs to access the etcd cluster are now in place. Next, create the ServiceMonitor object and the Service:

[root@worker01 etcd]# ll
total 8
-rw-r--r-- 1 root root 606 Jul 22 15:25 prometheus-serviceMonitorEtcd.yaml
-rw-r--r-- 1 root root 475 Jul 22 16:13 serviceMonitorEtcd.yaml

[root@worker01 etcd]# more serviceMonitorEtcd.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-etcd
  namespace: kube-system
  labels:
    k8s-app: k8s-etcd
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: port
    port: 2379
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: k8s-etcd
  namespace: kube-system
  labels:
    k8s-app: k8s-etcd
subsets:
- addresses:
  - ip: 172.10.175.14
  - ip: 172.10.175.15
  - ip: 172.10.175.16
  ports:
  - name: port
    port: 2379
    protocol: TCP

[root@worker01 etcd]# more prometheus-serviceMonitorEtcd.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: k8s-etcd
  namespace: monitoring
  labels:
    k8s-app: k8s-etcd
spec:
  jobLabel: k8s-app
  endpoints:
  - port: port
    interval: 30s
    scheme: https
    tlsConfig:
      caFile: /etc/prometheus/secrets/etcd-certs/kube-ca.pem
      certFile: /etc/prometheus/secrets/etcd-certs/kube-etcd-172-10-175-14.pem
      keyFile: /etc/prometheus/secrets/etcd-certs/kube-etcd-172-10-175-14-key.pem
      insecureSkipVerify: true
  selector:
    matchLabels:
      k8s-app: k8s-etcd
  namespaceSelector:
    matchNames:
    - kube-system
[root@worker01 etcd]# kubectl apply -f .

Wait a moment and check targets in the Prometheus dashboard; the etcd monitoring targets are now listed.

If access is refused, check your etcd listen address: if it only listens on 127.0.0.1, change it to 0.0.0.0 and restart etcd.
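For reference, the relevant etcd startup flag looks like the following. This is only a sketch; on a Rancher-managed node it is changed through the etcd container arguments or the cluster configuration rather than a systemd unit file:

# listen for client traffic on all interfaces instead of loopback only
--listen-client-urls=https://0.0.0.0:2379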

Find an etcd-related dashboard at https://grafana.com/dashboards and import it into Grafana.

V. Grafana dashboards

Grafana already ships with a rich set of dashboards covering cluster, node, and pod metrics as well as the individual Kubernetes components.

Key concepts: Kubernetes taints and tolerations
