Kubernetes Configuration, Part 12: Pod Resource Scheduling


1. Overview of the Kubernetes Scheduler

After the API server accepts a client's request to create a Pod, kube-scheduler must pick the best node on which to run it; by default this is the built-in default-scheduler. Scheduling proceeds in three phases: pre-selection (predicates), scoring (priorities), and selection.

Node pre-selection: every node is checked against a series of predicate rules, and nodes that fail any of them are filtered out. Node scoring: the nodes that pass pre-selection are ranked by priority. Node selection: the highest-ranked node is chosen to run the Pod object; if several nodes tie for the top rank, one of them is picked at random.
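The scheduler that handles a Pod can also be named explicitly through spec.schedulerName; a minimal sketch (the field simply defaults to default-scheduler when omitted, and the Pod name here is illustrative, not from the examples below):

apiVersion: v1
kind: Pod
metadata:
  name: scheduler-demo                 # hypothetical name
spec:
  schedulerName: default-scheduler     # explicit for illustration; this is the default
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1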

1.1 Common Pre-selection (Predicate) Policies

HostName: checks whether the node's hostname matches the value of the Pod object's spec.hostname field;

MatchNodeSelector: checks whether the node's labels match the value of the Pod object's spec.nodeSelector field (see the sketch after this list);

PodToleratesNodeTaints: checks, against the Pod object's spec.tolerations field, whether the Pod can tolerate the taints defined on the node;

MatchInterPodAffinity: checks whether the node satisfies the Pod object's affinity and anti-affinity conditions;

CheckNodeMemoryPressure: if the node has reported memory pressure, checks whether the Pod may still be scheduled onto it;

CheckNodeDiskPressure: if the node has reported disk pressure, checks whether the Pod may still be scheduled onto it;
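As a concrete illustration of MatchNodeSelector, a minimal sketch of a Pod that can only pass pre-selection on nodes carrying a disktype=ssd label (both the Pod name and the label are illustrative, not from the original examples):

apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo    # hypothetical name
spec:
  nodeSelector:
    disktype: ssd            # illustrative label; any node label key/value works
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1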

1.2 Common Scoring (Priority) Policies

LeastRequestedPriority: computes a score from the proportion of resources left unrequested on the node; the more idle capacity, the higher the score (see the formula sketch after this list);

NodeAffinityPriority: scores nodes against the Pod's node affinity scheduling preferences;

TaintTolerationPriority: scores nodes according to the Pod object's tolerance of each node's taints;

NodeLabelPriority: scores nodes by whether they carry particular labels;
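For reference, LeastRequestedPriority scores each node as the average of the CPU and memory idle ratios scaled to 0-10 (a sketch of the upstream formula; exact details can vary slightly by Kubernetes version):

score = (cpu((capacity - requested) * 10 / capacity) +
         memory((capacity - requested) * 10 / capacity)) / 2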

2. Node Affinity Scheduling

2.1 Node Hard Affinity

[root@k8s-master-01 scheduler]# cat required-node-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-required-nodeaffinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: zone, operator: In, values: ["foo"]}
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8s-master-01 scheduler]# kubectl get pod/with-required-nodeaffinity
NAME                         READY   STATUS    RESTARTS   AGE
with-required-nodeaffinity   0/1     Pending   0          39s
# The new Pod demands a node whose zone label has the value foo; no node
# qualifies yet, so the Pod stays Pending

Note: IgnoredDuringExecution means that once a Pod has been scheduled under this rule, the scheduler will not evict it from the node even if the node's labels later change so that they no longer satisfy the node affinity rule; the rule constrains only the scheduling of new Pods;

# No node satisfies the rule yet, so the Pod stays Pending
[root@k8s-master-01 scheduler]# kubectl get pod/with-required-nodeaffinity
NAME                         READY   STATUS    RESTARTS   AGE
with-required-nodeaffinity   0/1     Pending   0          39s
[root@k8s-master-01 scheduler]# kubectl label node k8s-worker-01 zone=foo
node/k8s-worker-01 labeled
[root@k8s-master-01 scheduler]# kubectl label node k8s-worker-02 zone=bar
node/k8s-worker-02 labeled
[root@k8s-master-01 scheduler]# kubectl get pod/with-required-nodeaffinity -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
with-required-nodeaffinity   1/1     Running   0          7m16s   10.244.1.10   k8s-worker-01   <none>           <none>
# Once k8s-worker-01 carries the matching label, the Pod is scheduled successfully
[root@k8s-master-01 scheduler]# kubectl describe pods with-required-nodeaffinity
Events:
  Type     Reason            Age                    From                    Message
  ----     ------            ----                   ----                    -------
  Warning  FailedScheduling  3m19s (x6 over 9m16s)  default-scheduler       0/3 nodes are available: 3 node(s) didn't match node selector.
  Normal   Scheduled         2m27s                  default-scheduler       Successfully assigned default/with-required-nodeaffinity to k8s-worker-01
  Normal   Pulled            2m24s                  kubelet, k8s-worker-01  Container image "ikubernetes/myapp:v1" already present on machine
  Normal   Created           2m23s                  kubelet, k8s-worker-01  Created container myapp
  Normal   Started           2m23s                  kubelet, k8s-worker-01  Started container myapp
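To observe the IgnoredDuringExecution behavior described in the note above, you could now remove the label again (a follow-up sketch, not part of the original session; the already-running Pod is not evicted):

[root@k8s-master-01 scheduler]# kubectl label node k8s-worker-01 zone-
[root@k8s-master-01 scheduler]# kubectl get pod/with-required-nodeaffinity -o wide
# the Pod still shows Running on k8s-worker-01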

[root@k8s-master-01 scheduler]# kubectl label node k8s-worker-01 ssd=true
node/k8s-worker-01 labeled
[root@k8s-master-01 scheduler]# kubectl label node k8s-worker-02 ssd=true
node/k8s-worker-02 labeled
# Two match expressions are defined under one nodeSelectorTerm; the relationship
# between them is logical AND
[root@k8s-master-01 scheduler]# cat required-node-affinity-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-required-nodeaffinity-2
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: zone, operator: In, values: ["foo", "bar"]}
          - {key: ssd, operator: Exists, values: []}
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8s-master-01 scheduler]# kubectl describe pods with-required-nodeaffinity-2
Events:
  Type    Reason     Age   From                    Message
  ----    ------     ----  ----                    -------
  Normal  Scheduled  109s  default-scheduler       Successfully assigned default/with-required-nodeaffinity-2 to k8s-worker-02
  Normal  Pulled     106s  kubelet, k8s-worker-02  Container image "ikubernetes/myapp:v1" already present on machine
  Normal  Created    106s  kubelet, k8s-worker-02  Created container myapp
  Normal  Started    106s  kubelet, k8s-worker-02  Started container myapp

Note: node affinity is only a pre-selection (predicate) condition; the final placement is still influenced by the scoring functions;

2.2 Node Soft Affinity

[root@k8s-master-01 scheduler]# cat prefer-node-affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: with-prefer-nodeaffinity
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 60
            preference:
              matchExpressions:
              - {key: zone, operator: In, values: ["foo"]}
          - weight: 30
            preference:
              matchExpressions:
              - {key: ssd, operator: Exists, values: []}
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1

# Pods are created on worker-01 and worker-02 at a ratio of 2:1
[root@k8s-master-01 scheduler]# kubectl get pods -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
with-prefer-nodeaffinity-6d94c6fb65-cv68k   1/1     Running   0          16s   10.244.1.12   k8s-worker-01   <none>           <none>
with-prefer-nodeaffinity-6d94c6fb65-jj9tf   1/1     Running   0          16s   10.244.3.37   k8s-worker-02   <none>           <none>
with-prefer-nodeaffinity-6d94c6fb65-vzfb7   1/1     Running   0          16s   10.244.1.13   k8s-worker-01   <none>           <none>

Note: with node soft affinity, the scheduler uses the weight attached to each matching preference to build a priority ranking of the candidate nodes and then schedules against that ranking.

3. Pod Affinity Scheduling

As with node scheduling, required expresses hard affinity and preferred expresses soft affinity. Pod affinity scheduling evaluates candidate nodes through the built-in MatchInterPodAffinity predicate and the InterPodAffinityPriority priority function.

3.1 Pod Hard Affinity

# Create a Pod object for others to depend on
[root@k8s-master-01 scheduler]# kubectl run tomcat -l app=tomcat --image tomcat:alpine
# The new Pod selects the existing Pods it cares about via labelSelector, then uses the
# kubernetes.io/hostname label of the nodes running those Pods to decide what counts
# as "the same location"
[root@k8s-master-01 scheduler]# cat required-pod-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-required-podaffinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["tomcat"]}
        topologyKey: kubernetes.io/hostname
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
[root@k8s-master-01 scheduler]# kubectl get pod/tomcat -o wide -L app
NAME     READY   STATUS    RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES   APP
tomcat   1/1     Running   0          5m31s   10.244.3.38   k8s-worker-02   <none>           <none>            tomcat
[root@k8s-master-01 scheduler]# kubectl get pod/with-required-podaffinity -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
with-required-podaffinity   1/1     Running   0          58s   10.244.3.39   k8s-worker-02   <none>           <none>

Note: kubernetes.io/hostname is a built-in label carried by every node in the cluster;
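You can confirm it with the -L flag (a quick check; the output varies by cluster):

[root@k8s-master-01 scheduler]# kubectl get nodes -L kubernetes.io/hostname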

[root@k8s-master-01 scheduler]# kubectl run db -l app=db --image redis:alpine
[root@k8s-master-01 scheduler]# cat required-pod-affinity-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: with-required-podaffinity-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["db"]}
            topologyKey: zone
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
# "Same zone" is determined by the zone label of the node running the db Pod
[root@k8s-master-01 scheduler]# kubectl get pod -o wide
NAME                                           READY   STATUS    RESTARTS   AGE    IP            NODE            NOMINATED NODE   READINESS GATES
db                                             1/1     Running   0          2m9s   10.244.1.26   k8s-worker-01   <none>           <none>
with-required-podaffinity-2-584759cd64-9glzk   1/1     Running   0          23m    10.244.1.17   k8s-worker-01   <none>           <none>
with-required-podaffinity-2-584759cd64-crsgr   1/1     Running   0          23m    10.244.1.18   k8s-worker-01   <none>           <none>
with-required-podaffinity-2-584759cd64-hb5lj   1/1     Running   0          23m    10.244.1.16   k8s-worker-01   <none>           <none>

Note: the labelSelector only matches Pods that live in the same namespace as the Pod being scheduled; Pods in other namespaces can be matched by listing the namespaces explicitly, as in the sketch below;
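A minimal sketch of the namespaces field of a pod affinity term (the backend namespace here is hypothetical):

      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["db"]}
            namespaces: ["default", "backend"]   # "backend" is illustrative
            topologyKey: zone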

3.2 Pod Soft Affinity

[root@k8s-master-01 scheduler]# kubectl run db-1 -l app=cache --image redis:alpine --replicas=2
[root@k8s-master-01 scheduler]# kubectl get pod/db -o wide -L app
NAME   READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES   APP
db     1/1     Running   0          13m   10.244.1.26   k8s-worker-01   <none>           <none>            db
[root@k8s-master-01 scheduler]# kubectl get pod/db-1 -o wide -L app
NAME   READY   STATUS    RESTARTS   AGE    IP            NODE            NOMINATED NODE   READINESS GATES   APP
db-1   1/1     Running   0          5m1s   10.244.3.40   k8s-worker-02   <none>           <none>            cache

[root@k8s-master-01 scheduler]# cat prefer-pod-affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: with-prefer-podaffinity
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["cache"]}
              topologyKey: zone
          - weight: 50
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["db"]}
              topologyKey: zone
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
# With equal weights, the replicas end up split evenly between worker-01 and worker-02
[root@k8s-master-01 scheduler]# kubectl get pods -o wide -L app
NAME                                      READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES   APP
db                                        1/1     Running   0          27m   10.244.1.26   k8s-worker-01   <none>           <none>            db
db-1                                      1/1     Running   0          19m   10.244.3.40   k8s-worker-02   <none>           <none>            cache
with-prefer-podaffinity-7b9494787-9cvxv   1/1     Running   0          75s   10.244.3.60   k8s-worker-02   <none>           <none>            myapp
with-prefer-podaffinity-7b9494787-kzdn8   1/1     Running   0          75s   10.244.1.28   k8s-worker-01   <none>           <none>            myapp
with-prefer-podaffinity-7b9494787-ptqgb   1/1     Running   0          75s   10.244.1.29   k8s-worker-01   <none>           <none>            myapp
with-prefer-podaffinity-7b9494787-v6lmq   1/1     Running   0          75s   10.244.3.61   k8s-worker-02   <none>           <none>            myapp

3.3 Pod Anti-affinity

[root@k8s-master-01 scheduler]# cat required-pod-anti-affinity.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: with-required-anti-podaffinity
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["myapp"]}
            topologyKey: kubernetes.io/hostname
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1

[root@k8s-master-01 scheduler]# kubectl get pod -o wide -L app
NAME                                             READY   STATUS              RESTARTS   AGE   IP       NODE            NOMINATED NODE   READINESS GATES   APP
with-required-anti-podaffinity-b5b869d8c-66xhw   0/1     ContainerCreating   0          3s    <none>   k8s-worker-02   <none>           <none>            myapp
with-required-anti-podaffinity-b5b869d8c-lmx8n   0/1     ContainerCreating   0          3s    <none>   k8s-worker-01   <none>           <none>            myapp
with-required-anti-podaffinity-b5b869d8c-n9dz2   0/1     Pending             0          2s    <none>   <none>          <none>           <none>            myapp
# With only two worker nodes, creating three mutually exclusive Pods leaves one
# of them with no node to land on, so it stays Pending

Note: Pod anti-affinity can also be expressed as a soft (preferred) rule, for example:
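A minimal sketch of such a soft anti-affinity term (the weight is illustrative; this snippet is not from the original article):

    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100                  # illustrative weight
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app, operator: In, values: ["myapp"]}
              topologyKey: kubernetes.io/hostname
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1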

4. Taints and Tolerations

4.1 Defining Taints and Tolerations

A taint is key/value attribute data defined on a node that lets the node repel Pods that cannot accept it; a toleration is key/value data defined on a Pod object that declares which node taints the Pod can tolerate. Kubernetes implements this mechanism with the PodToleratesNodeTaints predicate and the TaintTolerationPriority priority function. The syntax of a node taint (and of a Pod toleration) is key=value:effect, where effect expresses how strongly the Pod is repelled:

NoSchedule: new Pods that cannot tolerate this taint must not be scheduled onto the node; this is a hard constraint, and Pods already running on the node are unaffected;

PreferNoSchedule: new Pods that cannot tolerate this taint should preferably not be scheduled onto the node; this is a soft constraint, and Pods already running on the node are unaffected;

NoExecute: new Pods that cannot tolerate this taint must not be scheduled onto the node; this is a hard constraint, and if a later change to the node's taints or to a Pod's tolerations breaks the match, the running Pod is evicted.

[root@k8s-master-01 scheduler]# kubectl describe node k8s-master-01
Taints: node-role.kubernetes.io/master:NoSchedule
[root@k8s-master-01 scheduler]# kubectl describe pods kube-flannel-ds-94kj9 -n kube-system
Tolerations: :NoSchedule
             node.kubernetes.io/disk-pressure:NoSchedule
             node.kubernetes.io/memory-pressure:NoSchedule
             node.kubernetes.io/network-unavailable:NoSchedule
             node.kubernetes.io/not-ready:NoExecute
             node.kubernetes.io/pid-pressure:NoSchedule
             node.kubernetes.io/unreachable:NoExecute
             node.kubernetes.io/unschedulable:NoSchedule

Note: in a cluster deployed with kubeadm, the master node is automatically tainted so that Pods that cannot tolerate the taint never run there, while critical system components are granted broad tolerations (including for disk and memory pressure and for not-ready or unreachable nodes); this keeps non-critical Pods off the master while still keeping the cluster itself running;
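If a Pod genuinely needs to run on such a master, a toleration matching the taint shown above would make it schedulable there (a sketch, not from the original article):

tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"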

4.2 Managing Node Taints and Pod Tolerations

# Define a taint on a node
[root@k8s-master-01 scheduler]# kubectl taint nodes k8s-worker-01 node-type=product:NoSchedule
node/k8s-worker-01 tainted
[root@k8s-master-01 scheduler]# kubectl get nodes k8s-worker-01 -o go-template={{.spec.taints}}
[map[effect:NoSchedule key:node-type value:product]]

# Define a toleration on a Pod
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v2
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoExecute"
    tolerationSeconds: 3600

# With the Exists operator the value is left empty; only the key and effect are checked
tolerations:
- key: "key1"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 3600

Equal means the toleration matches a taint only when key and value are identical; Exists merely checks that a taint with the given key exists; tolerationSeconds sets how long eviction of the current Pod is delayed after a NoExecute taint starts to apply. Taints offer a flexible way to take nodes in and out of service while ensuring that only Pods tolerating the taint, such as designated non-production Pods, are scheduled there. Nodes that are not ready, or that are under memory or disk pressure, are tainted automatically by the cluster, and the taints are removed automatically once the node recovers.
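A taint added from the command line is removed with the trailing-dash form (shown here for the taint created above):

[root@k8s-master-01 scheduler]# kubectl taint nodes k8s-worker-01 node-type=product:NoSchedule-
node/k8s-worker-01 untainted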
