Kubernetes Configuration, Part 9: StatefulSet

1. Characteristics of the StatefulSet controller

A StatefulSet is a controller for stateful replica sets. Its key properties:

- Each Pod in a StatefulSet has a stable, dedicated ordinal index.
- Pods are created strictly in ascending ordinal order and terminated in descending order.
- Each Pod gets its own dedicated storage volume.
- A complete StatefulSet consists of three components: a Headless Service (provides resolvable DNS records for the Pods), the StatefulSet itself (manages the Pods), and a volumeClaimTemplate (gives each Pod a dedicated, fixed volume backed by a dynamically or statically provisioned PV).

With dynamic volume provisioning, the controller creates a dedicated PV for each PVC generated from the volumeClaimTemplate, using the StorageClass named in the template. With static provisioning, an administrator must create suitable PVs in advance.
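For reference, the headless Service gives every Pod a stable DNS record of the form below (the cluster.local suffix assumes the default cluster domain):

# A record created for each StatefulSet Pod by the headless Service:
#   <pod-name>.<service-name>.<namespace>.svc.<cluster-domain>
# e.g. for the StatefulSet built later in this article:
#   myapp-0.myapp-svc.statefulset.svc.cluster.local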

2. Creating PVs

With static provisioning, the administrator must create qualifying PVs ahead of time:

[root@k8s-master-01 statefulset]# cat pv-nfs-statefulset.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: statefulset
---
apiVersion: v1
kind: PersistentVolume
# PersistentVolumes are cluster-scoped, so no namespace is set
metadata:
  name: statefulset-nfs-pv1
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfsv1
  nfs:
    path: /data/pv-nfs/pv-1    # NFS export paths must be absolute
    server: k8s-nfs
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: statefulset-nfs-pv2
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfsv1
  nfs:
    path: /data/pv-nfs/pv-2
    server: k8s-nfs

[root@k8s-master-01 statefulset]# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
statefulset-nfs-pv1   2Gi        RWX            Retain           Bound    statefulset/myappdata-myapp-0   nfsv1                   19m
statefulset-nfs-pv2   2Gi        RWX            Retain           Bound    statefulset/myappdata-myapp-1   nfsv1                   19m

3. Creating a StatefulSet with statically provisioned NFS PVs

The StatefulSet controller automatically creates one PVC per Pod, which binds to a PV whose storageClassName matches volumeClaimTemplates.spec.storageClassName. When a StatefulSet references static PVs, the PV's spec.accessModes must exactly match the accessModes in the volumeClaimTemplate; otherwise the PVC cannot bind and the StatefulSet's Pods stay in the Pending state.

[root@k8s-master-01 statefulset]# cat statefulset-deamon.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: statefulset
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
  namespace: statefulset
spec:
  serviceName: myapp-svc
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: nginx:1.12-alpine
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html   # mount paths must be absolute
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: nfsv1
      resources:
        requests:
          storage: 2Gi

[root@k8s-master-01 statefulset]# kubectl get pvc -n statefulset
NAME                STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    statefulset-nfs-pv1   2Gi        RWX            nfsv1          19m
myappdata-myapp-1   Bound    statefulset-nfs-pv2   2Gi        RWX            nfsv1          19m
[root@k8s-master-01 statefulset]# kubectl get pods -n statefulset -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
myapp-0   1/1     Running   0          20m   10.244.3.39   k8s-worker-02   <none>           <none>
myapp-1   1/1     Running   0          20m   10.244.1.67   k8s-worker-01   <none>           <none>
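If a PVC generated from the template stays Pending, for example because of the accessModes mismatch described above, the claim's events usually name the cause; a quick diagnostic sketch using the object names from this example:

kubectl describe pvc myappdata-myapp-0 -n statefulset   # check the Events section for binding errors
kubectl get pv                                          # confirm a PV with matching storageClassName and accessModes exists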

4. Testing StatefulSet Pod DNS names

Pods created by a StatefulSet have fixed identifiers, generated from the StatefulSet name plus the ordinal index:

[root@k8s-master-01 statefulset]# for i in 0 1; do kubectl exec myapp-$i -n statefulset -- sh -c 'hostname'; done
myapp-0
myapp-1
[root@k8s-worker-02 mnt]# docker container exec 5024420aa1fa hostname
myapp-0
[root@k8s-worker-01 ~]# docker container exec 5ea7f1d19e6c hostname
myapp-1

Name resolution:

[root@k8s-master-01 statefulset]# kubectl run -it --image busybox dns-client --restart=Never --rm -- /bin/sh
/ # ping myapp-0.myapp-svc.statefulset.svc
PING myapp-0.myapp-svc.statefulset.svc (10.244.3.39): 56 data bytes
64 bytes from 10.244.3.39: seq=0 ttl=64 time=0.220 ms
64 bytes from 10.244.3.39: seq=1 ttl=64 time=0.147 ms
64 bytes from 10.244.3.39: seq=2 ttl=64 time=0.124 ms
/ # ping myapp-1.myapp-svc.statefulset.svc
PING myapp-1.myapp-svc.statefulset.svc (10.244.1.67): 56 data bytes
64 bytes from 10.244.1.67: seq=0 ttl=62 time=0.638 ms
64 bytes from 10.244.1.67: seq=1 ttl=62 time=0.507 ms
64 bytes from 10.244.1.67: seq=2 ttl=62 time=0.514 ms

[root@k8s-master-01 ~]# for i in 0 1; do kubectl exec myapp-$i -n statefulset -- sh -c 'echo $(date), hostname: $(hostname) > /usr/share/nginx/html/index.html'; done
[root@k8s-master-01 statefulset]# kubectl run -it --image cirros client --restart=Never --rm -- /bin/sh
/ # curl myapp-0.myapp-svc.statefulset.svc
Thu Sep 24 05:08:07 UTC 2020, hostname: myapp-0
/ # curl myapp-1.myapp-svc.statefulset.svc
Thu Sep 24 05:08:07 UTC 2020, hostname: myapp-1
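Resolving the headless Service name itself returns one A record per Pod, which is how members can discover all of their peers; a sketch with the same busybox image (the cluster.local suffix assumes the default cluster domain):

kubectl run -it --rm --image busybox dns-client --restart=Never -- nslookup myapp-svc.statefulset.svc.cluster.local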

5. Scaling StatefulSet Pods

Scaling a StatefulSet up works much like scaling a Deployment, except that new Pods are named sequentially onward from the current highest ordinal; scaling down removes Pods in reverse ordinal order:

[root@k8s-master-01 ~]# kubectl scale statefulset myapp --replicas=3 -n statefulset
[root@k8s-master-01 ~]# kubectl get pods -n statefulset -l app=myapp-pod -w
NAME      READY   STATUS              RESTARTS   AGE
myapp-0   1/1     Running             0          169m
myapp-1   1/1     Running             0          169m
myapp-2   0/1     ContainerCreating   0          4m42s
myapp-2   1/1     Running             0          6m59s
[root@k8s-master-01 ~]# kubectl patch statefulset myapp -p '{"spec":{"replicas":2}}' -n statefulset
statefulset.apps/myapp patched
[root@k8s-master-01 ~]# kubectl get pods -n statefulset -l app=myapp-pod -w
NAME      READY   STATUS        RESTARTS   AGE
myapp-0   1/1     Running       0          173m
myapp-1   1/1     Running       0          173m
myapp-2   0/1     Terminating   0          8m52s
myapp-2   0/1     Terminating   0          8m53s
myapp-2   0/1     Terminating   0          8m53s

6. Testing StatefulSet Pod storage volumes

When a Pod is force-deleted, its associated PVC and PV are not deleted; when the Pod is recreated, it re-binds to the original PVC, so the data on the PV is preserved. See the sketch below.
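A minimal sketch of verifying this, reusing the index.html written in section 4 (object names as in this article's example):

kubectl delete pod myapp-1 -n statefulset --force --grace-period=0            # force-delete one Pod
kubectl get pvc -n statefulset                                                # myappdata-myapp-1 is still Bound to its PV
kubectl exec myapp-1 -n statefulset -- cat /usr/share/nginx/html/index.html   # the recreated Pod still serves the old data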

7. Rolling updates of StatefulSet Pods

RollingUpdate is the default update strategy for StatefulSet Pods. Pods are updated in order from the highest ordinal down to the lowest, and each Pod is updated only once the Pod with the next higher ordinal is Running and Ready:

[root@k8s-master-01 statefulset]# kubectl set image statefulset myapp myapp=nginx:alpine -n statefulset
statefulset.apps/myapp image updated
[root@k8s-master-01 ~]# kubectl get pods -n statefulset -l app=myapp-pod -w
NAME      READY   STATUS              RESTARTS   AGE
myapp-0   1/1     Running             0          3h23m
myapp-1   1/1     Running             0          3h23m
myapp-1   1/1     Terminating         0          3h26m
myapp-1   0/1     Terminating         0          3h26m
myapp-1   0/1     Terminating         0          3h26m
myapp-1   0/1     Terminating         0          3h26m
myapp-1   0/1     Pending             0          0s
myapp-1   0/1     Pending             0          0s
myapp-1   0/1     ContainerCreating   0          0s
myapp-1   1/1     Running             0          2s
myapp-0   1/1     Terminating         0          3h26m
myapp-0   0/1     Terminating         0          3h26m
myapp-0   0/1     Terminating         0          3h26m
myapp-0   0/1     Terminating         0          3h26m
myapp-0   0/1     Pending             0          0s
myapp-0   0/1     Pending             0          0s
myapp-0   0/1     ContainerCreating   0          0s
myapp-0   1/1     Running             0          2s
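The progress of an update can also be followed with kubectl's rollout subcommands; a sketch:

kubectl rollout status statefulset myapp -n statefulset    # blocks until the rollout completes
kubectl rollout history statefulset myapp -n statefulset   # lists the controller revisions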

8. Staging (pausing) StatefulSet updates

Only Pods whose ordinal is greater than or equal to updateStrategy.rollingUpdate.partition are updated when a new template is pushed. Setting the partition to one greater than the highest Pod ordinal therefore stages the update without actually updating any Pod:

[root@k8s-master-01 statefulset]# kubectl patch statefulset myapp -p '{"spec": {"updateStrategy":{"rollingUpdate": {"partition":3}}}}' -n statefulset
statefulset.apps/myapp patched
[root@k8s-master-01 statefulset]# kubectl get pods -l app=myapp-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image -n statefulset
NAME      IMAGE
myapp-0   nginx:alpine
myapp-1   nginx:alpine

9. Canary updates of StatefulSet Pods

Setting updateStrategy.rollingUpdate.partition to the highest Pod ordinal updates just that one Pod; once the updated Pod is running stably, lower the partition to roll the update on to the next Pod:

[root@k8s-master-01 statefulset]# kubectl patch statefulset myapp -p '{"spec": {"updateStrategy":{"rollingUpdate": {"partition":1}}}}' -n statefulset
statefulset.apps/myapp patched
[root@k8s-master-01 statefulset]# kubectl get pods -l app=myapp-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image -n statefulset
NAME      IMAGE
myapp-0   nginx:alpine
myapp-1   nginx:1.12-alpine
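To promote the canary and finish the rollout, lower the partition back to 0 so all remaining Pods are updated; a sketch:

kubectl patch statefulset myapp -n statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl get pods -l app=myapp-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image -n statefulset   # all Pods now report the new image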

10. Configuring dynamic NFS provisioning

Kubernetes has no built-in dynamic PV provisioner for NFS; an external provisioner plugin must be installed.

10.1 Create the StorageClass

The StorageClass name is what the volumeClaimTemplate references so that PVCs are provisioned dynamically:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # or choose another name; must match the deployment's PROVISIONER_NAME env

10.2 Create the nfs-client-provisioner

The Deployment manifest must point at the actual NFS server and its exported path; see https://github.com/rimusz/nfs-client-provisioner/blob/master/deploy/

kind: Deployment
apiVersion: apps/v1            # extensions/v1beta1 was removed in Kubernetes 1.16
metadata:
  name: nfs-client-provisioner
  namespace: statefulset
spec:
  replicas: 2
  selector:                    # required by apps/v1
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: k8s-nfs
        - name: NFS_PATH
          value: /data/nfs-privisioner
      volumes:
      - name: nfs-client-root
        nfs:
          server: k8s-nfs
          path: /data/nfs-privisioner

[root@k8s-master-01 nfs-client-provisioner]# kubectl get deployment -n statefulset
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   2/2     2            2           4m20s
[root@k8s-master-01 nfs-client-provisioner]# kubectl get pod -n statefulset
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5c8f58cc6c-m492t   1/1     Running   0          4m25s
nfs-client-provisioner-5c8f58cc6c-tq9lt   1/1     Running   0          4m25s

10.3 Create the RBAC objects

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: statefulset
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: statefulset
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: statefulset
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: statefulset
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: statefulset
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
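Before wiring the StorageClass into a StatefulSet, the provisioner can be verified end to end with a standalone PVC; a minimal sketch (the claim name test-claim is hypothetical):

kubectl apply -n statefulset -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim -n statefulset      # should become Bound to an auto-created pvc-... volume
kubectl delete pvc test-claim -n statefulset   # clean up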

10.4 Create a StatefulSet to test dynamic provisioning

With dynamic PV provisioning, the replica count and the requested volume size can be adjusted at any time, without creating PVs in advance:

[root@k8s-master-01 statefulset]# cat statefulset-deamon.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: statefulset
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
  namespace: statefulset
spec:
  serviceName: myapp-svc
  replicas: 4
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: nginx:1.12-alpine
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteMany" ]
      storageClassName: managed-nfs-storage
      resources:
        requests:
          storage: 2Gi

[root@k8s-master-01 statefulset]# kubectl get pvc -n statefulset
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myappdata-myapp-0   Bound    pvc-4adb0e56-ba0a-47c9-8eed-ebf5ee895b35   2Gi        RWX            managed-nfs-storage   4m9s
myappdata-myapp-1   Bound    pvc-28e414dd-7d4a-4894-9ac6-f8e1b2d1ed1d   2Gi        RWX            managed-nfs-storage   4m4s
myappdata-myapp-2   Bound    pvc-f5314bdf-6147-4e6e-9331-66bf54f5a019   2Gi        RWX            managed-nfs-storage   4m
myappdata-myapp-3   Bound    pvc-82a689b0-ba38-4353-a5ac-e8670e241434   2Gi        RWX            managed-nfs-storage   3m57s
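Note that PVCs created from volumeClaimTemplates are deliberately not garbage-collected: scaling down or even deleting the StatefulSet leaves the claims and their data behind, so they must be removed explicitly once they are no longer needed; a sketch:

kubectl scale statefulset myapp --replicas=2 -n statefulset
kubectl get pvc -n statefulset                                          # myappdata-myapp-2/3 still exist
kubectl delete pvc myappdata-myapp-2 myappdata-myapp-3 -n statefulset   # manual cleanup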

11. Building an etcd cluster with a StatefulSet

etcd is a distributed key-value store that is reliable, fast, and strongly consistent; it enables dependable distributed coordination through distributed locks, leader election, and write barriers.

11.1 Create the StorageClass

[root@k8s-master-01 statefulset]# cat nfs-client-provisioner/class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcd-storage
provisioner: fuseim.pri/ifs   # or choose another name; must match the deployment's PROVISIONER_NAME env

11.2 Create the Services

The first Service is headless and provides name resolution for the Pods (etcd-0, etcd-1, etcd-2); the second is a NodePort Service that exposes the etcd cluster outside the cluster:

[root@k8s-master-01 statefulset]# cat etcd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
  - port: 2379
    name: client
  - port: 2380
    name: peer
  clusterIP: None
  selector:
    app: etcd-member
---
apiVersion: v1
kind: Service
metadata:
  name: etcd-client
spec:
  ports:
  - name: client-client
    port: 2379
    protocol: TCP
    targetPort: 2379
  selector:
    app: etcd-member
  type: NodePort

11.3 Create the StatefulSet

[root@k8s-master-01 statefulset]# cat etcd.statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd
  labels:
    app: etcd
spec:
  serviceName: etcd
  replicas: 3
  selector:
    matchLabels:
      app: etcd-member
  template:
    metadata:
      name: etcd
      labels:
        app: etcd-member
    spec:
      containers:
      - name: etcd
        image: "quay.io/coreos/etcd:v3.2.16"
        ports:
        - containerPort: 2379
          name: client
        - containerPort: 2380
          name: peer
        env:
        - name: CLUSTER_SIZE
          value: "3"
        - name: SET_NAME
          value: "etcd"
        volumeMounts:
        - name: data
          mountPath: /var/run/etcd
        command:
        - "/bin/sh"
        - "-ecx"
        - |
          IP=$(hostname -i)
          PEERS=""
          # build the initial-cluster list: etcd-0=http://etcd-0.etcd:2380,etcd-1=...
          for i in $(seq 0 $((${CLUSTER_SIZE} - 1))); do
            PEERS="${PEERS}${PEERS:+,}${SET_NAME}-${i}=http://${SET_NAME}-${i}.${SET_NAME}:2380"
          done
          exec etcd --name ${HOSTNAME} \
            --listen-peer-urls http://${IP}:2380 \
            --listen-client-urls http://${IP}:2379,http://127.0.0.1:2379 \
            --advertise-client-urls http://${HOSTNAME}.${SET_NAME}:2379 \
            --initial-advertise-peer-urls http://${HOSTNAME}.${SET_NAME}:2380 \
            --initial-cluster-token etcd-cluster-1 \
            --initial-cluster ${PEERS} \
            --initial-cluster-state new \
            --data-dir /var/run/etcd/default.etcd
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: etcd-storage
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: 1Gi

[root@k8s-master-01 statefulset]# kubectl get pods -l app=etcd-member -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
etcd-0   1/1     Running   0          10m   10.244.3.94    k8s-worker-02   <none>           <none>
etcd-1   1/1     Running   0          10m   10.244.3.95    k8s-worker-02   <none>           <none>
etcd-2   1/1     Running   0          10m   10.244.1.118   k8s-worker-01   <none>           <none>

[root@k8s-master-01 statefulset]# kubectl get statefulset -o wide
NAME   READY   AGE   CONTAINERS   IMAGES
etcd   3/3     11m   etcd         quay.io/coreos/etcd:v3.2.16

[root@k8s-master-01 statefulset]# kubectl get endpoints
NAME          ENDPOINTS                                                          AGE
etcd          10.244.1.118:2380,10.244.3.94:2380,10.244.3.95:2380 + 3 more...   81s
etcd-client   10.244.1.118:2379,10.244.3.94:2379,10.244.3.95:2379               81s
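To confirm that the three members formed a single healthy cluster, etcdctl can be run inside any member; a sketch, assuming the etcdctl bundled in the quay.io/coreos/etcd:v3.2.16 image (which defaults to the v2 API):

kubectl exec etcd-0 -- etcdctl cluster-health   # all three members should report healthy
kubectl exec etcd-0 -- etcdctl member list      # lists etcd-0, etcd-1, etcd-2 with their peer URLs
kubectl exec etcd-0 -- sh -c 'ETCDCTL_API=3 etcdctl put foo bar && ETCDCTL_API=3 etcdctl get foo'   # simple write/read test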
