Migrating Persistent Data Between Kubernetes Clusters by Sharing One Ceph Storage Backend




[TOC]

The latest stable Kubernetes release at the time of writing is 1.14, and so far there is no official solution for migrating persistent storage between different Kubernetes clusters. However, based on how Kubernetes binds PVs and PVCs, as long as the "storage" --> "PV" --> "PVC" binding relationship is identical, different Kubernetes clusters can mount the same storage and see the same data.
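The three links of that chain are visible directly on the objects themselves. As a quick sketch (using the PV and PVC names from the example later in this article), you can inspect them like this:

```bash
# Sketch: the "storage -> PV -> PVC" chain, one field per link.
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -o jsonpath='{.spec.rbd.image}'     # PV -> Ceph RBD image
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -o jsonpath='{.spec.claimRef.name}' # PV -> bound PVC
kubectl get pvc rbd-pv-claim -o jsonpath='{.spec.volumeName}'                               # PVC -> backing PV
```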

1. Environment

My original Kubernetes cluster was self-hosted on Alibaba Cloud ECS instances, and I now want to switch to a managed Kubernetes cluster purchased from Alibaba Cloud. Because many applications in the cluster use small volumes of 1 GiB or 2 GiB, I want to keep using the existing Ceph storage.

Kubernetes: v1.13.4
Ceph: 12.2.10 luminous (stable)

Both Kubernetes clusters manage storage through a StorageClass and connect to the same Ceph cluster. See the earlier article "Kubernetes使用Ceph动态卷部署应用" (deploying applications on Kubernetes with Ceph dynamic volumes).
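For reference, a minimal sketch of such a StorageClass is shown below. The monitors, pool, and user are taken from the PV dump in section 2.1; the `admin*` secret names are assumptions for illustration, so adjust them to your own Ceph setup:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 172.18.43.220:6789,172.18.138.121:6789,172.18.228.201:6789
  pool: kube
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  adminId: admin                       # assumption
  adminSecretName: ceph-secret-admin   # assumption
  adminSecretNamespace: kube-system
EOF
```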

2. Migration Walkthrough

The data stays in the Ceph storage the whole time; nothing is actually moved. "Migration" here is only relative to the two Kubernetes clusters.

2.1 Extract the persistent storage objects from the old Kubernetes cluster

To make the effect easier to see, create a new nginx Deployment that uses a Ceph RBD volume as persistent storage, then write some data into it.

vim rbd-claim.yaml

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
```

vim rbd-nginx-dy.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd-dy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: ceph-cephfs-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: ceph-cephfs-volume
          persistentVolumeClaim:
            claimName: rbd-pv-claim
```

```bash
# Create the PVC and the Deployment
kubectl create -f rbd-claim.yaml
kubectl create -f rbd-nginx-dy.yaml
```

Check the result and write some data into nginx's persistent directory:

```
[root@node5 tmp]# kubectl get pvc,pod
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/rbd-pv-claim   Bound    pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   1Gi        RWO            ceph-rbd       4m37s

NAME                                READY   STATUS    RESTARTS   AGE
pod/nginx-rbd-dy-7455884d49-rthzt   1/1     Running   0          4m36s
[root@node5 tmp]# kubectl exec -it nginx-rbd-dy-7455884d49-rthzt /bin/bash
root@nginx-rbd-dy-7455884d49-rthzt:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay          40G   23G   15G  62% /
tmpfs            64M     0   64M   0% /dev
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/vda1        40G   23G   15G  62% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/rbd5       976M  2.6M  958M   1% /usr/share/nginx/html
tmpfs            16G   12K   16G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            16G     0   16G   0% /proc/acpi
tmpfs            16G     0   16G   0% /proc/scsi
tmpfs            16G     0   16G   0% /sys/firmware
root@nginx-rbd-dy-7455884d49-rthzt:/# echo ygqygq2 > /usr/share/nginx/html/ygqygq2.html
root@nginx-rbd-dy-7455884d49-rthzt:/# exit
exit
[root@node5 tmp]#
```

Extract the PV and PVC definitions:

```
[root@node5 tmp]# kubectl get pvc rbd-pv-claim -oyaml --export > rbd-pv-claim-export.yaml
[root@node5 tmp]# kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -oyaml --export > pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
[root@node5 tmp]# more rbd-pv-claim-export.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pvc-protection
  name: rbd-pv-claim
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/rbd-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
  volumeMode: Filesystem
  volumeName: pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
status: {}
[root@node5 tmp]# more pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
  selfLink: /api/v1/persistentvolumes/pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: rbd-pv-claim
    namespace: default
    resourceVersion: "51998402"
    uid: d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
  persistentVolumeReclaimPolicy: Retain
  rbd:
    fsType: ext4
    image: kubernetes-dynamic-pvc-dac8284a-6a1c-11e9-b533-1604a9a8a944
    keyring: /etc/ceph/keyring
    monitors:
    - 172.18.43.220:6789
    - 172.18.138.121:6789
    - 172.18.228.201:6789
    pool: kube
    secretRef:
      name: ceph-secret
      namespace: kube-system
    user: kube
  storageClassName: ceph-rbd
  volumeMode: Filesystem
status: {}
[root@node5 tmp]#
```
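Note that `--export` was deprecated in kubectl 1.14 and removed in 1.18. On newer clusters you can get the same effect by stripping the cluster-specific fields yourself, for example with jq (a sketch):

```bash
# Equivalent of --export on newer kubectl: drop the fields the new cluster
# must regenerate (uid, resourceVersion, selfLink, creationTimestamp, status).
kubectl get pvc rbd-pv-claim -o json \
  | jq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.selfLink,
            .metadata.creationTimestamp, .status)' \
  > rbd-pv-claim-export.json
```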

2.2 Import the extracted PV and PVC into the new Kubernetes cluster

Copy the PV and PVC files extracted above to the new Kubernetes cluster:

```
[root@node5 tmp]# rsync -avz pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml rbd-pv-claim-export.yaml rbd-nginx-dy.yaml 172.18.97.95:/tmp/
sending incremental file list
pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml
rbd-nginx-dy.yaml
rbd-pv-claim-export.yaml

sent 1,371 bytes  received 73 bytes  2,888.00 bytes/sec
total size is 2,191  speedup is 1.52
[root@node5 tmp]#
```

Import the PV and PVC on the new Kubernetes cluster:

```
[root@iZwz9g5ec0q4fc8iuqawr0Z tmp]# kubectl apply -f pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee-export.yaml -f rbd-pv-claim-export.yaml
persistentvolume/pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee created
persistentvolumeclaim/rbd-pv-claim created
[root@iZwz9g5ec0q4fc8iuqawr0Z tmp]# kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                  STORAGECLASS   REASON   AGE
pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   1Gi        RWO            Retain           Released   default/rbd-pv-claim   ceph-rbd                20s
[root@iZwz9g5ec0q4fc8iuqawr0Z tmp]# kubectl get pvc rbd-pv-claim
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pv-claim   Lost     pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee   0                         ceph-rbd       28s
[root@iZwz9g5ec0q4fc8iuqawr0Z tmp]#
```

As you can see, the PVC shows a Lost status. This is because when the PV and PVC are created in the new cluster, each object gets a freshly generated resourceVersion and uid, so the spec.claimRef recorded in the imported PV still describes the old PVC.
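One way to see the mismatch for yourself (a hypothetical check, not part of the original post):

```bash
# The uid recorded in the imported PV's claimRef is the OLD PVC's uid...
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee -o jsonpath='{.spec.claimRef.uid}{"\n"}'
# ...while the freshly created PVC has a brand-new uid, so the two no longer match:
kubectl get pvc rbd-pv-claim -o jsonpath='{.metadata.uid}{"\n"}'
```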

To clear this stale spec.claimRef from the imported PV, we simply delete it and let the PV controller rebind the PV and PVC on its own:

We wrap this up in a small script:

vim unbound.sh

```bash
pv=$*

function unbound() {
    # Blank out the stale binding info; with every field empty, claimRef is
    # serialized as a single "claimRef: {}" line in the PV's YAML.
    kubectl patch pv -p '{"spec":{"claimRef":{"apiVersion":"","kind":"","name":"","namespace":"","resourceVersion":"","uid":""}}}' \
        $pv
    kubectl get pv $pv -oyaml > /tmp/.pv.yaml
    # Drop that single claimRef line entirely, then push the result back.
    sed -i '/claimRef/d' /tmp/.pv.yaml
    #kubectl apply -f /tmp/.pv.yaml
    kubectl replace -f /tmp/.pv.yaml
}

unbound
```

sh unbound.sh pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee

After running the script, wait ten seconds or so and check the result:
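The original post showed the result as a screenshot; checking it amounts to:

```bash
# After the PV controller rebinds them, both should report STATUS "Bound":
kubectl get pv pvc-d1cb2de6-6a1c-11e9-8124-eeeeeeeeeeee
kubectl get pvc rbd-pv-claim
```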

Now verify with the rbd-nginx-dy.yaml copied over earlier. First, though, because this is a Ceph RBD volume, the pod in the old Kubernetes cluster must release the RBD image before the new cluster can mount it:

On the old Kubernetes cluster:

```
[root@node5 tmp]# kubectl delete -f rbd-nginx-dy.yaml
deployment.extensions "nginx-rbd-dy" deleted
```

On the new Kubernetes cluster:
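This step was also shown as a screenshot in the original; assuming the same file names, the verification looks like this (`<nginx-pod-name>` is a placeholder for whatever pod name the Deployment generates):

```bash
# Recreate the nginx Deployment against the imported PVC on the new cluster:
kubectl create -f rbd-nginx-dy.yaml
kubectl get pod -l name=nginx
# Once the pod is Running, the file written on the old cluster should still be
# there, proving the RBD image (and its data) followed the PV/PVC binding:
kubectl exec -it <nginx-pod-name> cat /usr/share/nginx/html/ygqygq2.html   # expected output: ygqygq2
```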

3. Summary

The experiment above used a ReadWriteOnce (RWO) PVC. Now imagine a ReadWriteMany (RWX) volume shared by multiple Kubernetes clusters at once; in such scenarios this technique could be even more useful.

When operating Kubernetes, the PVs, the PVCs, the underlying storage, and the binding relationships between them are critically important, so it is worth backing them up routinely. With such backups, even if the Kubernetes etcd data is corrupted, you can still recover and migrate the cluster's persistent data.
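For example, a minimal daily dump of those definitions could be as simple as the sketch below (paths and retention are assumptions; adjust to taste):

```bash
# Dump every PV and PVC definition to dated files; together with the Ceph
# cluster itself, these are enough to rebuild the binding relationships.
kubectl get pv -o yaml > /backup/pv-$(date +%F).yaml
kubectl get pvc --all-namespaces -o yaml > /backup/pvc-$(date +%F).yaml
```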
