Provisioning Volumes with a Ceph RBD StorageClass in a Kubernetes Cluster

Reader contribution · 917 · 2022-10-15



Dynamic Volume Provisioning in Kubernetes with Ceph RBD

1. Environment Overview:

This walkthrough demonstrates using an existing Ceph cluster as the backend for dynamically provisioned persistent volumes (PVs) in Kubernetes. It assumes you already have a working Ceph cluster.

2. Configuration Steps:

1. Install the ceph-common package on all Kubernetes nodes

```shell
# Install ceph-common on every Kubernetes node, masters and workers alike.
yum install -y ceph-common

# With many nodes, Ansible can push the repo file and install the package:
ansible kube-master -m copy -a "src=ceph.repo backup=yes dest=/etc/yum.repos.d"
ansible kube-master -m yum -a "name=ceph-common state=present"
ansible kube-node -m copy -a "src=ceph.repo backup=yes dest=/etc/yum.repos.d"
ansible kube-node -m yum -a "name=ceph-common state=present"
```

2. Create a pool for dynamic volumes: on the Ceph admin node, create a pool named kube

```shell
ceph osd pool create kube 1024

[root@k8sdemo-ceph1 cluster]# ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    3809G     3793G       15899M          0.41
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd         0           0         0         1196G           0
    k8sdemo     1           0         0         1196G           0
    kube        2      72016k         0         1196G          30
[root@k8sdemo-ceph1 cluster]# cd /cluster
# Create a keyring that Kubernetes will use to authenticate against the pool:
ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
[root@k8sdemo-ceph1 cluster]# ls ceph.client.kube.keyring
ceph.client.kube.keyring
```
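The pool above is created with 1024 placement groups. A common rule of thumb (not taken from this article) sizes pg_num at roughly (OSD count × 100) / replica count, rounded up to a power of two. A minimal sketch with illustrative numbers:

```shell
# Rule-of-thumb pg_num sizing; osds and replicas are hypothetical values.
osds=12        # assumed OSD count
replicas=3     # assumed replication factor
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "pg_num=$pg"   # rounds 400 up to the next power of two
```

For 12 OSDs and 3 replicas this suggests 512; the article's choice of 1024 is simply the next size up.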

3. Create a Secret for the Ceph admin key on the Kubernetes cluster

```shell
# Generate the base64-encoded key on one of the Ceph MON nodes, then copy the
# output and paste it as the value of the Secret's key field:
[root@k8sdemo-ceph1 cluster]# ceph auth get-key client.admin | base64
QVFEdkJhZGN6ZW41SFJBQUQ5RzNJSVU0djlJVXRRQzZRZjBnNXc9PQ==

[root@master-01 ceph]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: QVFEdkJhZGN6ZW41SFJBQUQ5RzNJSVU0djlJVXRRQzZRZjBnNXc9PQ==
type: kubernetes.io/rbd

[root@master-01 ceph]# kubectl apply -f ceph-secret.yaml
[root@master-01 ceph]# kubectl describe secrets -n kube-system ceph-secret
Name:         ceph-secret
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>
Type:         kubernetes.io/rbd
Data
====
key:  40 bytes

# Dynamic RBD provisioning in Kubernetes requires this admin secret.
```
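The value pasted into the Secret is nothing more than the raw Ceph key, base64-encoded. A quick sketch of the round trip, using the encoded admin key from the transcript above:

```shell
# B64 is the base64-encoded admin key as shown in the transcript above.
B64='QVFEdkJhZGN6ZW41SFJBQUQ5RzNJSVU0djlJVXRRQzZRZjBnNXc9PQ=='
# Decoding recovers the raw 40-byte Ceph key...
KEY=$(printf '%s' "$B64" | base64 -d)
# ...and re-encoding it reproduces the exact value stored in the Secret.
REENC=$(printf '%s' "$KEY" | base64)
echo "$REENC"
```

This is why `kubectl describe` reports `key: 40 bytes` even though the YAML contains a 56-character string.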

4. Create a Secret for the Ceph user key on the Kubernetes cluster

```shell
# Generate the base64-encoded key on one of the Ceph MON nodes, then copy the
# output and paste it as the value of the Secret's key field:
[root@k8sdemo-ceph1 cluster]# ceph auth get-key client.kube | base64
QVFDTks2ZGNjcEZoQmhBQWs4anVvbmVXZnZUeitvMytPbGZ6OFE9PQ==

[root@master-01 ceph]# cat ceph-user-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: kube-system
data:
  key: QVFDTks2ZGNjcEZoQmhBQWs4anVvbmVXZnZUeitvMytPbGZ6OFE9PQ==
type: kubernetes.io/rbd

[root@master-01 ceph]# kubectl apply -f ceph-user-secret.yaml
[root@master-01 ceph]# kubectl get secrets -n kube-system ceph-user-secret
NAME               TYPE                DATA   AGE
ceph-user-secret   kubernetes.io/rbd   1      3h45m

# Dynamic RBD provisioning in Kubernetes also requires this user secret.
```

5. Create the StorageClass and a dynamic PVC on the Kubernetes cluster

```shell
[root@master-01 ceph]# cat ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  # Addresses and ports of the Ceph MON nodes:
  monitors: 10.83.32.224:6789,10.83.32.225:6789,10.83.32.234:6789
  adminId: admin                     # Ceph client ID able to create images in the pool
  adminSecretName: ceph-secret       # required; the Secret must have type kubernetes.io/rbd
  adminSecretNamespace: kube-system  # namespace of adminSecretName (default: default)
  pool: kube                         # Ceph RBD pool (default is rbd, which is not recommended)
  userId: kube                       # client ID used to map the RBD image (default: same as adminId)
  userSecretName: ceph-user-secret   # Secret for userId; must exist in the same namespace as the PVC
[root@master-01 ceph]# kubectl apply -f ceph-storageclass.yaml
```
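Because the StorageClass is annotated as the default class, a PVC may omit storageClassName and still bind to it. To select this class explicitly (for example, if the cluster already has a different default), a PVC could name it directly; a hypothetical sketch (the claim name here is illustrative, not from the article):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim-explicit    # hypothetical name
  namespace: kube-system
spec:
  storageClassName: dynamic    # explicitly select the StorageClass defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```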

```shell
[root@master-01 ceph]# cat ceph-class.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
[root@master-01 ceph]# kubectl apply -f ceph-class.yaml
```

6. Create a Pod on the Kubernetes cluster that uses the dynamically bound Ceph RBD PVC

The volume name must be identical in the containers and volumes sections.

```shell
[root@master-01 ceph]# cat ceph-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
  namespace: kube-system
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim

[root@master-01 ceph]# kubectl apply -f ceph-pod.yaml
[root@master-01 ceph]# kubectl get pods -n kube-system
NAME        READY   STATUS    RESTARTS   AGE
ceph-pod1   1/1     Running   0          3h21m

# Enter the container and inspect the mount:
[root@master-01 ceph]# kubectl exec -it -n kube-system ceph-pod1 -- /bin/sh
/ # df -h | grep busybox
/dev/rbd0     1.9G      6.0M      1.9G   0% /usr/share/busybox
```
