k8s Deployment - 46 - Kubernetes Shared Storage (Part 2)


3. GlusterFS Environment Preparation

Requirements:

1. GlusterFS needs three nodes. (I only have two nodes here, so I configure just two. The setup will start, but problems show up later and this article cannot be carried through to the end; you really need three or more nodes. I am only doing a demonstration here. If you have enough system resources but not enough worker nodes, you can add more nodes to the cluster; an earlier article in this series covers how.)

2. Each node must have a raw (unpartitioned, unused) disk.

3. All three nodes must be members of the k8s cluster.

I have no spare servers left, so I simply add a 1 GB raw disk to each of the two nodes we have been using. After adding them, the state looks like this:

[root@node2 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d1bb2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM

Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8207355b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   83  Linux

Disk /dev/mapper/centos-root: 34.0 GB, 33978056704 bytes, 66363392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@node2 ~]#

You can see that /dev/sdc is an empty disk; that is the disk we will work with.
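If you want to double-check that the new disk really is blank before handing it over to GlusterFS, a minimal sketch (run on each storage node; /dev/sdc is the device name in my environment, adjust it to yours):

[root@node2 ~]# lsblk /dev/sdc          # should show a single disk with no partitions
[root@node2 ~]# blkid /dev/sdc          # no output means no filesystem signature on the disk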

Next, let's check the Gluster official site to see what else is needed.

It turns out we also need Heketi. All right, let's get started.

4. GlusterFS Installation

First, install the GlusterFS client on the worker nodes, which in my case are node2 and node3; run the following command on both node2 and node3:

[root@node2 ~]# yum -y install glusterfs glusterfs-fuse
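Optionally, you can sanity-check the client bits on each node afterwards; this is just a quick verification sketch, not a step from the official docs:

[root@node2 ~]# glusterfs --version | head -1        # confirm the client version that was installed
[root@node2 ~]# modprobe fuse && lsmod | grep fuse   # glusterfs-fuse mounts rely on the fuse kernel module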

Next, check whether the apiserver allows privileged containers; the key flag is --allow-privileged:

[root@node1 ~]# ps -ef | grep apiserver | grep allow-privileged
root  777  1  6 09:32 ?  00:04:06 /usr/local/bin/kube-apiserver --advertise-address=192.168.112.130 --allow-privileged=true --apiserver-count=2 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/log/audit.log --authorization-mode=Node,RBAC --bind-address=0.0.0.0 --client-ca-file=/etc/kubernetes/ssl/ca.pem --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --etcd-servers=https://192.168.112.130:2379,https://192.168.112.131:2379,https://192.168.112.132:2379 --event-ttl=1h --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem --kubelet-client-certificate=/etc/kubernetes/ssl/kubernetes.pem --kubelet-client-key=/etc/kubernetes/ssl/kubernetes-key.pem --service-account-issuer=api --service-account-key-file=/etc/kubernetes/ssl/service-account.pem --service-account-signing-key-file=/etc/kubernetes/ssl/service-account-key.pem --api-audiences=api,vault,factors --service-cluster-ip-range=10.233.0.0/16 --service-node-port-range=30000-32767 --proxy-client-cert-file=/etc/kubernetes/ssl/proxy-client.pem --proxy-client-key-file=/etc/kubernetes/ssl/proxy-client-key.pem --runtime-config=api/all=true --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --v=1 --feature-gates=RemoveSelfLink=false
[root@node1 ~]#

After checking the apiserver, check whether the kubelet also allows privileged mode; if it does not, add the flag and restart the kubelet service:

[root@node2 ~]# cat /etc/systemd/system/kubelet.service | grep allow-privileged
  --allow-privileged=true \
[root@node2 ~]#
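If the flag is missing in your environment, a minimal sketch of adding it, assuming the kubelet arguments live directly in the systemd unit file as they do in my setup (the unit path may differ depending on how you installed the kubelet):

[root@node2 ~]# vim /etc/systemd/system/kubelet.service    # append --allow-privileged=true \ to the ExecStart arguments
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart kubelet
[root@node2 ~]# systemctl status kubelet | grep Active     # confirm the kubelet came back up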

Now we need to prepare a DaemonSet; this DaemonSet provides the GlusterFS server side for us:

[root@node1 ~]# cd namespace/
[root@node1 namespace]# mkdir glusterfs
[root@node1 namespace]# cd glusterfs/
[root@node1 glusterfs]#
[root@node1 glusterfs]# vim glusterfs-daemonset.yaml
---
kind: DaemonSet
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs: pod
      glusterfs-node: pod
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # alternative for dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/usr/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/usr/lib/modules"
[root@node1 glusterfs]#

Notice that the DaemonSet above contains a nodeSelector, as shown here:

spec:
  nodeSelector:
    storagenode: glusterfs

So we need to apply that label to the nodes that should run GlusterFS; in my case that means node2 and node3:

[root@node1 glusterfs]# kubectl get node
NAME    STATUS     ROLES    AGE   VERSION
node2   NotReady   <none>   36d   v1.20.2
node3   NotReady   <none>   36d   v1.20.2
[root@node1 glusterfs]# kubectl label node node2 storagenode=glusterfs
node/node2 labeled
[root@node1 glusterfs]# kubectl label node node3 storagenode=glusterfs
node/node3 labeled
[root@node1 glusterfs]#
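To confirm the labels landed where expected, you can filter nodes by the label (a quick check; both node2 and node3 should be listed):

[root@node1 glusterfs]# kubectl get node -l storagenode=glusterfs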

Then apply the DaemonSet:

[root@node1 glusterfs]# kubectl apply -f glusterfs-daemonset.yaml
daemonset.apps/glusterfs created
[root@node1 glusterfs]# kubectl get pod -o wide
NAME              READY   STATUS    RESTARTS   AGE    IP                NODE    NOMINATED NODE   READINESS GATES
glusterfs-ld4vr   1/1     Running   0          155m   192.168.112.131   node2   <none>           <none>
glusterfs-mz8rt   1/1     Running   0          155m   192.168.112.132   node3   <none>           <none>
[root@node1 glusterfs]#

Now that the server side is up, the raw disks still have not been initialized. Disk initialization is handled by Heketi; let's see how to configure it:

[root@node1 glusterfs]# vim heketi-security.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: heketi-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: heketi-clusterrole
subjects:
- kind: ServiceAccount
  name: heketi-service-account
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: heketi-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/status
  - pods/exec
  verbs:
  - get
  - list
  - watch
  - create
[root@node1 glusterfs]#
[root@node1 glusterfs]# vim heketi-deployment.yaml
kind: Service
apiVersion: v1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  selector:
    name: heketi
  ports:
  - name: heketi
    port: 80
    targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "30001": default/heketi:80
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: heketi
  labels:
    glusterfs: heketi-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  selector:
    matchLabels:
      name: heketi
      glusterfs: heketi-pod
  replicas: 1
  template:
    metadata:
      name: heketi
      labels:
        name: heketi
        glusterfs: heketi-pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: heketi/heketi:dev
        imagePullPolicy: Always
        name: heketi
        env:
        - name: HEKETI_EXECUTOR
          value: "kubernetes"
        - name: HEKETI_DB_PATH
          value: "/var/lib/heketi/heketi.db"
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: "14"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        - name: HEKETI_ADMIN_KEY
          value: "yunweijia123"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: /var/lib/heketi
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: /hello
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: /hello
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"
[root@node1 glusterfs]#

Note that we configured an HEKETI_ADMIN_KEY here; keep it in mind, because it will be needed later during initialization.

Then apply the manifests to bring them into effect:

[root@node1 glusterfs]# kubectl apply -f heketi-security.yaml
clusterrolebinding.rbac.authorization.k8s.io/heketi-clusterrolebinding created
serviceaccount/heketi-service-account created
clusterrole.rbac.authorization.k8s.io/heketi-clusterrole created
[root@node1 glusterfs]# kubectl apply -f heketi-deployment.yaml
service/heketi unchanged
configmap/tcp-services unchanged
deployment.apps/heketi created
[root@node1 glusterfs]#
[root@node1 glusterfs]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
glusterfs-ld4vr          1/1     Running   0          3h27m   192.168.112.131   node2   <none>           <none>
glusterfs-mz8rt          1/1     Running   0          3h27m   192.168.112.132   node3   <none>           <none>
heketi-7d7bc4758-7m6d6   1/1     Running   0          3m49s   10.200.104.47     node2   <none>           <none>
[root@node1 glusterfs]#
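Before loading the topology, it can be worth confirming that Heketi is reachable through the TCP port we mapped in the ingress-nginx tcp-services ConfigMap (30001) — this is the same URL we will later use as resturl in the StorageClass. A small sketch, assuming the node IP 192.168.112.131 from my environment:

# /hello is the same endpoint used by the readiness/liveness probes; expect a short greeting from Heketi.
[root@node1 glusterfs]# curl http://192.168.112.131:30001/hello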

Next, we initialize the disks.

From the output above, my Heketi pod landed on node2, so let's go there and take a look:

[root@node2 ~]# crictl ps | grep heket
d3e18b2c2a3d8   3ffd29f1e74fe   About a minute ago   Running   heketi   0   a2bbabd7df725
[root@node2 ~]#
[root@node2 ~]# crictl exec -it d3e18b2c2a3d8 bash
[root@heketi-7d7bc4758-7m6d6 /]# export HEKETI_CLI_SERVER=http://localhost:8080
[root@heketi-7d7bc4758-7m6d6 /]# vim yunweijia.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "node2"
              ],
              "storage": [
                "192.168.112.131"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdc",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "node3"
              ],
              "storage": [
                "192.168.112.132"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdc",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
[root@heketi-7d7bc4758-7m6d6 /]#
[root@heketi-7d7bc4758-7m6d6 /]# heketi-cli --user admin --secret yunweijia123 topology load --json yunweijia.json
Creating cluster ... ID: 83aaecd7b0487b392926c6049a9c1bec
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node node2 ... ID: 1c01bcf846cf182e1a5ef859d2ebc20c
        Adding device /dev/sdc ... OK
    Creating node node3 ... ID: bef330c08e832f8f79f51ec0b1464e2d
        Adding device /dev/sdc ... OK
[root@heketi-7d7bc4758-7m6d6 /]#
# Use the following command to view the current cluster information
[root@heketi-7d7bc4758-7m6d6 /]# heketi-cli --user admin --secret yunweijia123 topology info

Cluster Id: 83aaecd7b0487b392926c6049a9c1bec

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 1c01bcf846cf182e1a5ef859d2ebc20c
        State: online
        Cluster Id: 83aaecd7b0487b392926c6049a9c1bec
        Zone: 1
        Management Hostnames: node2
        Storage Hostnames: 192.168.112.131
        Devices:
                Id:ff74c91c991178cf3bc621b476bc5b6f   State:online   Size (GiB):0   Used (GiB):0   Free (GiB):0
                        Known Paths: /dev/sdc
                        Bricks:

        Node Id: bef330c08e832f8f79f51ec0b1464e2d
        State: online
        Cluster Id: 83aaecd7b0487b392926c6049a9c1bec
        Zone: 1
        Management Hostnames: node3
        Storage Hostnames: 192.168.112.132
        Devices:
                Id:c2486d413e563a44dd1cb6c7e3e5338f   State:online   Size (GiB):0   Used (GiB):0   Free (GiB):0
                        Known Paths: /dev/sdc
                        Bricks:
[root@heketi-7d7bc4758-7m6d6 /]#
[root@heketi-7d7bc4758-7m6d6 /]# exit
exit
[root@node2 ~]#

Then we can go to node2 and node3 and check whether the GlusterFS services have started:

# node2
[root@node2 ~]# ps -ef | grep gluster
root      41841  41727  0 14:30 ?      00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      41901  41727  0 14:30 ?      00:00:00 /usr/sbin/gluster-blockd --glfs-lru-count 15 --log-level INFO
root     118359  38212  0 15:58 pts/0  00:00:00 grep --color=auto gluster
[root@node2 ~]#

# node3
[root@node3 ~]# ps -ef | grep gluster
root      26354  26222  0 14:37 ?      00:00:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      26462  26222  0 14:37 ?      00:00:00 /usr/sbin/gluster-blockd --glfs-lru-count 15 --log-level INFO
root      79575  23200  0 15:58 pts/0  00:00:00 grep --color=auto gluster
[root@node3 ~]#

Alternatively, you can exec directly into the gluster container and check the cluster status:

[root@node2 ~]# crictl ps | grep gluster
4e58c440a4e8b   b2919ab8d731c   About an hour ago   Running   glusterfs   0   d28a382b1ca39
[root@node2 ~]#
[root@node2 ~]# crictl exec -it 4e58c440a4e8b bash
[root@node2 /]#
[root@node2 /]# gluster peer status
Number of Peers: 1

Hostname: 192.168.112.132
Uuid: 7f4455b5-a196-4dd5-b51b-16f10c1159ee
State: Peer in Cluster (Connected)
[root@node2 /]#
[root@node2 /]# exit
exit
[root@node2 ~]#

With that, the underlying storage service is fully set up.

5. Preparing the PV, PVC, and Pod

Next we configure the StorageClass, which will dynamically provision PVs for us:

[root@node1 glusterfs]# vim glusterfs-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.112.131:30001"
  restauthenabled: "false"
[root@node1 glusterfs]# kubectl apply -f glusterfs-storage-class.yaml
storageclass.storage.k8s.io/glusterfs-storage-class created
[root@node1 glusterfs]#
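Note that this StorageClass sets restauthenabled to "false" even though we gave Heketi an HEKETI_ADMIN_KEY. If you want the provisioner to authenticate against Heketi, a hedged alternative is to store the admin key in a Secret and reference it from the StorageClass; the names heketi-secret and glusterfs-storage-class-auth below are my own choices, and the key value is just the base64 of "yunweijia123":

apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
type: kubernetes.io/glusterfs
data:
  # base64 of "yunweijia123"
  key: eXVud2VpamlhMTIz
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage-class-auth
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.112.131:30001"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"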

With the StorageClass in place, we create the PVC:

[root@node1 glusterfs]# vim glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-pvc
spec:
  storageClassName: glusterfs-storage-class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
[root@node1 glusterfs]# kubectl apply -f glusterfs-pvc.yaml
persistentvolumeclaim/glusterfs-pvc created
[root@node1 glusterfs]#

Then check its status:

[root@node1 glusterfs]# kubectl get pvc
NAME            STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS              AGE
glusterfs-pvc   Pending                                      glusterfs-storage-class   7m47s
[root@node1 glusterfs]#

This status is wrong: the PVC stays stuck in Pending, and at this point I cannot proceed any further. The root cause is that, due to limited system resources, I only configured two GlusterFS nodes instead of the required three, so provisioning fails.
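If you hit the same Pending state, the provisioner's error message is recorded in the PVC events; a quick way to see exactly why provisioning failed (in my case, the default replica-3 volume cannot be created on a two-node cluster):

# The Events section at the bottom of the output shows the provisioning failure reason.
[root@node1 glusterfs]# kubectl describe pvc glusterfs-pvc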

Once the PVC has been created successfully, we can create a pod that uses it:

[root@node1 glusterfs]# vim web-deploy.yaml
#deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: web-deploy
  replicas: 2
  template:
    metadata:
      labels:
        app: web-deploy
    spec:
      containers:
      - name: web-deploy
        image: registry.cn-beijing.aliyuncs.com/yunweijia0909/springboot-web:v1
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: gluster-volume
          mountPath: "/yunweijia-data"
          readOnly: false
      volumes:
      - name: gluster-volume
        persistentVolumeClaim:
          claimName: glusterfs-pvc
[root@node1 glusterfs]#
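On a healthy three-node GlusterFS cluster, once this Deployment is applied you could verify that the volume is actually mounted inside the pods; a small sketch (the pod name below is hypothetical, substitute one from your own output):

[root@node1 glusterfs]# kubectl apply -f web-deploy.yaml
[root@node1 glusterfs]# kubectl get pod -l app=web-deploy
# Replace the pod name with one listed above; the GlusterFS volume should appear at /yunweijia-data.
[root@node1 glusterfs]# kubectl exec -it web-deploy-xxxxxxxxx-xxxxx -- df -h /yunweijia-data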

That concludes this article.
