2022-10-29
Installing Kubernetes 1.13.10 with kubeadm
Environment preparation: five virtual machines created on Proxmox from a CentOS 7.2 1511 ISO and then upgraded in place to CentOS 7.7 1908.

Master (HA VIP: 192.168.1.100)
- hostname: k8s-master-01, host IP: 192.168.1.101
- hostname: k8s-master-02, host IP: 192.168.1.102

Node
- hostname: k8s-node-01, host IP: 192.168.1.103
- hostname: k8s-node-02, host IP: 192.168.1.104

Repository
- hostname: k8s-harbor, host IP: 192.168.1.105

Software versions:
- Kubernetes 1.13.10
- Etcd 3.3.15
- Flannel 0.11.0
Environment Initialization

Kubernetes has some strict requirements on its runtime environment: time must be synchronized across hosts, hostnames must resolve, the firewall must be off, swap must be disabled, and so on.

Hostname Resolution

Communication between the hosts of a distributed system is usually based on hostnames, which gives each host a stable entry point even when its IP address may change, so a dedicated DNS service normally resolves the names of all nodes. Since this is only a test cluster, name resolution is done with the hosts file to keep things simple. I already run a local DNS server, so I skipped this step myself.
Edit /etc/hosts

Log in to each virtual machine and edit /etc/hosts:

```bash
vim /etc/hosts

192.168.1.100 master.k8s.io   k8s-vip
192.168.1.101 master01.k8s.io k8s-master-01
192.168.1.102 master02.k8s.io k8s-master-02
192.168.1.103 node01.k8s.io   k8s-node-01
192.168.1.104 node02.k8s.io   k8s-node-02
192.168.1.105 harbor.k8s.io   k8s-harbor
```
Set the hostname

Log in to each virtual machine and set its hostname:

```bash
# On 192.168.1.101
hostnamectl set-hostname k8s-master-01
# On 192.168.1.102
hostnamectl set-hostname k8s-master-02
# On 192.168.1.103
hostnamectl set-hostname k8s-node-01
# On 192.168.1.104
hostnamectl set-hostname k8s-node-02
```
Time Synchronization

Synchronize the clocks of all virtual machines:

```bash
yum -y install ntp
systemctl start ntpd
ntpdate cn.pool.ntp.org
```
Disable the Firewall

Stop and disable firewalld:

```bash
systemctl stop firewalld
systemctl disable firewalld
```

My ISO is the minimal image, which does not install firewalld by default; if yours is too, you can skip this step.
Disable SELinux

Edit the /etc/selinux/config file (also reachable via the /etc/sysconfig/selinux symlink):

```bash
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Check the SELinux status
getenforce
```

If it still reports Permissive, a reboot will apply the change.
Disable Swap

kubeadm checks in advance whether swap is disabled on the host, so swap has to be turned off.

```bash
# Turn off swap
swapoff -a && sysctl -w vm.swappiness=0
# Comment out the swap entry in fstab
sed -i 's/.*swap.*/#&/' /etc/fstab
cat /etc/fstab
```
Set Kernel Parameters

```bash
# Enable IP forwarding and have bridged traffic processed by iptables
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load the bridge netfilter module and apply the settings
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```
Resource Limits

/etc/security/limits.conf is the Linux resource-limit configuration file used to restrict how many system resources a user may consume. Since CentOS 7.3, however, the limits for ordinary users are overridden by /etc/security/limits.d/20-nproc.conf, so that file needs to be adjusted as well.

```bash
## 1. /etc/security/limits.conf
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

## 2. /etc/security/limits.d/20-nproc.conf
echo "* soft nofile 65536" >> /etc/security/limits.d/20-nproc.conf
echo "* hard nofile 65536" >> /etc/security/limits.d/20-nproc.conf
echo "* soft nproc 65536" >> /etc/security/limits.d/20-nproc.conf
echo "* hard nproc 65536" >> /etc/security/limits.d/20-nproc.conf
echo "* soft memlock unlimited" >> /etc/security/limits.d/20-nproc.conf
echo "* hard memlock unlimited" >> /etc/security/limits.d/20-nproc.conf
```
Install Dependencies & Tools

```bash
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl psmisc
```
Install the High-Availability Component

About Keepalived: it is a service used in cluster management to keep services highly available and to avoid outages caused by a single point of failure. Here Keepalived:
- provides the virtual IP (192.168.1.100) in front of HAProxy;
- provides MASTER/BACKUP failover between the two HAProxy instances, reducing the impact on the service when one of them fails.
Install
yum install -y keepalived
Configure

```
## master-01: /etc/keepalived/keepalived.conf
! Configuration File for keepalived

# Configures the notification targets used when a failure occurs and the machine identifier.
global_defs {
    # A string identifying this node, usually the hostname (but it does not have to be).
    # It is used in mail notifications when a failure occurs.
    router_id LVS_k8s
}

# Health check: when the check fails, the priority of the vrrp_instance is reduced by the given weight.
vrrp_script check_haproxy {
    script "killall -0 haproxy"   # check process status
    interval 3
    weight -2
    fall 10
    rise 2
}

# vrrp_instance defines the VIP exposed to clients and its related properties.
vrrp_instance VI_1 {
    state MASTER              # this node is MASTER, the other one BACKUP
    interface eth0            # local Ethernet interface
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 640205b7b0fc66c1ea91c463fac6334d1ea91c463fac6334d
    }
    virtual_ipaddress {
        192.168.1.100/22      # VIP
    }
    track_script {
        check_haproxy
    }
}
```

In the configuration on the current node, state is set to MASTER; on the master-02 node set it to BACKUP.
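For reference, a minimal sketch of the fields that differ on master-02 (the priority value below is an assumption; it only needs to be lower than master-01's 250, and everything else stays the same):

```
## master-02: /etc/keepalived/keepalived.conf (only the differing fields)
vrrp_instance VI_1 {
    state BACKUP            # master-01 is MASTER
    interface eth0
    virtual_router_id 51    # must match master-01
    priority 200            # assumed value, lower than master-01 (250)
    ...
}
```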
Start

```bash
# Enable at boot
systemctl enable keepalived
# Start keepalived
systemctl start keepalived
# Check the status
systemctl status keepalived
# Check the current IP addresses; on the MASTER node the VIP 192.168.1.100 should be listed on eth0
ip a
```

After you restart the keepalived service on the current node, the virtual IP moves: one of the nodes whose state is BACKUP is elected as the new MASTER, and if you inspect the network interfaces on that node you will see the virtual IP there.
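To exercise that failover, a quick test looks roughly like this (a sketch; run the restart on whichever node currently holds the VIP):

```bash
# On the node that currently holds 192.168.1.100
systemctl restart keepalived

# On the other master, the VIP should appear within a few seconds
ip a show eth0 | grep 192.168.1.100
```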
Install the Load Balancer

HAProxy provides a reverse proxy for the apiserver and forwards all requests to the master nodes in round-robin fashion. Compared with using Keepalived alone in active/standby mode, where a single master node carries all the traffic, this makes much better use of the available resources.
Install
yum install -y haproxy
Configure
```bash
vim /etc/haproxy/haproxy.cfg
```

```
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master01.k8s.io  192.168.1.101:6443 check
    server  master02.k8s.io  192.168.1.102:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:1Password
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
```
The HAProxy configuration on the master-02 node is identical.
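Before starting the service, HAProxy can validate the configuration file for you (an optional sanity check):

```bash
# -c: check mode, -f: configuration file to validate
haproxy -c -f /etc/haproxy/haproxy.cfg
```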
Start and Verify

```bash
# Enable at boot
systemctl enable haproxy
# Start haproxy
systemctl start haproxy
# Check the status
systemctl status haproxy
# Verify the listening ports
netstat -anplt | grep -E "16443|1080"
```
Install Docker (Master/Node)

Configure the Docker yum repository

```bash
# Alibaba mirror
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Official repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
```
Install Docker

```bash
# List the available docker-ce versions
yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ap.stykers.moe
 * epel: mirrors.njupt.edu.cn
 * extras: mirror.jdcloud.com
 * updates: mirror.jdcloud.com
Installed Packages
docker-ce.x86_64        18.06.3.ce-3.el7            @docker-ce-stable
Available Packages
docker-ce.x86_64        3:19.03.2-3.el7             docker-ce-stable
docker-ce.x86_64        3:19.03.1-3.el7             docker-ce-stable
docker-ce.x86_64        3:19.03.0-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.9-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.8-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.7-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.6-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.5-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.4-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.3-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.2-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.1-3.el7             docker-ce-stable
docker-ce.x86_64        3:18.09.0-3.el7             docker-ce-stable
docker-ce.x86_64        18.06.3.ce-3.el7            docker-ce-stable
docker-ce.x86_64        18.06.2.ce-3.el7            docker-ce-stable
docker-ce.x86_64        18.06.1.ce-3.el7            docker-ce-stable
docker-ce.x86_64        18.06.0.ce-3.el7            docker-ce-stable
docker-ce.x86_64        18.03.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        18.03.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.12.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.12.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.09.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.09.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.2.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.3.ce-1.el7            docker-ce-stable
docker-ce.x86_64        17.03.2.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.0.ce-1.el7.centos     docker-ce-stable

# Install the pinned version docker-ce-18.06.3.ce-3.el7
sudo yum install docker-ce-18.06.3.ce-3.el7 -y
```
Start

```bash
systemctl enable docker
systemctl start docker
```
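Optionally, a registry mirror and log rotation can be configured in /etc/docker/daemon.json. This is not required by the rest of this guide, and the mirror URL below is only an example; substitute your own:

```bash
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" }
}
EOF
systemctl restart docker
```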
Check the iptables Rules
```bash
iptables -nvL

Chain INPUT (policy ACCEPT 82 packets, 24567 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0  docker0  0.0.0.0/0            0.0.0.0/0
```
Note: starting with version 1.13, Docker changed its default iptables rules and set the FORWARD chain of the filter table to DROP, which breaks cross-node Pod communication in a Kubernetes cluster. With Docker 18.06 installed at the pinned version above, the necessary ACCEPT rules are added by default.
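If the Docker version you end up with does reset the FORWARD policy to DROP, a common workaround is to switch it back to ACCEPT after Docker starts. This is a sketch (the drop-in file name is arbitrary) and is not needed with the 18.06.3 install above:

```bash
# One-off fix
iptables -P FORWARD ACCEPT

# Persist it across Docker restarts with a systemd drop-in
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/10-forward-accept.conf
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload
systemctl restart docker
```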
Install kubeadm from a yum Repository

Configure a usable domestic yum repository:

```bash
vim /etc/yum.repos.d/kubernetes.repo
```

```
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```

Install kubelet first. If you install kubeadm first, it pulls in the latest kubelet (1.16.0-0) as a dependency.

```bash
yum list kubelet --showduplicates | sort -r
yum install -y kubelet-1.13.10-0
```
Installing kubeadm pulls in kubectl by default, so there is no need to install kubectl separately.

```bash
yum list kubeadm --showduplicates | sort -r
yum install -y kubeadm-1.13.10-0
```
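To keep a later `yum update` from silently upgrading these pinned packages, you can optionally lock the versions (assumes the yum-plugin-versionlock package; a sketch):

```bash
yum install -y yum-plugin-versionlock
yum versionlock add kubelet kubeadm kubectl
```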
Reboot, then continue with the remaining steps.
reboot
Start kubelet

```bash
systemctl enable kubelet
systemctl start kubelet
```

Check the status

Checking the status shows kubelet in a failed state. This is because the master node has not been initialized yet; kubelet retries every 10 seconds by default and will recover once the master node has been initialized.

```bash
systemctl status kubelet
```
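If you want to confirm that the failure is only the expected pre-init restart loop, the kubelet logs are the quickest way to check (a sketch):

```bash
# Show the last 50 kubelet log lines
journalctl -u kubelet --no-pager -n 50
```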
Initialize the First Kubernetes Master

Check which master node currently holds the VIP:

```bash
# The VIP 192.168.1.100 should show up on eth0 of the current MASTER node
ip a
```
kubeadm Configuration File

Create the kubeadm configuration file kubeadm-config.yaml:
```yaml
apiServer:
  certSANs:
    - k8s-master-01
    - k8s-master-02
    - master.k8s.io
    - 192.168.1.100
    - 192.168.1.101
    - 192.168.1.102
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.13.10
networking:
  dnsDomain: cluster.local
  podSubnet: 10.20.0.0/16
  serviceSubnet: 10.10.0.0/16
scheduler: {}
```
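Optionally, the control-plane images referenced by this configuration can be pulled ahead of time, which shortens the init step (kubeadm has supported this since v1.11; a sketch):

```bash
kubeadm config images pull --config kubeadm-config.yaml
```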
Configuration notes:
- imageRepository: registry.aliyuncs.com/google_containers (Alibaba mirror)
- podSubnet: 10.20.0.0/16 (Pod address pool)
- serviceSubnet: 10.10.0.0/16 (Service address pool)

Initialize the first master node
```bash
kubeadm init --config kubeadm-config.yaml

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join master.k8s.io:16443 --token dm3cw1.kw4hq84ie1376hji --discovery-token-ca-cert-hash sha256:f079b624773145ba714b56e177f52143f90f75a1dcebabda6538a49e224d4009
```
Configure the kubectl Environment

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Check the component status (the Unknown values here can be ignored for now):
```bash
kubectl get cs
NAME                 AGE
scheduler            <unknown>
```
Deploy the Flannel Network

Create the kube-flannel.yaml file:
```bash
vim kube-flannel.yaml
```

```yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.20.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
```
The "Network": "10.20.0.0/16" value here must match podSubnet: 10.20.0.0/16 in kubeadm-config.yaml.

Create the flannel role and pods:
kubectl apply -f kube-flannel.yaml
Creation takes a while; check again after a few minutes.
```bash
kubectl get pods --namespace=kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-5774ff9fd9-wfr96              1/1     Running   0          2d1h
kubernetes-dashboard-f6bd9778-vzj5p   1/1     Running   0          2d2h
```
Join the Second Master to Form an HA Control Plane

Copy the keys to the other master node

Copy the certificate files from master-01 to master-02 (additional control-plane nodes need them; plain worker nodes do not):
```bash
ssh root@master02.k8s.io mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@master02.k8s.io:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master02.k8s.io:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@master02.k8s.io:/etc/kubernetes/pki/etcd
```
The token saved on the master-01 node:
07d9f11925fc6ec6225f0bb7d39ee769ef8f433bc96f8af4dcfb6dc0949ee3f1
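If the bootstrap token has expired (they are valid for 24 hours by default) or the join command was not recorded, it can be regenerated on master-01 with standard kubeadm/openssl commands (a sketch):

```bash
# Print a fresh join command with a new token
kubeadm token create --print-join-command

# Recompute the discovery-token-ca-cert-hash if needed
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```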
Join the master node

On master-02, joining as an additional master requires the extra --experimental-control-plane flag:
kubeadm join master.k8s.io:16443 --token isk0e0.xtpqiiig0nfrspai --discovery-token-ca-cert-hash sha256:24839e55086e0fd8f410ee901be52b2d737c2a550048fc9ab6931fa1be686f55 --experimental-control-plane
Run the following on node-01 and node-02:
kubeadm join master.k8s.io:16443 --token isk0e0.xtpqiiig0nfrspai --discovery-token-ca-cert-hash sha256:24839e55086e0fd8f410ee901be52b2d737c2a550048fc9ab6931fa1be686f55
If a node has trouble joining the cluster, run the following on the faulty node:
```bash
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```
Then run the join command again:
kubeadm join master.k8s.io:16443 --token isk0e0.xtpqiiig0nfrspai --discovery-token-ca-cert-hash sha256:24839e55086e0fd8f410ee901be52b2d737c2a550048fc9ab6931fa1be686f55
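Once all machines have joined, verify the cluster from master-01; both masters and both workers should report Ready after the Flannel pods are running (node names follow the hostnames set earlier):

```bash
kubectl get nodes -o wide
```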
Deploy the Dashboard

Create the Dashboard configuration file:
```bash
vim dashboard.yaml
```

```yaml
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
# 1. The image registry has been changed; point it at your own registry if needed.
# 2. The image pull policy has been changed to imagePullPolicy: IfNotPresent.
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
# nodePort added and the default type ClusterIP changed to NodePort so the Dashboard
# can be reached from outside; without this it is only accessible inside the cluster.
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30007
  selector:
    k8s-app: kubernetes-dashboard
```
Create the Dashboard
kubectl create -f dashboard.yaml
Create a ServiceAccount for the Dashboard and bind it to the admin (cluster-admin) role
```bash
vim dashboard-user-role.yaml
```

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
```
Apply the Dashboard user and role binding:
kubectl create -f dashboard-user-role.yaml
Get the login token
kubectl describe secret/$(kubectl get secret -n kube-system |grep admin|awk '{print $1}') -n kube-system
Open https://192.168.1.100:30007 in a browser and log in with the token obtained above.
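If the page does not load, first confirm that the Service really is exposed on NodePort 30007 (a quick check):

```bash
kubectl -n kube-system get svc kubernetes-dashboard
```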
This completes the Kubernetes 1.13.10 installation.