2023-02-11
etcd Cluster Management and Maintenance
Official website:
Environment:
CentOS 7
etcd 3.0.4
Three-node cluster example:
etcd1:192.168.8.101
etcd2:192.168.8.102
etcd3:192.168.8.103
1. Install etcd (on all nodes)
tar xzvf etcd-v3.0.4-linux-amd64.tar.gz
cp -af etcd-v3.0.4-linux-amd64/{etcd,etcdctl} /usr/local/bin
chmod +x /usr/local/bin/{etcd,etcdctl}
2. Configure the etcd cluster
Clustering guide: etcd-v3.0.4-linux-amd64/Documentation/op-guide/clustering.md
This guide will cover the following mechanisms for bootstrapping an etcd cluster:
* [Static](#static)
* [etcd Discovery](#etcd-discovery)
* [DNS Discovery](#dns-discovery)
Three discovery mechanisms are currently supported: Static suits hosts with fixed IPs, etcd Discovery suits DHCP environments, and DNS Discovery relies on DNS SRV records.
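Of the three, the etcd Discovery flow is the least obvious. The sketch below shows its shape as a dry run, assuming the public discovery.etcd.io service; the `run` helper only echoes each command (nothing is executed), and `<token>` is a placeholder for the value the first call would return.

```shell
#!/bin/sh
# Dry-run sketch of etcd Discovery bootstrap: `run` echoes each command
# instead of executing it, so nothing here contacts a real service.
run() { printf '+ %s\n' "$*"; }

# One-time: request a discovery URL sized to the planned cluster.
run curl -s 'https://discovery.etcd.io/new?size=3'

# Each node then starts with the same --discovery URL instead of
# --initial-cluster; members register themselves and find peers there.
# (<token> stands in for the value returned by the call above.)
run etcd --name etcd1 --data-dir /opt/etcd --discovery 'https://discovery.etcd.io/<token>'
```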
Static
Tip: etcd supports SSL/TLS; see the official documentation for details.
Node 1 (etcd1, 192.168.8.101):
etcd --name etcd1 --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://192.168.8.101:2380 \
  --listen-peer-urls http://192.168.8.101:2380 \
  --listen-client-urls http://192.168.8.101:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.101:2379 \
  --initial-cluster etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster-state new
Node 2 (etcd2, 192.168.8.102):
etcd --name etcd2 --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://192.168.8.102:2380 \
  --listen-peer-urls http://192.168.8.102:2380 \
  --listen-client-urls http://192.168.8.102:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.102:2379 \
  --initial-cluster etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster-state new
Node 3 (etcd3, 192.168.8.103):
etcd --name etcd3 --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://192.168.8.103:2380 \
  --listen-peer-urls http://192.168.8.103:2380 \
  --listen-client-urls http://192.168.8.103:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.103:2379 \
  --initial-cluster etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster-state new
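All three nodes must agree on the same --initial-cluster value. A small helper (hypothetical, not part of etcd) can build that string once from the name=IP pairs above, so the three invocations stay consistent:

```shell
#!/bin/sh
# Hypothetical helper: build the --initial-cluster value for static
# bootstrap from name=IP pairs, writing the peer-URL list only once.
build_initial_cluster() {
  out=""
  for pair in "$@"; do
    name=${pair%%=*}    # text before the first '='
    ip=${pair#*=}       # text after the first '='
    out="${out:+$out,}${name}=http://${ip}:2380"
  done
  printf '%s\n' "$out"
}

build_initial_cluster etcd1=192.168.8.101 etcd2=192.168.8.102 etcd3=192.168.8.103
# → etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380
```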
Port 2379 serves client requests and port 2380 is used for peer (cluster) communication. The data directory can be set with --data-dir; if omitted, it defaults to the current working directory.
[root@node3 ~]# netstat -tunlp|grep etcd
tcp 0 0 192.168.8.103:2379 0.0.0.0:* LISTEN 11103/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 11103/etcd
tcp 0 0 192.168.8.103:2380 0.0.0.0:* LISTEN 11103/etcd
[root@node3 ~]# ls
etcd3.etcd
[root@node3 ~]# ls etcd3.etcd/
fixtures/ member/
[root@node3 ~]# ls etcd3.etcd/fixtures/
client/ peer/
[root@node3 ~]# ls etcd3.etcd/fixtures/peer/
cert.pem key.pem
Note: the initialization above is run only once, when the cluster is first bootstrapped. On any later restart the --initial-* flags must be dropped (the member configuration is read from the data directory); otherwise etcd reports an error.
Use a command like the following instead:
etcd --name etcd3 --data-dir /opt/etcd \
  --listen-peer-urls http://192.168.8.103:2380 \
  --listen-client-urls http://192.168.8.103:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.103:2379
3. Manage the cluster
etcdctl
[root@node3 ~]# etcdctl --version
etcdctl version: 3.0.4
API version: 2
COMMANDS:
backup backup an etcd directory
cluster-health check the health of the etcd cluster
mk make a new key with a given value
mkdir make a new directory
rm remove a key or a directory
rmdir removes the key if it is an empty directory or a key-value pair
get retrieve the value of a key
ls retrieve a directory
set set the value of a key
setdir create a new directory or update an existing directory TTL
update update an existing key with a given value
updatedir update an existing directory
watch watch a key for changes
exec-watch watch a key for changes and exec an executable
import import a snapshot to a cluster
auth overall auth controls
Check cluster health
[root@node3 ~]# etcdctl cluster-health
cluster is healthy
List cluster members
[root@node3 ~]# etcdctl member list
Remove a cluster member
[root@node2 ~]# etcdctl member remove b200a8bec19bd22e
Removed member b200a8bec19bd22e from cluster
[root@node2 ~]# etcdctl member list
Add a cluster member
Note: the order of the steps below matters; done out of order, etcd reports a cluster-ID mismatch.
[root@node2 ~]# etcdctl member add --help
NAME:
etcdctl member add - add a new member to the etcd cluster
USAGE:
etcdctl member add <name> <peerURL>
1. Add the target node to the cluster
[root@node2 ~]# etcdctl member add etcd3 http://192.168.8.103:2380
Added member named etcd3 with ID 28e0d98e7ec15cd4 to cluster
ETCD_NAME="etcd3"
ETCD_INITIAL_CLUSTER_STATE="existing"
[root@node2 ~]# etcdctl member list
At this point the cluster allocates a unique member ID for the target node.
2. Empty the target node's data-dir
[root@node3 ~]# rm -rf /opt/etcd
Note: after a member is removed, the cluster's membership is updated, and a node must rejoin as a brand-new member. If the data-dir still contains data, etcd reads it on startup and comes up under the old member ID, so the node cannot join the cluster. Always empty the new node's data-dir first.
3. Start etcd on the target node
etcd --name etcd3 --data-dir /opt/etcd \
  --initial-advertise-peer-urls http://192.168.8.103:2380 \
  --listen-peer-urls http://192.168.8.103:2380 \
  --listen-client-urls http://192.168.8.103:2379,http://127.0.0.1:2379 \
  --advertise-client-urls http://192.168.8.103:2379 \
  --initial-cluster etcd1=http://192.168.8.101:2380,etcd2=http://192.168.8.102:2380,etcd3=http://192.168.8.103:2380 \
  --initial-cluster-token etcd-cluster-1 \
  --initial-cluster-state existing
Note: --initial-cluster-state must be "existing" here. With "new", etcd generates a fresh member ID that differs from the one allocated by member add, and the log reports a member-ID mismatch.
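The three steps can be strung together as one dry-run script (the `run` helper only prints each command, so nothing touches a live cluster; the node name, IP, and /opt/etcd data dir are the ones used in this article):

```shell
#!/bin/sh
# Dry-run sketch of replacing member etcd3: `run` echoes instead of
# executing, so the flow can be read end to end without a cluster.
run() { printf '+ %s\n' "$*"; }
peer_url() { printf 'http://%s:2380' "$1"; }

NAME=etcd3
IP=192.168.8.103

# 1. Register the member first; the cluster allocates its new member ID.
run etcdctl member add "$NAME" "$(peer_url "$IP")"

# 2. Wipe stale data so the old member ID cannot be reused at startup.
run rm -rf /opt/etcd

# 3. Join with state "existing" -- never "new" when joining a live cluster.
run etcd --name "$NAME" --data-dir /opt/etcd --initial-cluster-state existing
```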
[root@node2 ~]# etcdctl member list
Create, read, update, delete
[root@node3 ~]# etcdctl set foo "bar"
bar
[root@node3 ~]# etcdctl get foo
bar
[root@node3 ~]# etcdctl mkdir hello
[root@node3 ~]# etcdctl ls
/foo
/hello
[root@node3 ~]# etcdctl --output extended get foo
Key: /foo
Created-Index: 9
Modified-Index: 9
TTL: 0
Index: 10
bar
[root@node3 ~]# etcdctl --output json get foo
{"action":"get","node":{"key":"/foo","value":"bar","nodes":null,"createdIndex":9,"modifiedIndex":9},"prevNode":null}
[root@node2 ~]# etcdctl update foo "etcd cluster is ok"
etcd cluster is ok
[root@node2 ~]# etcdctl get foo
etcd cluster is ok
[root@node3 ~]# etcdctl import --snap /opt/etcd/member/snap/db
starting to import snapshot /opt/etcd/member/snap/db with 10 clients
2016-08-12 01:18:17.281921 I | entering dir: /
finished importing 0 keys
REST API
[root@node1 ~]# curl 192.168.8.101:2379/v2/keys
{"action":"get","node":{"dir":true,"nodes":[{"key":"/foo","value":"etcd cluster is ok","modifiedIndex":28,"createdIndex":9},{"key":"/hello","dir":true,"modifiedIndex":10,"createdIndex":10},{"key":"/registry","dir":true,"modifiedIndex":47,"createdIndex":47}]}}
[root@node1 ~]# curl -fs -X PUT 192.168.8.101:2379/v2/keys/_test
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":1439,"createdIndex":1439}}
[root@node1 ~]# curl -X GET 192.168.8.101:2379/v2/keys/_test
{"action":"get","node":{"key":"/_test","value":"","modifiedIndex":1439,"createdIndex":1439}}
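The endpoint pattern above can be wrapped in a tiny helper (hypothetical, for illustration; plain HTTP on client port 2379 as in this setup), which also covers the long-poll watch form of the v2 API:

```shell
#!/bin/sh
# Hypothetical helper around the v2 keys endpoint.
key_url() { printf 'http://%s:2379/v2/keys%s\n' "$1" "$2"; }

key_url 192.168.8.101 /foo
# → http://192.168.8.101:2379/v2/keys/foo

# set:   curl -X PUT "$(key_url 192.168.8.101 /foo)" -d value=bar
# get:   curl "$(key_url 192.168.8.101 /foo)"
# watch: curl "$(key_url 192.168.8.101 /foo)?wait=true"   # blocks until /foo changes
```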
4. Managing etcd with systemd
1. Create the etcd user
useradd -r -s /sbin/nologin etcd
chown -R etcd: /opt/etcd
2. Create the systemd unit file etcd.service
cat >/lib/systemd/system/etcd.service <<HERE
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/opt/etcd/
User=etcd
ExecStart=/usr/local/bin/etcd --config-file /etc/etcd.conf
Restart=on-failure
LimitNOFILE=1000000

[Install]
WantedBy=multi-user.target
HERE

3. Create the main configuration file etcd.conf

cat >/etc/etcd.conf <<HERE
name: etcd2
data-dir: "/opt/etcd"
HERE

The configuration file differs per node; the above is the sample for etcd2.

4. Test starting via systemd

[root@node2 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@node2 ~]# systemctl start etcd
[root@node2 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-08-12 03:06:30 CST; 8min ago
 Main PID: 12099 (etcd)
   CGroup: /system.slice/etcd.service
           └─12099 /usr/local/bin/etcd --config-file /etc/etcd.conf
Hint: Some lines were ellipsized, use -l to show in full.
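Since only the name differs between nodes in this minimal setup, the per-node config file can be generated rather than hand-edited. A sketch (`gen_conf` is a hypothetical helper matching the two-line sample above):

```shell
#!/bin/sh
# Hypothetical generator for the minimal per-node etcd.conf shown above;
# only the node name varies, the data dir is fixed at /opt/etcd.
gen_conf() {
  printf 'name: %s\ndata-dir: "/opt/etcd"\n' "$1"
}

gen_conf etcd2        # on node2 this would be: gen_conf etcd2 > /etc/etcd.conf
# → name: etcd2
# → data-dir: "/opt/etcd"
```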