2023-03-10
Hadoop Cluster Setup (Part 2): HDFS
HDFS is the most basic service in Hadoop, and many other services are built on top of it. Deploying an HDFS cluster is therefore a core step, and the starting point of a big-data platform.
Installing a Hadoop HA cluster requires ZooKeeper; if you do not have one yet, deploy a ZooKeeper ensemble first. The JDK and a few host-level settings are also prerequisites. All of that is covered in the previous post:
Hadoop Cluster Setup (Part 1): Zookeeper
Now let's install HDFS.
HDFS host allocation
192.168.67.101  c6701  -- Namenode + datanode
192.168.67.102  c6702  -- datanode
192.168.67.103  c6703  -- datanode
1. Install HDFS: unpack hadoop-2.6.0-EDH-0u2.tar.gz
I downloaded both the 2.6 and 2.7 releases; I install 2.6 first, then run the 2.6-to-2.7 upgrade steps later.
useradd hdfs
echo "hdfs:hdfs" | chpasswd
su - hdfs
cd /tmp/software
tar -zxvf hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs/
mkdir -p /data/hadoop/temp
mkdir -p /data/hadoop/journal
mkdir -p /data/hadoop/hdfs/name
mkdir -p /data/hadoop/hdfs/data
chown -R hdfs:hdfs /data/hadoop
chown -R hdfs:hdfs /data/hadoop/temp
chown -R hdfs:hdfs /data/hadoop/journal
chown -R hdfs:hdfs /data/hadoop/hdfs/name
chown -R hdfs:hdfs /data/hadoop/hdfs/data
$ pwd
/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop
2. Edit the corresponding parameters in core-site.xml
$ cat core-site.xml
<configuration>
    <!-- the HDFS nameservice is "ns" -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
    </property>
    <!-- temporary directory for Hadoop data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/temp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>c6701:2181,c6702:2181,c6703:2181</value>
    </property>
</configuration>
3. Edit the corresponding parameters in hdfs-site.xml
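A minimal HA hdfs-site.xml sketch consistent with the values used in this post (nameservice ns, the /data/hadoop directories, automatic failover over the three-node quorum); the namenode IDs nn1/nn2 and the port numbers are illustrative assumptions, not taken from the original:
<configuration>
    <!-- the same nameservice name as in core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>
    <!-- two namenodes under the ns nameservice (IDs assumed) -->
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>c6701:9000</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>c6702:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>c6701:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>c6702:50070</value>
    </property>
    <!-- journalnodes share the edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://c6701:8485;c6702:8485;c6703:8485/ns</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/journal</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- let ZKFC fail over automatically -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hdfs/.ssh/id_rsa</value>
    </property>
</configuration>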
4. Add the slaves file
$ more slaves
c6701
c6702
c6703
--- Installing HDFS on c6702 ---
5. Create the hdfs user on c6702 and set up passwordless SSH for it
ssh c6702 "useradd hdfs"ssh c6702 "echo "hdfs:hdfs" | chpasswd"ssh-copy-id hdfs@c6702
ssh c6702 "useradd hdfs"ssh c6702 "echo "hdfs:hdfs" | chpasswd"ssh-copy-id hdfs@c6702
6. Copy the software
scp -r /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz root@c6702:/tmp/software/.
ssh c6702 "chmod 777 /tmp/software/*"
7. Create directories and unpack the software
ssh hdfs@c6702 "mkdir hdfs"ssh hdfs@c6702 "tar -zxvf /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs"ssh hdfs@c6702 "ls -al hdfs"ssh hdfs@c6702 "ls -al hdfs/hadoop*"
ssh hdfs@c6702 "mkdir hdfs"ssh hdfs@c6702 "tar -zxvf /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs"ssh hdfs@c6702 "ls -al hdfs"ssh hdfs@c6702 "ls -al hdfs/hadoop*"
Copy the configuration files
ssh hdfs@c6702 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml"ssh hdfs@c6702 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml"scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xmlscp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xmlscp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves
ssh hdfs@c6702 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml"ssh hdfs@c6702 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml"scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xmlscp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xmlscp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves
Create the directories HDFS needs
ssh root@c6702 "mkdir -p /data/hadoop"
ssh root@c6702 "chown -R hdfs:hdfs /data/hadoop"
ssh hdfs@c6702 "mkdir -p /data/hadoop/temp"
ssh hdfs@c6702 "mkdir -p /data/hadoop/journal"
ssh hdfs@c6702 "mkdir -p /data/hadoop/hdfs/name"
ssh hdfs@c6702 "mkdir -p /data/hadoop/hdfs/data"
--- Installing HDFS on c6703 ---
8. Create the hdfs user on c6703 and set up passwordless SSH for it
ssh c6703 "useradd hdfs"ssh c6703 "echo "hdfs:hdfs" | chpasswd"ssh-copy-id hdfs@c6703
ssh c6703 "useradd hdfs"ssh c6703 "echo "hdfs:hdfs" | chpasswd"ssh-copy-id hdfs@c6703
9. Copy the software
scp -r /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz root@c6703:/tmp/software/.
ssh c6703 "chmod 777 /tmp/software/*"
10. Create directories and unpack the software
ssh hdfs@c6703 "mkdir hdfs"
ssh hdfs@c6703 "tar -zxvf /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs"
ssh hdfs@c6703 "ls -al hdfs"
ssh hdfs@c6703 "ls -al hdfs/hadoop*"
Copy the configuration files
ssh hdfs@c6703 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml"ssh hdfs@c6703 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml"scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xmlscp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xmlscp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves
ssh hdfs@c6703 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml"ssh hdfs@c6703 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml"scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xmlscp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xmlscp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves
Create the directories HDFS needs
ssh root@c6703 "mkdir -p /data/hadoop"
ssh root@c6703 "chown -R hdfs:hdfs /data/hadoop"
ssh hdfs@c6703 "mkdir -p /data/hadoop/temp"
ssh hdfs@c6703 "mkdir -p /data/hadoop/journal"
ssh hdfs@c6703 "mkdir -p /data/hadoop/hdfs/name"
ssh hdfs@c6703 "mkdir -p /data/hadoop/hdfs/data"
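Since the commands for c6702 and c6703 are identical apart from the hostname, a small loop covers both (a convenience sketch, same paths as above):
# assumes root and hdfs SSH access to both hosts, as set up earlier
for h in c6702 c6703; do
  ssh root@$h "mkdir -p /data/hadoop && chown -R hdfs:hdfs /data/hadoop"
  ssh hdfs@$h "mkdir -p /data/hadoop/temp /data/hadoop/journal /data/hadoop/hdfs/name /data/hadoop/hdfs/data"
done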
11. Start HDFS. First start the journalnode on all three nodes:
/home/hdfs/hadoop-2.6.0-EDH-0u2/sbin/hadoop-daemon.sh start journalnode
Check the status:
$ jps
3958 Jps
3868 JournalNode
12. Then start the namenodes. Before the very first start, format the namenode metadata on one node (the master); the metadata is stored under the path given by dfs.namenode.name.dir:
<name>dfs.namenode.name.dir</name>
<value>/data/hadoop/hdfs/name</value>
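The format is done once on the master before the first start, and the standby then copies the metadata over with bootstrapStandby. A sketch of these commands, following the summary at the end of this post (run as the hdfs user from /home/hdfs/hadoop-2.6.0-EDH-0u2):
# on the master (c6701): one-time metadata format, then start the namenode
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
# 13. on the standby (c6702): pull the formatted metadata before its first start
bin/hdfs namenode -bootstrapStandby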
14. Check the status; the namenode on c6702 has not been started yet
[hdfs@c6702 sbin]$ jps
4539 Jps
3868 JournalNode
15. Start the standby namenode; the command is the same as on the master
[hdfs@c6702 sbin]$ ./hadoop-daemon.sh start namenode
starting namenode, logging to /home/hdfs/hadoop-2.6.0-EDH-0u2/logs/hadoop-hdfs-namenode-c6702.python279.org.out
16. Check again: the namenode is now running
[hdfs@c6702 sbin]$ jps
4640 Jps
4570 NameNode
3868 JournalNode
17. Format zkfc so that the HA znode is created in ZooKeeper. Run the following on the master to complete the format (the same command appears in item 21 below):
hdfs zkfc -formatZK
18. Verify the format
After a successful format, the node can be seen in ZooKeeper (note: this check was not verified by the author):
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
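If the format succeeded, a child znode named after the nameservice should exist under /hadoop-ha; with the ns nameservice from core-site.xml, the output should look roughly like:
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[ns]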
19. Start zkfc; it exists to serve the namenode
./hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/hdfs/hadoop-2.6.0-EDH-0u2/logs/hadoop-hdfs-zkfc-c6701.python279.org.out
$ jps
4272 DataNode
4402 JournalNode
6339 Jps
6277 DFSZKFailoverController
4952 NameNode
20. Start zkfc on the other node
ssh hdfs@c6702 /home/hdfs/hadoop-2.6.0-EDH-0u2/sbin/hadoop-daemon.sh start zkfc
$ jps
4981 Jps
4935 DFSZKFailoverController
4570 NameNode
3868 JournalNode
21. Note: the ZooKeeper ensemble must already be running when you perform the initialization.
1. Create the znode in ZK that stores the automatic-failover data; running this on any one NN is enough:
sh bin/hdfs zkfc -formatZK
2. Start zkfc; run the following on all NN nodes:
sh sbin/hadoop-daemon.sh start zkfc
22. Start the datanodes
Finally, bring up the whole cluster:
/home/hdfs/hadoop-2.6.0-EDH-0u2/sbin/hadoop-daemon.sh start zkfc
sh sbin/start-dfs.sh
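Once start-dfs.sh returns, it is worth confirming that one namenode is active and the other standby. A quick check, assuming the nn1/nn2 service IDs from the hdfs-site.xml sketch above:
$ bin/hdfs haadmin -getServiceState nn1
active
$ bin/hdfs haadmin -getServiceState nn2
standby
$ bin/hdfs dfsadmin -report    # all three datanodes should report in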
The crucial part of the HDFS installation is the sequence of initialization steps at first startup (condensed into a command sketch after this list):
1. Start the journalnode on all nodes.
2. On namenode1, format the namenode metadata: hdfs namenode -format
3. On namenode1, start the namenode: hadoop-daemon.sh start namenode
4. On namenode2, run hdfs namenode -bootstrapStandby, then start that namenode as well.
5. On namenode1, format zkfc to create the HA znode in ZooKeeper: hdfs zkfc -formatZK
6. Start zkfc with hadoop-daemon.sh start zkfc; every node that runs a namenode must also run ZKFC.
7. Start the datanodes.
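The same sequence as commands (a sketch; run each on the node indicated, as the hdfs user, from /home/hdfs/hadoop-2.6.0-EDH-0u2):
# 1. on all three nodes
sbin/hadoop-daemon.sh start journalnode
# 2./3. on namenode1 (c6701): format once, then start
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
# 4. on namenode2 (c6702): pull the formatted metadata, then start
bin/hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
# 5. on namenode1: create the HA znode in ZooKeeper
bin/hdfs zkfc -formatZK
# 6. on every namenode host
sbin/hadoop-daemon.sh start zkfc
# 7. on each datanode
sbin/hadoop-daemon.sh start datanode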
HDFS is only the most basic module of Hadoop. With the installation complete, it can now serve the HBase deployment covered later.