Setting up a Hadoop HA cluster with YARN (basic preparation)


1. Change the Linux hostname
2. Change the IP address
3. Map hostnames to IPs in /etc/hosts
4. Disable the firewall
5. Set up passwordless SSH
6. Install the JDK and configure the environment variables
7. Make sure the clocks of all cluster nodes are synchronized
(example commands for these preparation steps are sketched below)
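
A minimal sketch of these preparation steps on one node, assuming a CentOS 6 style system; the hostnames, IPs and NTP server here are placeholders, adjust them to your environment:

vim /etc/sysconfig/network                  # set HOSTNAME, e.g. HOSTNAME=hadoop00
vim /etc/hosts                              # one line per node, e.g. 192.168.1.201 hadoop00
service iptables stop                       # stop the firewall now
chkconfig iptables off                      # keep it off after reboot
ssh-keygen -t rsa && ssh-copy-id hadoop00   # passwordless ssh (repeat ssh-copy-id for every node)
ntpdate cn.pool.ntp.org                     # one-shot time sync; use ntpd or a cron job to keep clocks aligned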

Cluster node role planning (7 nodes)
------------------
server01   namenode   zkfc
server02   namenode   zkfc
server03   resourcemanager
server04   resourcemanager
server05   datanode   nodemanager      zookeeper     journal node
server06   datanode   nodemanager      zookeeper     journal node 
server07   datanode   nodemanager      zookeeper     journal node 

------------------

Cluster node role planning (3 nodes)
------------------
server01   namenode    resourcemanager  zkfc   nodemanager  datanode   zookeeper   journal node
server02   namenode    resourcemanager  zkfc   nodemanager  datanode   zookeeper   journal node
server03   datanode    nodemanager     zookeeper    journal node
------------------

Installation steps:
1. Install and configure the ZooKeeper cluster
1.1 Extract the archive
tar -zxvf zookeeper-3.4.5.tar.gz -C /home/hadoop/app/
1.2 Edit the configuration
cd /home/hadoop/app/zookeeper-3.4.5/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Change: dataDir=/home/hadoop/app/zookeeper-3.4.5/tmp
Append at the end:
server.1=hadoop05:2888:3888
server.2=hadoop06:2888:3888
server.3=hadoop07:2888:3888
Save and exit
Then create the tmp directory
mkdir /home/hadoop/app/zookeeper-3.4.5/tmp
echo 1 > /home/hadoop/app/zookeeper-3.4.5/tmp/myid
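
After these edits, zoo.cfg ends up looking roughly like the following; the remaining values are the zoo_sample.cfg defaults, so treat them as assumptions if your sample file differs:

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/hadoop/app/zookeeper-3.4.5/tmp
server.1=hadoop05:2888:3888
server.2=hadoop06:2888:3888
server.3=hadoop07:2888:3888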
1.3 Copy the configured ZooKeeper to the other nodes (first create the target directory on hadoop06 and hadoop07: mkdir -p /home/hadoop/app)
scp -r /home/hadoop/app/zookeeper-3.4.5/ hadoop06:/home/hadoop/app/
scp -r /home/hadoop/app/zookeeper-3.4.5/ hadoop07:/home/hadoop/app/

		Note: update the content of /home/hadoop/app/zookeeper-3.4.5/tmp/myid on hadoop06 and hadoop07 accordingly
		hadoop06:
			echo 2 > /home/hadoop/app/zookeeper-3.4.5/tmp/myid
		hadoop07:
			echo 3 > /home/hadoop/app/zookeeper-3.4.5/tmp/myid

2. Install and configure the Hadoop cluster
	2.1 Extract the archive
		tar -zxvf hadoop-2.6.4.tar.gz -C /home/hadoop/app/
	2.2 Configure HDFS (in Hadoop 2.x all configuration files live under $HADOOP_HOME/etc/hadoop)
		#Add hadoop to the environment variables
		vim /etc/profile
		export JAVA_HOME=/usr/java/jdk1.7.0_55
		export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.4
		export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
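		#Reload the profile so the new variables take effect in the current shell:
		source /etc/profile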
		
		#All Hadoop 2.x configuration files are under $HADOOP_HOME/etc/hadoop
		cd /home/hadoop/app/hadoop-2.6.4/etc/hadoop
		
		2.2.1 Edit hadoop-env.sh
		export JAVA_HOME=/home/hadoop/app/jdk1.7.0_55

###############################################################################

2.2.2 Edit core-site.xml

<configuration>
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://cluster1</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/export/servers/hadoop-2.6.0-cdh5.14.0/HAhadoopDatas/tmp</value>
	</property>
	<property>
		<name>ha.zookeeper.quorum</name>
		<value>node01:2181,node02:2181,node03:2181</value>
	</property>
</configuration>

###############################################################################

2.2.3 Edit hdfs-site.xml

<configuration>
	<property>
		<name>dfs.nameservices</name>
		<value>cluster1</value>
	</property>
	<property>
		<name>dfs.ha.namenodes.cluster1</name>
		<value>nn1,nn2</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.cluster1.nn1</name>
		<value>node01:8020</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.cluster1.nn1</name>
		<value>node01:50070</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.cluster1.nn2</name>
		<value>node02:8020</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.cluster1.nn2</name>
		<value>node02:50070</value>
	</property>
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://node01:8485;node02:8485;node03:8485/cluster1</value>
	</property>
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/export/servers/hadoop-2.6.0-cdh5.14.0/journaldata</value>
	</property>
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>dfs.client.failover.proxy.provider.cluster1</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/root/.ssh/id_rsa</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.connect-timeout</name>
		<value>30000</value>
	</property>
</configuration>

###############################################################################

2.2.4 Edit mapred-site.xml

<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
</configuration>

###############################################################################

2.2.5 Edit yarn-site.xml

<configuration>
	<property>
		<name>yarn.resourcemanager.ha.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.resourcemanager.cluster-id</name>
		<value>yrc</value>
	</property>
	<property>
		<name>yarn.resourcemanager.ha.rm-ids</name>
		<value>rm1,rm2</value>
	</property>
	<property>
		<name>yarn.resourcemanager.hostname.rm1</name>
		<value>node01</value>
	</property>
	<property>
		<name>yarn.resourcemanager.hostname.rm2</name>
		<value>node02</value>
	</property>
	<property>
		<name>yarn.resourcemanager.zk-address</name>
		<value>node01:2181,node02:2181,node03:2181</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
</configuration>

2.2.6 Edit slaves (slaves lists the worker nodes. Because HDFS is started from hadoop01 and YARN from hadoop03, the slaves file on hadoop01 specifies where the datanodes run, and the slaves file on hadoop03 specifies where the nodemanagers run)
hadoop05
hadoop06
hadoop07

2.2.7 Configure passwordless SSH login
#First configure passwordless login from hadoop00 to hadoop01, hadoop02, hadoop03, hadoop04, hadoop05, hadoop06 and hadoop07
#Generate a key pair on hadoop00
ssh-keygen -t rsa
#Copy the public key to every node, including this one
ssh-copy-id hadoop00
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
ssh-copy-id hadoop04

#Note: the two namenodes must also have passwordless SSH to each other; it is required when ssh is used to fence (kill) the remote namenode
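
A minimal sketch of that reverse direction, assuming hadoop00 and hadoop01 are the two namenodes as in this walkthrough:

#on hadoop01, generate a key pair and push the public key to both namenodes
ssh-keygen -t rsa
ssh-copy-id hadoop00
ssh-copy-id hadoop01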

###Note: follow the steps below in this exact order!!!
2.5 Start the ZooKeeper cluster (start zk on hadoop05, hadoop06 and hadoop07)

		bin/zkServer.sh start
		#Check the status: there should be one leader and two followers
		bin/zkServer.sh status
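
		If passwordless ssh to the ZooKeeper nodes is already set up, a quick way to check all three from one shell (a sketch; adjust the install path if yours differs):

		for host in hadoop05 hadoop06 hadoop07; do
			ssh $host "/home/hadoop/app/zookeeper-3.4.5/bin/zkServer.sh status"
		done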
		
	2.6 Manually start the journalnodes (run on hadoop05, hadoop06 and hadoop07)
		hadoop-daemon.sh start journalnode
		#Run jps to verify: a JournalNode process should now be running on hadoop05, hadoop06 and hadoop07
	
	2.7 Format the namenode
		#Run this command on hadoop00:
		hdfs namenode -format
		#Formatting writes the initial HDFS metadata into the directory configured as hadoop.tmp.dir in core-site.xml,
		
		then copy everything under that hadoop.tmp.dir directory to the machine hosting the other namenode
		scp -r tmp/ hadoop02:/home/hadoop/app/hadoop-2.6.4/
		
		##Alternatively (recommended): run hdfs namenode -bootstrapStandby on the other namenode instead of copying the files
	
	2.8 Format ZKFC (run on the active namenode only)
		hdfs zkfc -formatZK
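
		One way to confirm the format succeeded is to look for the HA parent znode in ZooKeeper (a sketch; the znode is named after the nameservice, here cluster1):

		#open a ZooKeeper shell against any ensemble member
		/home/hadoop/app/zookeeper-3.4.5/bin/zkCli.sh -server hadoop05:2181
		#then, inside the zkCli prompt:
		ls /hadoop-ha        #should list the nameservice, e.g. [cluster1]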
	
	2.9 Start HDFS (run on hadoop00)
		start-dfs.sh

	2.10 Start YARN
		start-yarn.sh
		You still need to manually start the backup resourcemanager on the standby node:
		yarn-daemon.sh start resourcemanager
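
		Once both resourcemanagers are up, their HA state can be checked with yarn rmadmin (rm1/rm2 are the ids from yarn-site.xml):

		yarn rmadmin -getServiceState rm1    #expect one of: active / standby
		yarn rmadmin -getServiceState rm2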

	
At this point hadoop-2.6.4 is fully configured and you can check it in a browser:
	http://hadoop00:50070
	NameNode 'hadoop01:9000' (active)
	http://hadoop01:50070
	NameNode 'hadoop02:9000' (standby)

Verify HDFS HA
	First upload a file to HDFS
	hadoop fs -put /etc/profile /profile
	hadoop fs -ls /
	Then kill the active NameNode
	kill -9 <pid of NN>
	Open in a browser: http://192.168.1.202:50070
	NameNode 'hadoop02:9000' (active)
	The NameNode on hadoop02 has now become active
	Run the command again:
	hadoop fs -ls /
	-rw-r--r--   3 root supergroup       1926 2014-02-06 15:36 /profile
	The file uploaded earlier is still there!!!
	Manually restart the NameNode that was killed
	hadoop-daemon.sh start namenode
	Open in a browser: http://192.168.1.201:50070
	NameNode 'hadoop01:9000' (standby)

Verify YARN:
	Run the WordCount demo that ships with Hadoop:
	hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar wordcount /profile /out
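
	If the job finishes successfully, the word counts land in the output directory given on the command line; a quick way to inspect them (assuming the /out path above):

	hadoop fs -ls /out
	hadoop fs -cat /out/part-r-*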

OK, all done!!!

Some commands for checking the cluster's working state:
hdfs dfsadmin -report	show the status of every HDFS node

bin/hdfs haadmin -getServiceState nn1	get the HA state of a namenode

sbin/hadoop-daemon.sh start namenode	start a single namenode process

./hadoop-daemon.sh start zkfc	start a single zkfc process

  • Author: BigMoM1573
  • Original link: https://blog.csdn.net/qq_44509920/article/details/105217685
    Updated: 2022-09-04 12:25:19