IP | Hostname | Role |
192.168.1.250 | node1.jacky.com | master |
192.168.1.251 | node2.jacky.com | slave |
192.168.1.252 | node3.jacky.com | slave |
Installation packages:
[root@localhost sbin]# ls /opt
hadoop-2.10.1.tar.gz
jdk-8u171-linux-x64.tar.gz
hbase-2.1.5-bin.tar.gz
zookeeper-3.4.5.tar.gz
The JDK is installed under /usr/local/java/jdk1.8.0_171.
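Assuming the JDK has already been unpacked to that directory, a quick sanity check (illustrative, not part of the original steps):
# /usr/local/java/jdk1.8.0_171/bin/java -version    # should report java version "1.8.0_171"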
On all three machines, add the hostname-to-IP mappings:
# vi /etc/hosts
192.168.1.250 node1.jacky.com
192.168.1.251 node2.jacky.com
192.168.1.252 node3.jacky.com
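A simple way to confirm the mappings resolve on each machine (an optional check, not in the original):
# ping -c 1 node2.jacky.com    # should get a reply from 192.168.1.251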
On 192.168.1.250, set the hostname to node1.jacky.com.
On 192.168.1.251, set the hostname to node2.jacky.com.
On 192.168.1.252, set the hostname to node3.jacky.com.
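The original does not show the exact command; on a systemd-based distribution such as CentOS 7, one common way is hostnamectl (run the matching command on each node):
# hostnamectl set-hostname node1.jacky.com    # on 192.168.1.250; use node2/node3 names on the other machines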
Next, set up passwordless SSH login between the three nodes. Steps:
[root@node1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:pvR6iWfppGPSFZlAqP35/6DEtGTvaMY64otThWoBTuk root@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
| . o.            |
|.o . .           |
|+. o . . o       |
| Eo o . +        |
|  o o..S.        |
|   o ..oO.o      |
|  . . ..=*oo     |
|   ..o *=@+ .    |
|  .oo=+@+.o..    |
+----[SHA256]-----+
[root@node1 .ssh]# cp id_rsa.pub authorized_keys
[root@node1 .ssh]# chmod 600 authorized_keys    # restrict permissions; sshd rejects an authorized_keys file that is group/world-writable
Notes:
authorized_keys: stores the public keys allowed for passwordless remote login; this file records the public keys of multiple machines
id_rsa: the generated private key
id_rsa.pub: the generated public key
Run the following on 192.168.1.250:
[root@node1 .ssh]# ssh-copy-id -i root@node1.jacky.com    # authorize passwordless SSH login to node1 itself
[root@node1 .ssh]# ssh-copy-id -i root@node2.jacky.com
[root@node1 .ssh]# ssh-copy-id -i root@node3.jacky.com
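To confirm passwordless login works before moving on, a quick check from node1 (illustrative):
[root@node1 .ssh]# ssh root@node2.jacky.com hostname    # should print node2.jacky.com without asking for a password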
Extract Hadoop and add its environment variables:
# tar zxvf /opt/hadoop-2.10.1.tar.gz -C /usr/local
# vim /etc/profile
# hadoop
export HADOOP_HOME=/usr/local/hadoop-2.10.1
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
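After saving /etc/profile, reload it so the variables take effect in the current shell and verify that Hadoop is on the PATH (a minimal check):
# source /etc/profile
# hadoop version    # should report Hadoop 2.10.1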
# cd /usr/local/hadoop-2.10.1/etc/hadoop
Set JAVA_HOME explicitly in the environment scripts (typically hadoop-env.sh and yarn-env.sh):
# hadoop-env.sh
export JAVA_HOME=/usr/local/java/jdk1.8.0_171
# yarn-env.sh
export JAVA_HOME=/usr/local/java/jdk1.8.0_171
In the slaves file, list the slave (DataNode) hosts, one per line:
node2.jacky.com
node3.jacky.com
core-site.xml:
<configuration>
    <!-- The default file system Hadoop uses (HDFS on node1) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1.jacky.com:9000</value>
    </property>
    <!-- Hadoop data/temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.10.1/tmp</value>
    </property>
</configuration>
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node1.jacky.com:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop-2.10.1/hadoop/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop-2.10.1/hadoop/data</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
mapred-site.xml:
<configuration>
    <!-- Run MapReduce jobs on the YARN cluster -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1.jacky.com:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1.jacky.com:19888</value>
    </property>
</configuration>
yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <!-- YARN master (ResourceManager) addresses -->
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>node1.jacky.com:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>node1.jacky.com:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>node1.jacky.com:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>node1.jacky.com:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>node1.jacky.com:8088</value>
    </property>
</configuration>
Run the following on 192.168.1.250 (node1) to copy Hadoop to the slave nodes:
# scp -r /usr/local/hadoop-2.10.1 root@node2.jacky.com:/usr/local/
# scp -r /usr/local/hadoop-2.10.1 root@node3.jacky.com:/usr/local/
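The Hadoop environment variables from /etc/profile are needed on the slave nodes too; one way is to copy the file over and reload it there (a sketch, assuming node2/node3 can use the same profile unchanged):
# scp /etc/profile root@node2.jacky.com:/etc/profile
# scp /etc/profile root@node3.jacky.com:/etc/profile
Then run source /etc/profile on node2 and node3.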
Format the NameNode before the first start (on node1):
# hdfs namenode -format
Start all HDFS and YARN daemons from node1:
/usr/local/hadoop-2.10.1/sbin/start-all.sh
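If start-all.sh reports problems, HDFS and YARN can also be started separately to narrow things down (optional, not in the original steps):
/usr/local/hadoop-2.10.1/sbin/start-dfs.sh
/usr/local/hadoop-2.10.1/sbin/start-yarn.sh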
Verify the running processes with jps. On node1:
[root@node1 sbin]# jps
7969 QuorumPeerMain
25113 NameNode
25483 ResourceManager
73116 Jps
25311 SecondaryNameNode
On node2:
[root@node2 jacky]# jps
43986 Jps
60437 DataNode
12855 QuorumPeerMain
60621 NodeManager
node3.jacky.com should show the same set of processes (DataNode, NodeManager, QuorumPeerMain).
Check the cluster nodes in the YARN ResourceManager web UI:
http://192.168.1.250:8088/cluster/nodes
Check whether the DataNodes have started via the NameNode web UI:
http://192.168.1.250:50070/
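The same information is also available from the command line on node1 (illustrative):
# hdfs dfsadmin -report    # lists the live DataNodes and their capacity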
That's it: the hadoop-2.10.1 fully distributed cluster is up and running. Next we'll move on to setting up HBase.