
Setting up an HBase 1.2.5 fully distributed cluster

This article walks through setting up an HBase 1.2.5 fully distributed cluster, step by step, starting from the underlying Hadoop cluster.

Reference: https://www.cnblogs.com/520playboy/p/9655914.html

The cluster layout is as follows:

IP              Hostname          Role
192.168.1.250   node1.jacky.com   master
192.168.1.251   node2.jacky.com   slave
192.168.1.252   node3.jacky.com   slave

Installation files:

[root@localhost sbin]# ls /opt

hadoop-2.10.1.tar.gz

jdk-8u171-linux-x64.tar.gz

hbase-2.1.5-bin.tar.gz

zookeeper-3.4.5.tar.gz

Java is installed under /usr/local/java/jdk1.8.0_171.

On all three machines, add the hostname-to-IP mappings:

# vi /etc/hosts

192.168.1.250 node1.jacky.com
192.168.1.251 node2.jacky.com
192.168.1.252 node3.jacky.com
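
A quick optional check that the names now resolve from each machine:

# ping -c 1 node2.jacky.com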

Set the hostname on each of the three machines:

On 192.168.1.250, set the hostname to

node1.jacky.com

On 192.168.1.251, set the hostname to

node2.jacky.com

On 192.168.1.252, set the hostname to

node3.jacky.com
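
On systemd-based distributions such as CentOS 7, the hostname can also be set in one command instead of editing the hostname file by hand; for the first node:

# hostnamectl set-hostname node1.jacky.com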

Configure passwordless SSH login from 192.168.1.250 to 192.168.1.251 and 192.168.1.252.

Steps:

  • Generate the key pair
  • Copy the public key into a file named authorized_keys
  • [root@node1 ~]# ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:pvR6iWfppGPSFZlAqP35/6DEtGTvaMY64otThWoBTuk root@localhost.localdomain
    The key's randomart image is:
    +---[RSA 2048]----+
    |  .  o.          |
    |.o  . .          |
    |+. o . . o       |
    | Eo o . +        |
    |   o o..S.       |
    |  o ..oO.o       |
    | . . ..=*oo      |
    |  ..o *=@+ .     |
    |  .oo=+@+.o..    |
    +----[SHA256]-----+
    [root@node1 .ssh]# cp id_rsa.pub authorized_keys
    [root@node1 .ssh]# chmod 600 authorized_keys # restrict permissions; sshd rejects a group- or world-writable authorized_keys
    

    Notes:

    authorized_keys: holds the public keys allowed to log in without a password; keys from multiple machines are collected in this file
    id_rsa: the generated private key
    id_rsa.pub: the generated public key

    Run on 192.168.1.250:

  • [root@node1 .ssh]# ssh-copy-id -i root@node1.jacky.com # authorize passwordless ssh to the master itself
    [root@node1 .ssh]# ssh-copy-id -i root@node2.jacky.com 
    [root@node1 .ssh]# ssh-copy-id -i root@node3.jacky.com 
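
    To verify, each of these should print the remote hostname without prompting for a password:

  • [root@node1 .ssh]# ssh root@node2.jacky.com hostname
    [root@node1 .ssh]# ssh root@node3.jacky.com hostname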
    

    Extract Hadoop and add its environment variables to /etc/profile:

  • # tar zxvf /opt/hadoop-2.10.1.tar.gz -C /usr/local

  • # vim /etc/profile
  • # hadoop
    export HADOOP_HOME=/usr/local/hadoop-2.10.1
    export HADOOP_MAPRED_HOME=$HADOOP_HOME
    export HADOOP_COMMON_HOME=$HADOOP_HOME
    export HADOOP_HDFS_HOME=$HADOOP_HOME
    export YARN_HOME=$HADOOP_HOME
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
    export HADOOP_INSTALL=$HADOOP_HOME
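
    After editing, reload the profile so the new variables take effect in the current shell, and confirm Hadoop is on the PATH:

  • # source /etc/profile
    # hadoop version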
    

    # cd /usr/local/hadoop-2.10.1/etc/hadoop

  • # vi hadoop-env.sh

  • export JAVA_HOME=/usr/local/java/jdk1.8.0_171
  • # vi yarn-env.sh
  • export JAVA_HOME=/usr/local/java/jdk1.8.0_171
    

      

    Edit the slaves file to list the master's worker nodes; once this is in place, running start-all.sh from the sbin directory on the master alone brings up the DataNode and NodeManager on every slave.

  • # vi slaves
  • node2.jacky.com
    node3.jacky.com
    

     

    Edit the Hadoop core configuration file, core-site.xml:

  • <configuration>
        <!-- The file system Hadoop uses by default: HDFS on the master -->
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://node1.jacky.com:9000</value>
        </property>
        <!-- Base directory for Hadoop's working data -->
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/usr/local/hadoop-2.10.1/tmp</value>
        </property>
    </configuration>
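
    To confirm the value is picked up, hdfs getconf can read it back (this works without a running cluster):

  • # hdfs getconf -confKey fs.defaultFS
    hdfs://node1.jacky.com:9000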
    

      

    Edit hdfs-site.xml:

  • <configuration>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>node1.jacky.com:50090</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>3</value>
        </property>
        <!-- dfs.name.dir / dfs.data.dir are the deprecated spellings of these two properties -->
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>/usr/local/hadoop-2.10.1/hadoop/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>/usr/local/hadoop-2.10.1/hadoop/data</value>
        </property>
        <property>
            <name>dfs.webhdfs.enabled</name>
            <value>true</value>
        </property>
    </configuration>
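
    Two notes on this file: dfs.replication is 3 while the cluster has only two DataNodes, so HDFS will report under-replicated blocks until a third DataNode exists (2 is the practical maximum here); and it does no harm to create the name and data directories up front:

  • # mkdir -p /usr/local/hadoop-2.10.1/hadoop/name /usr/local/hadoop-2.10.1/hadoop/data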
    

      

    Edit mapred-site.xml:
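
    On a fresh Hadoop 2.x install this file does not exist yet; it ships as a template and must be copied into place first:

  • # cp mapred-site.xml.template mapred-site.xml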


    <configuration>
        <!-- Run MapReduce on the YARN cluster -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>node1.jacky.com:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>node1.jacky.com:19888</value>
        </property>
    </configuration>
    

      

    Edit yarn-site.xml:

  • <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <!-- YARN's master: the ResourceManager on node1 -->
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>node1.jacky.com:8032</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>node1.jacky.com:8030</value>
        </property>
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>node1.jacky.com:8031</value>
        </property>
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>node1.jacky.com:8033</value>
        </property>
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>node1.jacky.com:8088</value>
        </property>
    </configuration>
    

      

    Then copy the master's configuration to the node2.jacky.com and node3.jacky.com nodes.


    Run on 192.168.1.250:

  • # scp -r /usr/local/hadoop-2.10.1 root@node2.jacky.com:/usr/local/
    # scp -r /usr/local/hadoop-2.10.1 root@node3.jacky.com:/usr/local/
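
    The slaves also need the same JDK and the /etc/profile entries added above; assuming the JDK is already installed on every node, copying the profile over and reloading it there is enough:

  • # scp /etc/profile root@node2.jacky.com:/etc/profile
    # scp /etc/profile root@node3.jacky.com:/etc/profile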
    

      

    Format the NameNode on the master (only before the first start; reformatting later wipes HDFS metadata):

  • # hdfs namenode -format
    

      

    Start Hadoop on the master:

  • /usr/local/hadoop-2.10.1/sbin/start-all.sh 
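
    start-all.sh is deprecated in Hadoop 2.x (it still works); the equivalent explicit form is:

  • /usr/local/hadoop-2.10.1/sbin/start-dfs.sh
    /usr/local/hadoop-2.10.1/sbin/start-yarn.sh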
    

      

    Use the jps command on all three machines to check that the Hadoop processes are up. (QuorumPeerMain in the output below is ZooKeeper, whose installation is not covered in this article.)

  • 192.168.1.250

  • [root@node1 sbin]# jps
    7969 QuorumPeerMain
    25113 NameNode
    25483 ResourceManager
    73116 Jps
    25311 SecondaryNameNode
    

     

    192.168.1.251

  • [root@node2 jacky]# jps
    43986 Jps
    60437 DataNode
    12855 QuorumPeerMain
    60621 NodeManager
    

      

    192.168.1.252

  • [root@node3 jacky]# jps
    43986 Jps
    60437 DataNode
    12855 QuorumPeerMain
    60621 NodeManager
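
    Beyond jps, HDFS itself can confirm the workers joined; run on the master, the report should list two live DataNodes:

  • # hdfs dfsadmin -report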
    

      

    Verify from the web UI. YARN's node list should show both slaves:

  • http://192.168.1.250:8088/cluster/nodes


    Check from the HDFS web UI that the DataNodes started:

    http://192.168.1.250:50070/


    With that, the hadoop-2.10.1 fully distributed cluster is up and running; the next step is the HBase installation itself.

     

     

     
