Installing Hadoop on CentOS 7

This article walks through installing Hadoop on CentOS 7; hopefully it is a useful reference for anyone setting up a small cluster.

1. Configure a static IP in /etc/sysconfig/network-scripts/ifcfg-ens33, edit /etc/hosts and /etc/hostname, and add the same host entries to the hosts file on the Windows 10 client (C:\Windows\System32\drivers\etc).
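
A rough sketch of what these files could contain; the addresses for bigdata01 and bigdata02 appear later in this article, while the bigdata03 address, gateway and DNS values are placeholders to adapt to your own network:

#/etc/sysconfig/network-scripts/ifcfg-ens33 (only the entries that matter here)
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.100        #this node's address (bigdata01 in this article)
GATEWAY=192.168.1.2         #assumed gateway
DNS1=192.168.1.2            #assumed DNS server
#/etc/hostname
bigdata01
#/etc/hosts - the same three entries on every node and in the Windows hosts file
192.168.1.100 bigdata01
192.168.1.110 bigdata02
192.168.1.120 bigdata03     #assumed address for the third node

After editing, restart the network (systemctl restart network) or reboot so the static address takes effect.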

2. Turn off the firewall

#Check the firewall status
[root@bigdata01 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
#Start the firewall
[root@bigdata01 ~]# systemctl start firewalld
#Stop the firewall
[root@bigdata01 ~]# systemctl stop firewalld
#Disable it at boot
[root@bigdata01 ~]# systemctl disable firewalld

3. Install the required packages

yum install -y epel-release
yum install -y psmisc nc net-tools rsync vim lrzsz ntp libzstd openssl-static tree iotop git

4. Create a user

[root@bigdata01 ~]# useradd bigdata
[root@bigdata01 ~]# passwd bigdata
Changing password for user bigdata.
New password: 
BAD PASSWORD: The password is a palindrome
Retype new password: 
passwd: all authentication tokens updated successfully.
#Password: bigdata
#To delete a user together with its home directory:
[root@bigdata01 ~]# userdel -r username

5. Give the user root privileges

[root@bigdata01 home]# visudo
#Find the line "root    ALL=(ALL)       ALL" and add a bigdata line below it
bigdata    ALL=(ALL)       NOPASSWD:ALL

6. Create directories under /opt

Create the module and software directories under /opt and change their owner:

[root@bigdata01 opt]# cd /opt
[root@bigdata01 opt]# mkdir module
[root@bigdata01 opt]# mkdir software
[root@bigdata01 opt]# ll
total 0
drwxr-xr-x. 2 root root 6 Jul 21 00:56 module
drwxr-xr-x. 2 root root 6 Jul 21 00:56 software
[root@bigdata01 opt]# chown bigdata:bigdata module
[root@bigdata01 opt]# chown bigdata:bigdata software
[root@bigdata01 opt]# ll
total 0
drwxr-xr-x. 2 bigdata bigdata 6 Jul 21 00:56 module
drwxr-xr-x. 2 bigdata bigdata 6 Jul 21 00:56 software

7. Reboot

[root@bigdata01 ~]# reboot

8. Log in again as the bigdata user
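
If you are still in the root session, one way to switch without logging out is su with a login shell:

#switch from root to the bigdata user; "-" loads bigdata's own environment
[root@bigdata01 ~]# su - bigdata
[bigdata@bigdata01 ~]$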

9. Install the JDK

#Remove any JDK that is already installed
[root@bigdata01 ~]# rpm -qa | grep -i java | xargs -n1 sudo rpm -e --nodeps
rpm: no packages given for erase
#Upload the JDK archive to /opt/software
[root@bigdata01 software]# ls
hadoop-3.1.3.tar.gz  jdk-8u211-linux-x64.tar.gz
#Extract the JDK archive into the module directory
[root@bigdata01 software]# tar -zxvf jdk-8u211-linux-x64.tar.gz  -C /opt/module/
#Extraction finished
[root@bigdata01 module]# ll
total 0
drwxr-xr-x. 7 10 143 245 Apr  2  2019 jdk1.8.0_211

10. Configure the JDK environment variables

#Go to the profile.d directory
[root@bigdata01 module]# cd /etc/profile.d/
[root@bigdata01 profile.d]# ll
total 64
-rw-r--r--. 1 root root  771 Aug  9  2019 256term.csh
-rw-r--r--. 1 root root  841 Aug  9  2019 256term.sh
-rw-r--r--. 1 root root  196 Mar 25  2017 colorgrep.csh
-rw-r--r--. 1 root root  201 Mar 25  2017 colorgrep.sh
-rw-r--r--. 1 root root 1741 Aug  6  2019 colorls.csh
-rw-r--r--. 1 root root 1606 Aug  6  2019 colorls.sh
-rw-r--r--. 1 root root   80 Oct 31  2018 csh.local
-rw-r--r--. 1 root root 1706 Aug  9  2019 lang.csh
-rw-r--r--. 1 root root 2703 Aug  9  2019 lang.sh
-rw-r--r--. 1 root root  123 Jul 31  2015 less.csh
-rw-r--r--. 1 root root  121 Jul 31  2015 less.sh
-rw-r--r--. 1 root root   81 Oct 31  2018 sh.local
-rw-r--r--. 1 root root  105 Dec 16  2020 vim.csh
-rw-r--r--. 1 root root  269 Dec 16  2020 vim.sh
-rw-r--r--. 1 root root  164 Jan 28  2014 which2.csh
-rw-r--r--. 1 root root  169 Jan 28  2014 which2.sh
[root@bigdata01 profile.d]# touch my_env.sh
#If permission is denied, use sudo instead
[root@bigdata01 profile.d]# sudo touch my_env.sh
#Edit my_env.sh
[root@bigdata01 profile.d]# sudo vim my_env.sh

Add the following content:

#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_211
export PATH=$PATH:$JAVA_HOME/bin

Save and exit.

#Apply the configuration
[root@bigdata01 profile.d]# source my_env.sh
#Verify that the JDK is installed correctly
[root@bigdata01 profile.d]# java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)

Now install Hadoop itself.

11. After uploading the Hadoop archive to /opt/software, extract it into the module directory

[root@bigdata01 software]# ls
hadoop-3.1.3.tar.gz  jdk-8u211-linux-x64.tar.gz
[root@bigdata01 software]# tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
#Check the extracted directory
[root@bigdata01 module]# cd /opt/module/hadoop-3.1.3/
[root@bigdata01 hadoop-3.1.3]# ll
total 176
drwxr-xr-x. 2 bigdata bigdata    183 Sep 12  2019 bin
drwxr-xr-x. 3 bigdata bigdata     20 Sep 12  2019 etc
drwxr-xr-x. 2 bigdata bigdata    106 Sep 12  2019 include
drwxr-xr-x. 3 bigdata bigdata     20 Sep 12  2019 lib
drwxr-xr-x. 4 bigdata bigdata    288 Sep 12  2019 libexec
-rw-rw-r--. 1 bigdata bigdata 147145 Sep  4  2019 LICENSE.txt
-rw-rw-r--. 1 bigdata bigdata  21867 Sep  4  2019 NOTICE.txt
-rw-rw-r--. 1 bigdata bigdata   1366 Sep  4  2019 README.txt
drwxr-xr-x. 3 bigdata bigdata   4096 Sep 12  2019 sbin
drwxr-xr-x. 4 bigdata bigdata     31 Sep 12  2019 share

12. Configure the Hadoop environment variables

#Edit my_env.sh
[bigdata@bigdata01 ~]$ sudo vi /etc/profile.d/my_env.sh
#Add a new line
export HADOOP_HOME=/opt/module/hadoop-3.1.3/
#Append :$HADOOP_HOME/bin:$HADOOP_HOME/sbin to the PATH line; after the change it reads:
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
#Apply the configuration
[bigdata@bigdata01 hadoop-3.1.3]$ source /etc/profile.d/my_env.sh

13. Verify the installation

[bigdata@bigdata01 hadoop-3.1.3]$ hadoop version
Hadoop 3.1.3
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579
Compiled by ztang on 2019-09-12T02:47Z
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-common-3.1.3.jar

14. Distribute the contents of the module directory on the first machine to the other machines, using the scp secure copy tool

#/opt/module/* means everything under module; without the *, the module directory itself would be copied into the destination instead of just its contents
[bigdata@bigdata01 module]$ scp -r /opt/module/* bigdata@bigdata02:/opt/module
#Run it
The authenticity of host 'bigdata02 (192.168.1.110)' can't be established.
ECDSA key fingerprint is SHA256:GZv8fMnc5wu2r5t70oskCQH7VNAdcU44fZvAh4r5B7Y.
ECDSA key fingerprint is MD5:16:bb:16:96:0b:29:0d:86:f4:14:f5:f3:3b:bb:ba:df.
Are you sure you want to continue connecting (yes/no)?

Type yes.

bigdata@bigdata02's password:

Enter the password and let the copy run to completion.

#Check that the files arrived under /opt/module on bigdata02
[bigdata@bigdata02 module]$ ll
total 0
drwxr-xr-x. 9 bigdata bigdata 149 Jul 23 18:04 hadoop-3.1.3
drwxr-xr-x. 7 bigdata bigdata 245 Jul 23 18:04 jdk1.8.0_211

Repeat this step to distribute the contents of the module directory to the remaining machines; a sample command for the third node is shown below.
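
For instance, roughly the same copy aimed at the third node:

#same copy, now targeting bigdata03
[bigdata@bigdata01 module]$ scp -r /opt/module/* bigdata@bigdata03:/opt/module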

15. Distribute my_env.sh

#Distribute /etc/profile.d/my_env.sh from the first machine to the other machines
[bigdata@bigdata01 module]$ scp -r /etc/profile.d/my_env.sh root@bigdata02:/etc/profile.d/
#Enter the root password; writing to /etc/profile.d requires root
root@bigdata02's password: 
my_env.sh           100%  166   202.6KB/s   00:00  
#Apply the configuration
[bigdata@bigdata02 module]$ source /etc/profile.d/my_env.sh
#Verify
[bigdata@bigdata02 module]$ hadoop version
Hadoop 3.1.3
Source code repository https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579
Compiled by ztang on 2019-09-12T02:47Z
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-common-3.1.3.jar

[bigdata@bigdata02 module]$ java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)

Repeat for the remaining machines.

16. The rsync remote synchronization tool

rsync is mainly used for backups and mirroring. It is fast, avoids copying identical content, and supports symbolic links.

Difference between rsync and scp: rsync is faster because it only transfers files that differ, whereas scp copies everything over again.

#Syntax
rsync    -av       $pdir/$fname              $user@$host:$pdir/$fname
command  options   source path/file name     destination user@host:destination path/name
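
As a concrete, illustrative example using this article's paths, the following would push only the changed files under the Hadoop install to bigdata02, assuming rsync is installed on both machines (it is in the yum list above):

#sync only files that differ; -a keeps permissions and timestamps, -v prints what is transferred
[bigdata@bigdata01 module]$ rsync -av /opt/module/hadoop-3.1.3/ bigdata@bigdata02:/opt/module/hadoop-3.1.3/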

17. Set up passwordless SSH login

Generate a key pair on the first machine.

#Generate the public/private key pair; just press Enter at each of the three prompts
[bigdata@bigdata01 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bigdata/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/bigdata/.ssh/id_rsa.
Your public key has been saved in /home/bigdata/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Ptp/3gSe9O1eQOfj9bEbnNg6LV2MqFAZvPTzscrbEag bigdata@bigdata01
The key's randomart image is:
+---[RSA 2048]----+
|         .       |
|          +      |
|         . =  . .|
|          + oo.o |
|        S.  ++oBo|
|       ..  +.+O+X|
|        o.Eooo*B=|
|       o .. +*.=+|
|      . ...ooo=oo|
+----[SHA256]-----+
#Two files are generated: id_rsa (private key) and id_rsa.pub (public key)
[bigdata@bigdata01 .ssh]$ ll
total 12
-rw-------. 1 bigdata bigdata 1679 Jul 26 09:27 id_rsa
-rw-r--r--. 1 bigdata bigdata  399 Jul 26 09:27 id_rsa.pub
-rw-r--r--. 1 bigdata bigdata  370 Jul 23 18:14 known_hosts
#Copy the public key to the machine you want to reach without a password; repeat for the other machines, and copy it to this machine itself as well
[bigdata@bigdata01 .ssh]$ ssh-copy-id bigdata02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/bigdata/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
bigdata@bigdata02's password: 
Permission denied, please try again.
bigdata@bigdata02's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'bigdata02'"
and check to make sure that only the key(s) you wanted were added.

Verify that the key reached the target machine.

#cd into .ssh under the current user's home directory; if authorized_keys is there, the copy succeeded
[bigdata@bigdata03 ~]$ ll -a
total 16
drwx------. 3 bigdata bigdata  93 Jul 23 21:58 .
drwxr-xr-x. 3 root    root     21 Jul 23 20:56 ..
-rw-r--r--. 1 bigdata bigdata  18 Aug  8  2019 .bash_logout
-rw-r--r--. 1 bigdata bigdata 193 Aug  8  2019 .bash_profile
-rw-r--r--. 1 bigdata bigdata 231 Aug  8  2019 .bashrc
drwx------. 2 bigdata bigdata  29 Jul 23 21:58 .ssh
-rw-------. 1 bigdata bigdata  55 Jul 23 21:11 .Xauthority
[bigdata@bigdata03 ~]$ pwd
/home/bigdata
[bigdata@bigdata03 ~]$ cd .ssh
[bigdata@bigdata03 .ssh]$ ll
total 4
-rw-------. 1 bigdata bigdata 399 Jul 23 21:58 authorized_keys

The authorized_keys file on the target machine matches the first machine's public key:

[bigdata@bigdata01 .ssh]$ cat id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDN7PlnJs+ES/e9uMimk7VlkDA4lI7OCU0OBT/yQbviWMrKDNY8aVP4/8pODNKZN1tcp9qtdRO3F3t+VeszraRp5PD080Ij7W6AuaECniHan0Obhx2diO88uNHNwLb1H+ozehc/UoBxO6+3v0HntDzGhJZGfOzmOGtHkztoWw5YdVYn4b8Q/iTN3h5cboBTlXrex8y7ohMilja5XpTtf344GKga46tfue9IjzIQ6r/C7k8Y6zfKhH8N4mJudMFwuIXYIe1X6OLU4M8bLEJEew8alZIobOKGr0aZCvP9r2eGay7q/aYzlW6lFui1x2hOEDgqeRCdNX2gbpaFh8BXVQu9 bigdata@bigdata01

[bigdata@bigdata03 .ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDN7PlnJs+ES/e9uMimk7VlkDA4lI7OCU0OBT/yQbviWMrKDNY8aVP4/8pODNKZN1tcp9qtdRO3F3t+VeszraRp5PD080Ij7W6AuaECniHan0Obhx2diO88uNHNwLb1H+ozehc/UoBxO6+3v0HntDzGhJZGfOzmOGtHkztoWw5YdVYn4b8Q/iTN3h5cboBTlXrex8y7ohMilja5XpTtf344GKga46tfue9IjzIQ6r/C7k8Y6zfKhH8N4mJudMFwuIXYIe1X6OLU4M8bLEJEew8alZIobOKGr0aZCvP9r2eGay7q/aYzlW6lFui1x2hOEDgqeRCdNX2gbpaFh8BXVQu9 bigdata@bigdata01

Test ssh.

#Successfully logged in to bigdata03
[bigdata@bigdata01 .ssh]$ ssh bigdata03
Last login: Fri Jul 23 22:09:51 2021 from bigdata01
[bigdata@bigdata03 ~]$ 

The first machine is now fully configured. Repeat these steps on the other machines: generate a key pair on each one and copy its public key to every node, so that all cluster nodes can later reach each other without passwords. A compact way to do the copying is sketched below.
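
One possible shorthand, run on each node after its own ssh-keygen:

#push this node's public key to every node in the cluster, including itself
[bigdata@bigdata01 .ssh]$ for host in bigdata01 bigdata02 bigdata03; do ssh-copy-id $host; done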

Description of the files under ~/.ssh:

known_hosts      - public keys of the hosts this machine has connected to over ssh
id_rsa           - the generated private key
id_rsa.pub       - the generated public key
authorized_keys  - public keys that are authorized for passwordless login to this server


18. Write a cluster distribution script

Create the xsync file and make it executable.

[bigdata@bigdata01 ~]$ cd ~
[bigdata@bigdata01 ~]$ touch xsync
[bigdata@bigdata01 ~]$ ll
total 0
-rw-rw-r--. 1 bigdata bigdata 0 Jul 24 19:03 xsync
#Add execute permission
[bigdata@bigdata01 ~]$ chmod u+x xsync
[bigdata@bigdata01 ~]$ ll
total 0
-rwxrw-r--. 1 bigdata bigdata 0 Jul 24 19:03 xsync
[bigdata@bigdata01 ~]$ vim xsync 
#!/bin/bash
#1. Check the number of arguments
if [ $# -lt 1 ]
then
  echo Not Enough Arguments!
  exit;
fi
#2. Loop over every machine in the cluster
for host in bigdata01 bigdata02 bigdata03
do
  echo ====================  $host  ====================
  #3. Loop over all files/directories given and send each one
  for file in $@
  do
    #4. Check that the file exists
    if [ -e $file ]
    then
      #5. Get the parent directory
      pdir=$(cd -P $(dirname $file); pwd)
      #6. Get the file name
      fname=$(basename $file)
      ssh $host "mkdir -p $pdir"
      rsync -av $pdir/$fname $host:$pdir
    else
      echo $file does not exist!
    fi
  done
done
#Distribute a file to the other machines
[bigdata@bigdata01 ~]$ /home/bigdata/xsync xx.txt

 

Cluster plan

            bigdata01        bigdata02          bigdata03
HDFS        NameNode         DataNode           SecondaryNameNode
            DataNode                            DataNode
YARN        NodeManager      ResourceManager    NodeManager
                             NodeManager

19. Edit core-site.xml

[bigdata@bigdata01 hadoop]$ cd $HADOOP_HOME/etc/hadoop
[bigdata@bigdata01 hadoop]$ vim core-site.xml

Add the following:

<configuration>
    <!-- Address of the NameNode in HDFS -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata01:8020</value>
    </property>
    <!-- Directory where Hadoop stores its runtime data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
    <!-- Static user allowed to operate HDFS through the web UI -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>bigdata</value>
    </property>
    <!-- Hosts from which the bigdata (superUser) proxy user may connect - used by Hive -->
    <property>
        <name>hadoop.proxyuser.bigdata.hosts</name>
        <value>*</value>
    </property>
    <!-- Groups the bigdata (superUser) proxy user may impersonate - used by Hive -->
    <property>
        <name>hadoop.proxyuser.bigdata.groups</name>
        <value>*</value>
    </property>
</configuration>

20. Configure hdfs-site.xml

[bigdata@bigdata01 hadoop]$ vi hdfs-site.xml

Add the following:

<configuration>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>bigdata01:9870</value>
    </property>

    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>bigdata03:9868</value>
    </property>

    <!-- HDFS replication factor (3 for this cluster) -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>

21. Configure yarn-site.xml

[bigdata@bigdata01 hadoop]$ vim yarn-site.xml

Add the following:

<configuration>
    <!-- How reducers fetch data: run MapReduce with the shuffle service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Address of the YARN ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>bigdata02</value>
    </property>
    <!-- Environment variables that containers may inherit from the NodeManager; for MapReduce applications, HADOOP_MAPRED_HOME is needed in addition to the defaults -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <!-- Disable the physical and virtual memory limit checks so containers are not killed for exceeding them -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
    <!-- Minimum and maximum memory YARN may allocate to a container -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <!-- Physical memory available to the NodeManager for containers -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
</configuration>

 

22. Configure mapred-site.xml

[bigdata@bigdata01 hadoop]$ vim mapred-site.xml

Add the following:

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

23. Configure the workers file

[bigdata@bigdata01 hadoop]$ vim /opt/module/hadoop-3.1.3/etc/hadoop/workers 
#Remove any existing content, e.g. localhost
#Add the following:
bigdata01
bigdata02
bigdata03

24. Distribute the Hadoop configuration files edited above

[bigdata@bigdata01 hadoop]$ /home/bigdata/xsync /opt/module/hadoop-3.1.3/etc/hadoop/
==================== bigdata01 ====================
sending incremental file list

sent 898 bytes  received 18 bytes  1,832.00 bytes/sec
total size is 108,615  speedup is 118.58
==================== bigdata02 ====================
sending incremental file list
hadoop/
hadoop/capacity-scheduler.xml
hadoop/configuration.xsl
hadoop/container-executor.cfg
hadoop/core-site.xml
hadoop/hadoop-env.cmd
hadoop/hadoop-env.sh
hadoop/hadoop-metrics2.properties
hadoop/hadoop-policy.xml
hadoop/hadoop-user-functions.sh.example
hadoop/hdfs-site.xml
hadoop/httpfs-env.sh
hadoop/httpfs-log4j.properties
hadoop/httpfs-signature.secret
hadoop/httpfs-site.xml
hadoop/kms-acls.xml
hadoop/kms-env.sh
hadoop/kms-log4j.properties
hadoop/kms-site.xml
hadoop/log4j.properties
hadoop/mapred-env.cmd
hadoop/mapred-env.sh
hadoop/mapred-queues.xml.template
hadoop/mapred-site.xml
hadoop/ssl-client.xml.example
hadoop/ssl-server.xml.example
hadoop/user_ec_policies.xml.template
hadoop/workers
hadoop/yarn-env.cmd
hadoop/yarn-env.sh
hadoop/yarn-site.xml
hadoop/yarnservice-log4j.properties
hadoop/shellprofile.d/
hadoop/shellprofile.d/example.sh

sent 6,103 bytes  received 1,638 bytes  15,482.00 bytes/sec
total size is 108,615  speedup is 14.03
==================== bigdata03 ====================
sending incremental file list
hadoop/
hadoop/capacity-scheduler.xml
hadoop/configuration.xsl
hadoop/container-executor.cfg
hadoop/core-site.xml
hadoop/hadoop-env.cmd
hadoop/hadoop-env.sh
hadoop/hadoop-metrics2.properties
hadoop/hadoop-policy.xml
hadoop/hadoop-user-functions.sh.example
hadoop/hdfs-site.xml
hadoop/httpfs-env.sh
hadoop/httpfs-log4j.properties
hadoop/httpfs-signature.secret
hadoop/httpfs-site.xml
hadoop/kms-acls.xml
hadoop/kms-env.sh
hadoop/kms-log4j.properties
hadoop/kms-site.xml
hadoop/log4j.properties
hadoop/mapred-env.cmd
hadoop/mapred-env.sh
hadoop/mapred-queues.xml.template
hadoop/mapred-site.xml
hadoop/ssl-client.xml.example
hadoop/ssl-server.xml.example
hadoop/user_ec_policies.xml.template
hadoop/workers
hadoop/yarn-env.cmd
hadoop/yarn-env.sh
hadoop/yarn-site.xml
hadoop/yarnservice-log4j.properties
hadoop/shellprofile.d/
hadoop/shellprofile.d/example.sh

sent 6,103 bytes  received 1,638 bytes  5,160.67 bytes/sec
total size is 108,615  speedup is 14.03

After the distribution finishes, confirm the files arrived on the other machines.

[bigdata@bigdata02 ~]$ cd /opt/module/hadoop-3.1.3/etc/hadoop/
[bigdata@bigdata02 hadoop]$ cat core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- Address of the NameNode in HDFS -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata01:8020</value>
</property>
<!-- Directory where Hadoop stores its runtime data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
</property>
<!-- Static user allowed to operate HDFS through the web UI -->
<property>
        <name>hadoop.http.staticuser.user</name>
        <value>bigdata</value>
</property>

</configuration>
[bigdata@bigdata02 hadoop]$ 

The configuration has been distributed.

25. Start the cluster

#If this is the first start, format the NameNode first
[bigdata@bigdata01 hadoop-3.1.3]$ hdfs namenode -format
#If you need to format again after fixing an error, delete everything under the data and logs directories first

Output:

[bigdata@bigdata01 hadoop-3.1.3]$ hdfs namenode -format
WARNING: /opt/module/hadoop-3.1.3//logs does not exist. Creating.
2021-07-24 22:11:53,132 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = bigdata01/192.168.1.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.1.3
STARTUP_MSG:   classpath = /opt/module/hadoop-3.1.3//etc/hadoop:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/accessors-smart-1.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/animal-sniffer-annotations-1.17.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/asm-5.0.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/avro-1.7.7.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/checker-qual-2.5.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-cli-1.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-codec-1.11.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-compress-1.18.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-io-2.5.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-lang-2.6.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-lang3-3.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/commons-net-3.6.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/curator-client-2.13.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/curator-framework-2.13.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/curator-recipes-2.13.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/error_prone_annotations-2.2.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/failureaccess-1.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/gson-2.2.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/guava-27.0-jre.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/hadoop-annotations-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/hadoop-auth-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/httpclient-4.5.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/httpcore-4.4.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/j2objc-annotations-1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jackson-core-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jackson-databind-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jersey-core-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jersey-json-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jersey-server-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jersey-servlet-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jettison-1.
1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jsch-0.1.54.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/json-smart-2.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jsp-api-2.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-admin-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-client-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-common-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-core-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-identity-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-server-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerb-util-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerby-config-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerby-util-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/log4j-1.2.17.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/netty-3.10.5.Final.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/paranamer-2.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/re2j-1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/slf4j-api-1.7.25.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/snappy-java-1.0.5.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/stax2-api-3.1.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/token-provider-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/woodstox-core-5.0.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/zookeeper-3.4.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/hadoop-common-3.1.3-tests.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/hadoop-common-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/hadoop-nfs-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/common/hadoop-kms-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs:/opt/module/hadoop-3.1.3//sh
are/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jersey-json-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/hadoop-auth-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-codec-1.11.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/json-smart-2.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/asm-5.0.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/curator-framework-2.13.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/curator-client-2.13.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/guava-27.0-jre.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/failureaccess-1.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/checker-qual-2.5.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/error_prone_annotations-2.2.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/j2objc-annotations-1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/animal-sniffer-annotations-1.17.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-io-2.5.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jersey-core-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jsr311-api-1.1
.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jersey-server-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/hadoop-annotations-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/avro-1.7.7.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/paranamer-2.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/commons-compress-1.18.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/gson-2.2.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jsch-0.1.54.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/curator-recipes-2.13.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-3.1.3-tests.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-client-3.1.3-tests.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-client-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.3-tests.jar:/opt/module/hadoop-3.1.3
//share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.3-tests.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/dnsjava-2.1.7.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/fst-2.50.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/guice-4.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/guice-servlet-4.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/javax.inject-1.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/jersey-guice-1.19.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/objenesis-1.0.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/snakeyaml-1.16.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-api-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-client-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-common-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-registry-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.3.jar:
/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-common-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-router-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-tests-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-services-api-3.1.3.jar:/opt/module/hadoop-3.1.3//share/hadoop/yarn/hadoop-yarn-services-core-3.1.3.jar
STARTUP_MSG:   build = https://gitbox.apache.org/repos/asf/hadoop.git -r ba631c436b806728f8ec2f54ab1e289526c90579; compiled by 'ztang' on 2019-09-12T02:47Z
STARTUP_MSG:   java = 1.8.0_211
************************************************************/
2021-07-24 22:11:53,191 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-07-24 22:11:53,591 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-c9dc1591-0360-4e66-840a-fadaac6c6562
2021-07-24 22:11:55,469 INFO namenode.FSEditLog: Edit logging is async:true
2021-07-24 22:11:55,503 INFO namenode.FSNamesystem: KeyProvider: null
2021-07-24 22:11:55,507 INFO namenode.FSNamesystem: fsLock is fair: true
2021-07-24 22:11:55,507 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-07-24 22:11:55,526 INFO namenode.FSNamesystem: fsOwner             = bigdata (auth:SIMPLE)
2021-07-24 22:11:55,526 INFO namenode.FSNamesystem: supergroup          = supergroup
2021-07-24 22:11:55,526 INFO namenode.FSNamesystem: isPermissionEnabled = true
2021-07-24 22:11:55,527 INFO namenode.FSNamesystem: HA Enabled: false
2021-07-24 22:11:55,720 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-07-24 22:11:55,747 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-07-24 22:11:55,861 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-07-24 22:11:55,866 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-07-24 22:11:55,867 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Jul 24 22:11:55
2021-07-24 22:11:55,869 INFO util.GSet: Computing capacity for map BlocksMap
2021-07-24 22:11:55,869 INFO util.GSet: VM type       = 64-bit
2021-07-24 22:11:55,874 INFO util.GSet: 2.0% max memory 235.9 MB = 4.7 MB
2021-07-24 22:11:55,874 INFO util.GSet: capacity      = 2^19 = 524288 entries
2021-07-24 22:11:55,897 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2021-07-24 22:11:55,986 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2021-07-24 22:11:55,986 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2021-07-24 22:11:55,986 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2021-07-24 22:11:55,986 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-07-24 22:11:55,987 INFO blockmanagement.BlockManager: defaultReplication         = 3
2021-07-24 22:11:55,987 INFO blockmanagement.BlockManager: maxReplication             = 512
2021-07-24 22:11:55,987 INFO blockmanagement.BlockManager: minReplication             = 1
2021-07-24 22:11:55,987 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2021-07-24 22:11:55,987 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2021-07-24 22:11:55,987 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2021-07-24 22:11:55,987 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2021-07-24 22:11:56,115 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2021-07-24 22:11:56,154 INFO util.GSet: Computing capacity for map INodeMap
2021-07-24 22:11:56,154 INFO util.GSet: VM type       = 64-bit
2021-07-24 22:11:56,154 INFO util.GSet: 1.0% max memory 235.9 MB = 2.4 MB
2021-07-24 22:11:56,154 INFO util.GSet: capacity      = 2^18 = 262144 entries
2021-07-24 22:11:56,154 INFO namenode.FSDirectory: ACLs enabled? false
2021-07-24 22:11:56,154 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-07-24 22:11:56,154 INFO namenode.FSDirectory: XAttrs enabled? true
2021-07-24 22:11:56,155 INFO namenode.NameNode: Caching file names occurring more than 10 times
2021-07-24 22:11:56,195 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2021-07-24 22:11:56,197 INFO snapshot.SnapshotManager: SkipList is disabled
2021-07-24 22:11:56,217 INFO util.GSet: Computing capacity for map cachedBlocks
2021-07-24 22:11:56,217 INFO util.GSet: VM type       = 64-bit
2021-07-24 22:11:56,217 INFO util.GSet: 0.25% max memory 235.9 MB = 603.8 KB
2021-07-24 22:11:56,217 INFO util.GSet: capacity      = 2^16 = 65536 entries
2021-07-24 22:11:56,234 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-07-24 22:11:56,234 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-07-24 22:11:56,234 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-07-24 22:11:56,238 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2021-07-24 22:11:56,238 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-07-24 22:11:56,240 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2021-07-24 22:11:56,240 INFO util.GSet: VM type       = 64-bit
2021-07-24 22:11:56,240 INFO util.GSet: 0.029999999329447746% max memory 235.9 MB = 72.5 KB
2021-07-24 22:11:56,240 INFO util.GSet: capacity      = 2^13 = 8192 entries
2021-07-24 22:11:56,299 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1046687142-192.168.1.100-1627135916279
2021-07-24 22:11:56,322 INFO common.Storage: Storage directory /opt/module/hadoop-3.1.3/data/dfs/name has been successfully formatted.
2021-07-24 22:11:56,400 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/module/hadoop-3.1.3/data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2021-07-24 22:11:56,702 INFO namenode.FSImageFormatProtobuf: Image file /opt/module/hadoop-3.1.3/data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 394 bytes saved in 0 seconds .
2021-07-24 22:11:56,738 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2021-07-24 22:11:56,754 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid = 0 when meet shutdown.
2021-07-24 22:11:56,755 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at bigdata01/192.168.1.100
************************************************************/

#Looking at the directory tree after formatting, data and logs have appeared
[bigdata@bigdata01 ~]$ cd /opt/module/hadoop-3.1.3/
[bigdata@bigdata01 hadoop-3.1.3]$ ll
total 176
drwxr-xr-x. 2 bigdata bigdata    183 Sep 12  2019 bin
drwxrwxr-x. 3 bigdata bigdata     17 Jul 24 22:11 data
drwxr-xr-x. 3 bigdata bigdata     20 Sep 12  2019 etc
drwxr-xr-x. 2 bigdata bigdata    106 Sep 12  2019 include
drwxr-xr-x. 3 bigdata bigdata     20 Sep 12  2019 lib
drwxr-xr-x. 4 bigdata bigdata    288 Sep 12  2019 libexec
-rw-rw-r--. 1 bigdata bigdata 147145 Sep  4  2019 LICENSE.txt
drwxrwxr-x. 2 bigdata bigdata     40 Jul 24 22:11 logs
-rw-rw-r--. 1 bigdata bigdata  21867 Sep  4  2019 NOTICE.txt
-rw-rw-r--. 1 bigdata bigdata   1366 Sep  4  2019 README.txt
drwxr-xr-x. 3 bigdata bigdata   4096 Sep 12  2019 sbin
drwxr-xr-x. 4 bigdata bigdata     31 Sep 12  2019 share

 

#Start HDFS
[bigdata@bigdata01 hadoop-3.1.3]$ start-dfs.sh 
Starting namenodes on [bigdata01]
Starting datanodes
bigdata03: WARNING: /opt/module/hadoop-3.1.3//logs does not exist. Creating.
bigdata02: WARNING: /opt/module/hadoop-3.1.3//logs does not exist. Creating.
Starting secondary namenodes [bigdata03]
#The warnings just mean some files under logs do not exist yet; they are created automatically.

 

#Start YARN on the node where the ResourceManager is configured (bigdata02)
[bigdata@bigdata02 ~]$ start-yarn.sh 
Starting resourcemanager
Starting nodemanagers
#Verify the cluster processes: run jps on each node to list its Java processes
#On bigdata01
[bigdata@bigdata01 hadoop-3.1.3]$ jps
10368 DataNode
10712 NodeManager
10253 NameNode
10815 Jps
#On bigdata02
[bigdata@bigdata02 ~]$ jps
9762 DataNode
9959 ResourceManager
10071 NodeManager
10409 Jps
#On bigdata03
[bigdata@bigdata03 ~]$ jps
9856 DataNode
10274 Jps
9955 SecondaryNameNode
10165 NodeManager

 

26. Troubleshooting

#Check the log files under the logs directory; start with the NameNode log
[bigdata@bigdata01 logs]$ cd $HADOOP_HOME/logs
[bigdata@bigdata01 logs]$ ll
total 124
-rw-rw-r--. 1 bigdata bigdata 32231 Jul 25 10:43 hadoop-bigdata-datanode-bigdata01.log
-rw-rw-r--. 1 bigdata bigdata   690 Jul 25 10:27 hadoop-bigdata-datanode-bigdata01.out
-rw-rw-r--. 1 bigdata bigdata 42593 Jul 25 10:57 hadoop-bigdata-namenode-bigdata01.log
-rw-rw-r--. 1 bigdata bigdata   690 Jul 25 10:27 hadoop-bigdata-namenode-bigdata01.out
-rw-rw-r--. 1 bigdata bigdata 35772 Jul 25 11:19 hadoop-bigdata-nodemanager-bigdata01.log
-rw-rw-r--. 1 bigdata bigdata  2206 Jul 25 11:10 hadoop-bigdata-nodemanager-bigdata01.out
-rw-rw-r--. 1 bigdata bigdata     0 Jul 24 22:11 SecurityAuth-bigdata.audit
drwxr-xr-x. 2 bigdata bigdata     6 Jul 25 11:19 userlogs
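
To read a specific log, tail is usually enough; for example the end of the NameNode log listed above (substitute whichever file you need):

#show the last 50 lines of the NameNode log; tail -f follows it live
[bigdata@bigdata01 logs]$ tail -n 50 hadoop-bigdata-namenode-bigdata01.log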

 

27. jpscall script

[bigdata@bigdata01 bin]$ cd $HADOOP_HOME/bin
[bigdata@bigdata01 bin]$ touch jpscall.sh
[bigdata@bigdata01 bin]$ ll
total 996
-rwxr-xr-x. 1 bigdata bigdata 441936 Sep 12  2019 container-executor
-rwxr-xr-x. 1 bigdata bigdata   8707 Sep 12  2019 hadoop
-rwxr-xr-x. 1 bigdata bigdata  11265 Sep 12  2019 hadoop.cmd
-rwxr-xr-x. 1 bigdata bigdata  11026 Sep 12  2019 hdfs
-rwxr-xr-x. 1 bigdata bigdata   8081 Sep 12  2019 hdfs.cmd
-rw-rw-r--. 1 bigdata bigdata      0 Jul 25 12:48 jpscall.sh
-rwxr-xr-x. 1 bigdata bigdata   6237 Sep 12  2019 mapred
-rwxr-xr-x. 1 bigdata bigdata   6311 Sep 12  2019 mapred.cmd
-rwxr-xr-x. 1 bigdata bigdata 483728 Sep 12  2019 test-container-executor
-rwxr-xr-x. 1 bigdata bigdata  11888 Sep 12  2019 yarn
-rwxr-xr-x. 1 bigdata bigdata  12840 Sep 12  2019 yarn.cmd
#The newly created jpscall.sh is not executable yet; add the permission
[bigdata@bigdata01 bin]$ chmod u+x jpscall.sh
[bigdata@bigdata01 bin]$ ll
total 996
-rwxr-xr-x. 1 bigdata bigdata 441936 Sep 12  2019 container-executor
-rwxr-xr-x. 1 bigdata bigdata   8707 Sep 12  2019 hadoop
-rwxr-xr-x. 1 bigdata bigdata  11265 Sep 12  2019 hadoop.cmd
-rwxr-xr-x. 1 bigdata bigdata  11026 Sep 12  2019 hdfs
-rwxr-xr-x. 1 bigdata bigdata   8081 Sep 12  2019 hdfs.cmd
-rwxrw-r--. 1 bigdata bigdata      0 Jul 25 12:48 jpscall.sh
-rwxr-xr-x. 1 bigdata bigdata   6237 Sep 12  2019 mapred
-rwxr-xr-x. 1 bigdata bigdata   6311 Sep 12  2019 mapred.cmd
-rwxr-xr-x. 1 bigdata bigdata 483728 Sep 12  2019 test-container-executor
-rwxr-xr-x. 1 bigdata bigdata  11888 Sep 12  2019 yarn
-rwxr-xr-x. 1 bigdata bigdata  12840 Sep 12  2019 yarn.cmd

 

#Add the script body; it runs jps on every machine in the cluster
[bigdata@bigdata01 bin]$ vim jpscall.sh
#Add the following:

#!/bin/bash
for host in bigdata01 bigdata02 bigdata03
do
        echo "===================$host====================="    
        ssh $host jps
done

 

28. mycluster script

#Create a script that starts and stops the cluster's HDFS and YARN services in one go
[bigdata@bigdata01 bin]$ touch mycluster.sh
[bigdata@bigdata01 bin]$ chmod u+x mycluster.sh 

Add the following:

#!/bin/bash

#Require exactly one argument: start or stop
if [ $# -ne 1 ]
        then
        echo "Usage: $0 start|stop"
        exit
fi
case $1 in
"start")
        #HDFS is started from bigdata01, YARN from bigdata02 (the ResourceManager node)
        ssh bigdata01 $HADOOP_HOME/sbin/start-dfs.sh
        ssh bigdata02 $HADOOP_HOME/sbin/start-yarn.sh
        ;;
"stop")
        ssh bigdata01 $HADOOP_HOME/sbin/stop-dfs.sh
        ssh bigdata02 $HADOOP_HOME/sbin/stop-yarn.sh
        ;;
*)
        echo "Usage: $0 start|stop"
        ;;
esac

Run mycluster.sh:

[bigdata@bigdata01 bin]$ ./mycluster.sh stop
Stopping namenodes on [bigdata01]
Stopping datanodes
Stopping secondary namenodes [bigdata03]
Stopping nodemanagers
Stopping resourcemanager
[bigdata@bigdata01 bin]$ jpscall.sh 
===================bigdata01=====================
2534 Jps
===================bigdata02=====================
2314 Jps
===================bigdata03=====================
1913 Jps
[bigdata@bigdata01 bin]$ mycluster.sh start
Starting namenodes on [bigdata01]
Starting datanodes
Starting secondary namenodes [bigdata03]
Starting resourcemanager
Starting nodemanagers
[bigdata@bigdata01 bin]$ jpscall.sh 
===================bigdata01=====================
2834 DataNode
3128 NodeManager
2713 NameNode
3247 Jps
===================bigdata02=====================
2672 NodeManager
2555 ResourceManager
3021 Jps
2382 DataNode
===================bigdata03=====================
2087 SecondaryNameNode
2168 NodeManager
1981 DataNode
2285 Jps

29. Check the web UIs

NameNode:http://bigdata01:9870

ResourceManager:http://bigdata02:8088

SecondaryNameNode:http://bigdata03:9868

The SecondaryNameNode page is blank by default and needs a small fix.

Steps:

[bigdata@bigdata01 bin]$ cd $HADOOP_HOME/share/hadoop/hdfs/webapps/static
[bigdata@bigdata01 static]$ vim dfs-dust.js
#With the file open in vim, type :set nu to show line numbers
#Line 61 currently reads:
return moment(Number(v)).format('ddd MMM DD HH:mm:ss ZZ YYYY');
#Change it to:
return new Date(Number(v)).toLocaleString();
#Distribute the modified js file
[bigdata@bigdata01 static]$ /home/bigdata/xsync ./dfs-dust.js
==================== bigdata01 ====================
sending incremental file list

sent 67 bytes  received 12 bytes  158.00 bytes/sec
total size is 3,445  speedup is 43.61
==================== bigdata02 ====================
sending incremental file list
dfs-dust.js

sent 811 bytes  received 65 bytes  1,752.00 bytes/sec
total size is 3,445  speedup is 3.93
==================== bigdata03 ====================
sending incremental file list
dfs-dust.js

sent 811 bytes  received 65 bytes  584.00 bytes/sec
total size is 3,445  speedup is 3.93

Clear the browser cache and reload the SecondaryNameNode page at http://bigdata03:9868/status.html; it now renders correctly.


30. Configure the history server

To be able to look at the history of finished jobs, configure a history server as follows:

#Edit mapred-site.xml
[bigdata@bigdata01 ~]$ cd $HADOOP_HOME/etc/hadoop 
[bigdata@bigdata01 hadoop]$ vim mapred-site.xml 
#Add the following inside the <configuration></configuration> tags
<!-- History server RPC address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>bigdata01:10020</value>
</property>

<!-- History server web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>bigdata01:19888</value>
</property>

 

#Distribute the modified configuration
[bigdata@bigdata01 hadoop]$ /home/bigdata/xsync ./
==================== bigdata01 ====================
sending incremental file list

sent 864 bytes  received 13 bytes  1,754.00 bytes/sec
total size is 108,903  speedup is 124.18
==================== bigdata02 ====================
sending incremental file list
./
mapred-site.xml

sent 1,407 bytes  received 51 bytes  972.00 bytes/sec
total size is 108,903  speedup is 74.69
==================== bigdata03 ====================
sending incremental file list
./
mapred-site.xml

sent 1,407 bytes  received 51 bytes  972.00 bytes/sec
total size is 108,903  speedup is 74.69

 

#Start the history server
[bigdata@bigdata01 hadoop]$ mapred --daemon start historyserver
#Check with jps that JobHistoryServer is running
[bigdata@bigdata01 hadoop]$ jps
2834 DataNode
3911 Jps
3128 NodeManager
2713 NameNode
3854 JobHistoryServer

Open the web UI at http://bigdata01:19888/jobhistory.


31. Configure log aggregation

Log aggregation means that once an application has finished running, its logs are uploaded to HDFS.

The benefit is that you can conveniently inspect the details of a run, which makes development and debugging easier.

Note: enabling log aggregation requires restarting the NodeManagers, the ResourceManager and the JobHistoryServer.

[bigdata@bigdata01 hadoop]$ vim yarn-site.xml
#Add the following inside the <configuration></configuration> tags:
<!-- Enable log aggregation -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<!-- URL of the log server -->
<property>
    <name>yarn.log.server.url</name>
    <value>http://bigdata01:19888/jobhistory/logs</value>
</property>
<!-- Keep aggregated logs for 7 days -->
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>

 

#Distribute the modified yarn-site.xml
[bigdata@bigdata01 hadoop]$ /home/bigdata/xsync ./yarn-site.xml 
==================== bigdata01 ====================
sending incremental file list

sent 69 bytes  received 12 bytes  162.00 bytes/sec
total size is 2,455  speedup is 30.31
==================== bigdata02 ====================
sending incremental file list
yarn-site.xml

sent 1,179 bytes  received 53 bytes  821.33 bytes/sec
total size is 2,455  speedup is 1.99
==================== bigdata03 ====================
sending incremental file list
yarn-site.xml

sent 1,179 bytes  received 53 bytes  821.33 bytes/sec
total size is 2,455  speedup is 1.99

 

#Restart the services
[bigdata@bigdata01 hadoop]$ mycluster stop
-bash: mycluster: command not found
[bigdata@bigdata01 hadoop]$ mycluster.sh stop
Stopping namenodes on [bigdata01]
Stopping datanodes
Stopping secondary namenodes [bigdata03]
Stopping nodemanagers
Stopping resourcemanager
[bigdata@bigdata01 hadoop]$ jps
4507 Jps
3854 JobHistoryServer
[bigdata@bigdata01 hadoop]$ mapred --daemon stop historyserver
#Start everything again
[bigdata@bigdata01 hadoop]$ mycluster.sh start
Starting namenodes on [bigdata01]
Starting datanodes
Starting secondary namenodes [bigdata03]
Starting resourcemanager
Starting nodemanagers
[bigdata@bigdata01 hadoop]$ mapred --daemon start historyserver
[bigdata@bigdata01 hadoop]$ jps
4720 NameNode
5138 NodeManager
4851 DataNode
5347 Jps
5289 JobHistoryServer

 

32. Cluster time synchronization. Make bigdata01 the time server and have the other machines sync to it on a schedule. Enabling time sync is not really necessary while learning; this section is included for reference only.

#On bigdata01, stop the ntpd service and disable its autostart before reconfiguring it
[bigdata@bigdata01 hadoop]$ sudo systemctl stop ntpd
[bigdata@bigdata01 hadoop]$ sudo systemctl disable ntpd

Edit ntp.conf.

Allow every machine on the 192.168.1 network segment to query and synchronize time from this server.

[bigdata@bigdata01 hadoop]$ sudo vim /etc/ntp.conf 

Uncomment the following line:

#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

so that it reads:

restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

Stop using Internet time servers.

Comment out the following lines:

server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

so that they read:

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

Let this node keep serving its local clock time to the other cluster nodes even if it loses its network connection.

#Below the "#server 3.centos.pool.ntp.org iburst" line you just commented out, add:
server 127.127.1.0
fudge 127.127.1.0 stratum 10

Edit the ntpd file:

[bigdata@bigdata01 hadoop]$ sudo vim /etc/sysconfig/ntpd

Add the following:

#Keep the hardware clock in sync with the system clock
SYNC_HWCLOCK=yes

 

#Restart the ntpd service
[bigdata@bigdata01 hadoop]$ sudo systemctl start ntpd
#Enable ntpd at boot
[bigdata@bigdata01 hadoop]$ sudo systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.

Configure the scheduled sync job on the other machines, bigdata02 and bigdata03 in turn.

#Set up the scheduled synchronization
[bigdata@bigdata03 ~]$ crontab -e
Add a crontab entry that synchronizes with the time server every 10 minutes:
*/10 * * * * sudo ntpdate bigdata01

 


That wraps up this walkthrough of installing Hadoop on CentOS 7; hopefully it serves as a useful reference.