1. First, start the virtual machine and run the following two commands to start Hadoop:
start-dfs.sh
start-yarn.sh
2. Go to the Hive directory and start the metastore:
hive --service metastore
3. To access Hive over JDBC, start hiveserver2:
bin/hive --service hiveserver2
4. Start the beeline client (this may take a while):
bin/beeline -u jdbc:hive2://node01:10000 -n hadoop
5. Copy the local file to any path on the Hive host (using sale.csv as an example):
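This step lists no command in the notes. A minimal sketch of the copy; the sample rows are made up for illustration, and /tmp/kkb stands in for the /kkb path used in the next step so the sketch runs without root:

```shell
# Sketch only: paths and sample data are assumptions, not from the original notes.
mkdir -p /tmp/kkb                                  # example target directory (the notes use /kkb)
printf '1,apple,3.50\n2,pear,2.00\n' > sale.csv    # tiny sample CSV for illustration
cp sale.csv /tmp/kkb/sale.csv                      # copy the local file into place
ls /tmp/kkb                                        # should now list sale.csv
```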
6. Import into Hive:
# load a local txt/csv file into a Hive table
load data local inpath '/kkb/sale.csv' into table sale;
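The load command above assumes the sale table already exists in Hive and is comma-delimited (which also matches the `--input-fields-terminated-by ","` used in step 8). A minimal DDL sketch; the column names and types are assumptions, not from the original notes:

```sql
-- sketch: adjust columns to match the actual sale.csv layout
create table if not exists sale (
  id      int,
  product string,
  price   double
)
row format delimited
fields terminated by ',';
```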
7. View the first 10 rows of the imported table:
select * from sale limit 10;
8. Exporting from Hive to MySQL requires Sqoop (run the following command from the Sqoop directory; Sqoop installation is covered separately):
# export the Hive table to MySQL via Sqoop
bin/sqoop export \
--connect jdbc:mysql://node01:3306/sale \
--username root \
--password root \
--table sale \
--num-mappers 1 \
--export-dir /user/hive/warehouse/sale \
--input-fields-terminated-by ","
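Note that `sqoop export` writes into an existing MySQL table; it does not create one. The target database and table must already exist on node01 before the export runs. A minimal sketch, with column names assumed to mirror the Hive table (they are not specified in the original notes):

```sql
-- sketch: run in the MySQL client on node01; columns are assumptions
create database if not exists sale;
use sale;
create table if not exists sale (
  id      int,
  product varchar(100),
  price   double
);
```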
Check the table in MySQL; the import is complete!