PS: Please have your Flume and Kafka environments ready in advance. Since this is an integration tutorial, we simply build on what we set up before: we take the tutorial Flume入门案例之NetCat-Souces and change its Sink to Kafka, while the Kafka topic used here is the one from the tutorial Flume+Kafka+Storm实战:一、Kafka与Storm整合.
0x01 Flume Preparation
a. Make a copy of the example config
cd ~/bigdata/apache-flume-1.8.0-bin
cp conf/example.conf conf/kafka.conf
b. Then modify the Sink (see the official Kafka Sink documentation for reference)
vi conf/kafka.conf
c. Complete configuration file
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = word-count-input
a1.sinks.k1.kafka.bootstrap.servers = master:9092
a1.sinks.k1.kafka.flumeBatchSize = 5

a1.channels.c1.type = memory

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
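Side note: the Flume 1.8 Kafka Sink documentation lists topic as a deprecated alias for the kafka.-prefixed property name. If you prefer the current form, the equivalent topic line would be:

a1.sinks.k1.kafka.topic = word-count-input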
0x02 Kafka Preparation
a. Start Kafka and ZK (run on all three nodes)
Start ZK: zkServer.sh start
Start Kafka: nohup ~/bigdata/kafka_2.11-1.0.0/bin/kafka-server-start.sh ~/bigdata/kafka_2.11-1.0.0/config/server.properties >~/bigdata/kafka_2.11-1.0.0/logs/server.log 2>&1 &
The current processes now look like this (for the call_all.sh script, see the post 大数据常用管理集群脚本集合):
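If you don't use the call_all.sh script, a quick sanity check is running jps on each node; with ZooKeeper and Kafka up you should see their daemons listed, roughly like this (the PIDs below are made up):

jps
2481 QuorumPeerMain
2870 Kafka
3104 Jps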
b. Create the topic word-count-input (skip if it already exists)
List existing topics (ours was created in the previous tutorial): ~/bigdata/kafka_2.11-1.0.0/bin/kafka-topics.sh --list --zookeeper master:2181
If it has not been created yet, create it: ~/bigdata/kafka_2.11-1.0.0/bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 1 --partitions 1 --topic word-count-input
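To double-check the partition and replica assignment, you can also describe the topic with the standard kafka-topics.sh option:

~/bigdata/kafka_2.11-1.0.0/bin/kafka-topics.sh --describe --zookeeper master:2181 --topic word-count-input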
0x03 Start Flume
a. Start the agent on master:
bin/flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/kafka.conf --name a1
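If you want to watch the agent's log output in the terminal while testing, append the console-logger option (the same one used in the reference launch scripts further below):

bin/flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/kafka.conf --name a1 -Dflume.root.logger=INFO,console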
0x04 Start a Kafka Consumer
a. First make sure Kafka and ZK are running (skip if already started; run on all three nodes)
Start ZK: zkServer.sh start
Start Kafka: nohup ~/bigdata/kafka_2.11-1.0.0/bin/kafka-server-start.sh ~/bigdata/kafka_2.11-1.0.0/config/server.properties >~/bigdata/kafka_2.11-1.0.0/logs/server.log 2>&1 &
b. Start a Kafka console consumer on master:
kafka-console-consumer.sh --bootstrap-server master:9092 --topic word-count-input
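If you also want to see messages produced before the consumer started, add the standard --from-beginning flag:

kafka-console-consumer.sh --bootstrap-server master:9092 --topic word-count-input --from-beginning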
0x05 Test
a. The test procedure is the same as in the tutorial Flume入门案例之NetCat-Souces: whatever we type is received on the Kafka consumer side.
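For example, since kafka.conf binds the NetCat source to localhost:44444, a minimal test from master looks like this (the NetCat source acknowledges each accepted line with OK, and each line should then appear in the console consumer):

telnet localhost 44444
hello kafka
OK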
First test that events can be written to the console, then hook up Spark Streaming.
Option 1: avro-memory-logger
Option 2: avro-memory-kafka

Reference for Option 1:

#streaming_logger.conf
agent1.sources=avro-source
agent1.channels=logger-channel
agent1.sinks=log-sink

#define source
agent1.sources.avro-source.type=avro
agent1.sources.avro-source.bind=localhost
agent1.sources.avro-source.port=41414

#define channel
agent1.channels.logger-channel.type=memory

#define sink
agent1.sinks.log-sink.type=logger

agent1.sources.avro-source.channels=logger-channel
agent1.sinks.log-sink.channel=logger-channel

Launch script:
flume-ng agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/streaming_logger.conf \
--name agent1 \
-Dflume.root.logger=INFO,console

Reference for Option 2:

#streaming_kafka.conf
agent1.sources=avro-source
agent1.channels=logger-channel
agent1.sinks=kafka-sink

#define source
agent1.sources.avro-source.type=avro
agent1.sources.avro-source.bind=0.0.0.0
agent1.sources.avro-source.port=41414

#define channel
agent1.channels.logger-channel.type=memory

#define sink
agent1.sinks.kafka-sink.type=org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.topic = streaming_topic
agent1.sinks.kafka-sink.brokerList = localhost:9092
agent1.sinks.kafka-sink.requiredAcks = 1
agent1.sinks.kafka-sink.batchSize = 20

agent1.sources.avro-source.channels=logger-channel
agent1.sinks.kafka-sink.channel=logger-channel

Launch script:
flume-ng agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/streaming_kafka.conf \
--name agent1 \
-Dflume.root.logger=INFO,console
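To feed test events into the avro source from either reference config, Flume's built-in avro client can replay a local file; a minimal sketch (/tmp/test.log is just a placeholder for any local file):

flume-ng avro-client \
--conf $FLUME_HOME/conf \
-H localhost \
-p 41414 \
-F /tmp/test.log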