1. Kafka overview 2. Hands-on Kafka installation. Kafka's metadata is managed by ZooKeeper. Kafka is written in Scala, so Scala and Java must be installed first.
Copy slf4j-nop-1.7.6.jar into Kafka's libs directory (slf4j is used when starting the broker under nohup). Then configure every machine in the cluster:
1. Edit .bashrc: export KAFKA_HOME=/usr/local/kafka_2.10-0.9.0.10-0 and add ${KAFKA_HOME}/bin to PATH (steps 1 and 2 are sketched concretely after step 3 below).
2. Edit config/server.properties under the Kafka directory: broker.id=0 (must differ per machine: set it to 1 on Worker1 and 2 on Worker2) and zookeeper.connect=Master:2181,Worker1:2181,Worker2:2181 (2181 is the default ZooKeeper client port).
3. Start Kafka on every machine (from the bin directory): nohup ./kafka-server-start.sh ../config/server.properties &
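A minimal per-node sketch of steps 1 and 2, using Worker1 as an example (the install path follows the one given above; adjust it to your actual directory):

# .bashrc on Worker1
export KAFKA_HOME=/usr/local/kafka_2.10-0.9.0.10-0
export PATH=$PATH:${KAFKA_HOME}/bin

# config/server.properties on Worker1
broker.id=1                                              # 0 on Master, 2 on Worker2
zookeeper.connect=Master:2181,Worker1:2181,Worker2:2181  # ZooKeeper ensemble

The broker listens on port 9092 by default, which the producer command below relies on. After step 3, the broker should show up as a Kafka process in jps output, and its startup log goes to nohup.out in the directory where it was launched.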
Create a topic: ./kafka-topics.sh --create --zookeeper Master:2181,Worker1:2181,Worker2:2181 --replication-factor 3 --partitions 1 --topic HelloKafka
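To confirm the topic was created, the same script's --list option can be used (same ZooKeeper connect string as above):
./kafka-topics.sh --list --zookeeper Master:2181,Worker1:2181,Worker2:2181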
Describe the topic: ./kafka-topics.sh --describe --zookeeper Master:2181,Worker1:2181,Worker2:2181 --topic HelloKafka
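The describe output should look roughly like the following (illustrative only; the actual Leader and Isr assignment depends on which brokers are elected):
Topic:HelloKafka  PartitionCount:1  ReplicationFactor:3  Configs:
    Topic: HelloKafka  Partition: 0  Leader: 0  Replicas: 0,1,2  Isr: 0,1,2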
Start a console producer: ./kafka-console-producer.sh --broker-list Master:9092,Worker1:9092,Worker2:9092 --topic HelloKafka
At the prompt, type messages such as: This is DT_Spark! I'm Rocky! Life is short, you need Spark!
Consumer: ./kafka-console-consumer.sh --zookeeper Master:2181,Worker1:2181,Worker2:2181 --from-beginning --topic HelloKafka
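Besides the console tools, messages can also be produced from code. Below is a minimal sketch in Scala using the Kafka producer client (the object name is illustrative; it assumes the kafka-clients jar matching the installed broker version is on the classpath and that the brokers listen on the default port 9092):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object HelloKafkaProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Brokers from the console-producer command above; adjust to your cluster.
    props.put("bootstrap.servers", "Master:9092,Worker1:9092,Worker2:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Send the same sample messages that were typed into the console producer.
      Seq("This is DT_Spark!", "I'm Rocky!", "Life is short, you need Spark!")
        .foreach(msg => producer.send(new ProducerRecord[String, String]("HelloKafka", msg)))
    } finally {
      producer.close()  // flushes any buffered records before exiting
    }
  }
}

A console consumer started as above with --from-beginning should then print these messages as well.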