Two useful background links:
https://juejin.im/post/6844903495670169607
https://www.zhihu.com/question/53331259
Spring-Kafka tutorials:
https://www.jianshu.com/c/0c9d83802b0c
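As a starting point for the Spring-Kafka tutorials linked above, a minimal application.yml sketch for Spring Boot's Kafka autoconfiguration; the broker address matches the setup below, and the group id `demo-group` is a hypothetical placeholder:

```yaml
spring:
  kafka:
    bootstrap-servers: 192.168.56.122:9092   # one of the brokers started below
    consumer:
      group-id: demo-group                   # hypothetical consumer group name
      auto-offset-reset: earliest            # read from the start when no offset exists
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
```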
Setting up Kafka with Docker
1. Pull the ZooKeeper image:
docker pull wurstmeister/zookeeper
2. Pull the Kafka image:
docker pull wurstmeister/kafka
3. Create and start the ZooKeeper container from the image:
docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper
4. Create and start the first Kafka container. The options, explained:
docker run -d --name kafka \            # container name
  -p 9092:9092 \                        # port mapping
  -e KAFKA_BROKER_ID=0 \                # each broker in a cluster identifies itself by a unique BROKER_ID
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.56.122:2181/kafka \          # ZooKeeper address (plus optional chroot path) that manages Kafka
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.56.122:9092 \ # address and port the broker registers with ZooKeeper for clients
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \                   # interface and port the broker listens on inside the container
  wurstmeister/kafka
As a single line (replace 47.98.128.88 with your host IP):
docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=47.98.128.88:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://47.98.128.88:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka
An equivalent variant that links to the ZooKeeper container and also mounts /etc/localtime so the container clock stays in sync with the host:
docker run -d --name kafka --publish 9092:9092 --link zookeeper --env KAFKA_ZOOKEEPER_CONNECT=47.98.128.88:2181 --env KAFKA_ADVERTISED_HOST_NAME=47.98.128.88 --env KAFKA_ADVERTISED_PORT=9092 --volume /etc/localtime:/etc/localtime wurstmeister/kafka:latest
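The two listener settings above share the form protocol://host:port: KAFKA_LISTENERS is the socket the broker binds inside the container, while KAFKA_ADVERTISED_LISTENERS is the address it publishes to ZooKeeper for clients to connect to. A quick illustrative sketch of the format (string parsing only, not part of any Kafka API):

```python
from urllib.parse import urlparse

def parse_listener(listener: str) -> tuple:
    """Split a Kafka listener string such as PLAINTEXT://0.0.0.0:9092
    into (security protocol, host, port)."""
    parsed = urlparse(listener)
    return parsed.scheme.upper(), parsed.hostname, parsed.port

# The bind address may be 0.0.0.0, but the advertised address must be
# one that clients can actually reach (e.g. the host's IP).
print(parse_listener("PLAINTEXT://0.0.0.0:9092"))        # ('PLAINTEXT', '0.0.0.0', 9092)
print(parse_listener("PLAINTEXT://192.168.56.122:9092"))
```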
5. Create and start the second Kafka container:
docker run -d --name kafka2 -p 9093:9093 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=<host-ip>:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<host-ip>:9093 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093 -t wurstmeister/kafka
(Note: create as many brokers as you need following this pattern; only the port and KAFKA_BROKER_ID must change.)
If the brokers are spread across multiple servers, Kafka on each of the other servers likewise points KAFKA_ZOOKEEPER_CONNECT at the host IP of the server running the ZooKeeper service.
Basic Kafka operations:
Create a topic (note: the replication factor cannot exceed the number of live brokers, so with the two brokers above use at most 2):
kafka-topics.sh --create --zookeeper ip:2181 --replication-factor 3 --partitions 4 --topic test1
List topics:
kafka-topics.sh --list --zookeeper ip:2181
Describe a specific topic:
kafka-topics.sh --zookeeper ip:2181 --topic test1 --describe
Console producer:
kafka-console-producer.sh --broker-list ip:9092 --topic test1
Console consumer:
kafka-console-consumer.sh --bootstrap-server ip:9092 --topic test1 --from-beginning
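The --partitions 4 in the create command matters for keyed messages: Kafka's default partitioner hashes the message key and takes it modulo the partition count, so the same key always lands on the same partition, preserving per-key ordering. A simplified sketch of that idea, using md5 purely for illustration (Kafka itself uses murmur2):

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition index (illustration only;
    Kafka's real partitioner uses murmur2, not md5)."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key -> same partition, so all messages for "order-42"
# stay ordered relative to each other.
p1 = partition_for(b"order-42", 4)
p2 = partition_for(b"order-42", 4)
print(p1 == p2)  # True
```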
Alternative: docker-compose (note: this method requires the docker-compose command to be installed, otherwise the file will not be recognized)
3. In a directory of your choice, create a docker-compose.yml file
with the following content (change the advertised host IPs, e.g. 192.168.0.101, to your own host IP):
version: '2'
services:
  zoo1:
    image: wurstmeister/zookeeper
    restart: unless-stopped
    hostname: zoo1
    ports:
      - "2181:2181"
    container_name: zookeeper

  # kafka version: 1.1.0
  # scala version: 2.12
  kafka1:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 47.98.128.88
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CREATE_TOPICS: "stream-in:2:1,stream-out:2:1"
    depends_on:
      - zoo1
    container_name: kafka1

  kafka2:
    image: wurstmeister/kafka
    ports:
      - "9093:9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.101
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zoo1
    container_name: kafka2
4. Start the stack with docker-compose:
docker-compose up -d