Kafka Access Control

Background: Our company recently moved to cloud servers, so Kafka has to be secured. Fortunately, Kafka 0.9 introduced access control. Chinese-language material on the topic is scarce, so this article documents Kafka access control based on Kafka 0.9. (Note: Flume 1.7 or later is required for Kafka SSL authentication.)

Now let's get started on the Kafka authentication journey!

Introduction: Kafka access control falls into three categories: 1. SSL-based (not supported in CDH 5.8); 2. Kerberos-based (usually deployed via CDH, not covered here); 3. ACL-based (not supported by the Kafka 2.x parcel in CDH 5.8).

This article uses the Apache distribution and covers options 1 and 3, which are also the most widely used.

Note: throughout this article, the & symbol marks a comment line.

1. Preparation: component layout

kafka:     centos11, centos12, centos13
zookeeper: centos11, centos12, centos13

2. On any one machine in the Kafka cluster (SSL-based setup first)

All passwords in this walkthrough are 123456.

&Step 1 Generate SSL key and certificate for each Kafka broker
keytool -keystore server.keystore.jks -alias centos11 -validity 365 -genkey

&Step 2 Creating your own CA
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

&Step 3 Signing the certificate
keytool -keystore server.keystore.jks -alias centos11 -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:123456
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias centos11 -import -file cert-signed

3. On the other machines in the Kafka cluster

&On machines centos13 and centos12
keytool -keystore kafka.client.keystore.jks -alias centos13 -validity 365 -genkey
keytool -keystore kafka.client.keystore.jks -alias centos13 -certreq -file cert-file
cp cert-file cert-file-centos13

&On centos11
scp ./ca* ce* server* root@centos13:/opt/kafka_2.10/

openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-centos13 -out cert-signed-centos13 -days 365 -CAcreateserial -passin pass:123456
keytool -keystore kafkacentos13.client.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafkacentos13.client.keystore.jks -alias centos13 -import -file cert-signed-centos13



rm -rf producer.properties
echo "bootstrap.servers=centos13:9093" >> producer.properties
echo "security.protocol=SSL" >> producer.properties
echo "ssl.truststore.location=/opt/kafka_2.10/kafkacentos12.client.keystore.jks">> producer.properties
echo "ssl.truststore.password=123456">> producer.properties
echo "ssl.keystore.location=/opt/kafka_2.10/server.keystore.jks">> producer.properties
echo "ssl.keystore.password=123456">> producer.properties
echo "ssl.key.password=123456">> producer.properties
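The chain of echo commands above can equally be written as one heredoc, which is easier to review; a sketch with the same values:

```shell
#!/bin/sh
# regenerate producer.properties in one shot instead of line-by-line echo
cat > producer.properties <<'EOF'
bootstrap.servers=centos13:9093
security.protocol=SSL
ssl.truststore.location=/opt/kafka_2.10/kafkacentos12.client.keystore.jks
ssl.truststore.password=123456
ssl.keystore.location=/opt/kafka_2.10/server.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
EOF
```

The quoted 'EOF' delimiter prevents any shell expansion inside the block, so the file is written exactly as shown.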

4. Verification:

openssl s_client -debug -connect localhost:9093 -tls1

output:
-----BEGIN CERTIFICATE-----
{variable sized random bytes}
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Sriharsha Chintalapani
issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com

5. Usage:

bin/kafka-console-producer.sh --broker-list centos11:9093 --topic test --producer.config ./config/producer.properties

bin/kafka-console-consumer.sh --bootstrap-server centos11:9093 --topic test --new-consumer --consumer.config ./config/producer.properties

bin/kafka-console-consumer.sh --bootstrap-server centos13:9093 --topic test --new-consumer --consumer.config ./config/producer.properties --from-beginning

6. ACL-based authorization

Add the following to server.properties:

allow.everyone.if.no.acl.found=true
super.users=User:root
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
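A minimal sketch for adding these four lines to an existing broker config, assuming server.properties sits in the current directory (adjust the path for your install); it keeps a backup before editing:

```shell
#!/bin/sh
set -e
CONF=server.properties            # adjust to your broker's config path
touch "$CONF"                     # no-op if the file already exists
cp "$CONF" "$CONF.bak"            # keep a backup before editing

# append the ACL settings from the article
cat >> "$CONF" <<'EOF'
allow.everyone.if.no.acl.found=true
super.users=User:root
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
EOF
```

Remember to roll this out to every broker and restart them; the authorizer only takes effect after a restart.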

7. Basic ACL usage:

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=centos11:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic test

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=centos11:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic test

8. Java demo. Copy server.keystore.jks and client.truststore.jks down from any one of the machines.

SSL is supported only by the new Kafka producer and consumer; the older APIs are not supported.

ConsumerDemo

package xmhd.examples;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SslConfigs;

import java.util.Arrays;
import java.util.Properties;

/**
 * Created by shengjk1.
 * blog address: http://blog.csdn.net/jsjsjs1789
 * SSL is supported only by the new Kafka Producer and Consumer; the older API is not supported.
 */
public class ConsumerZbdba {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Kafka broker addresses; not every broker has to be listed
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "centos11:9093,centos12:9093,centos13:9093");
        // consumer group
        props.put("group.id", "test");
        props.put("auto.offset.reset", "earliest");
        // offset auto-commit settings (left at the defaults here)
        // props.put("enable.auto.commit", "true");
        // props.put("auto.commit.interval.ms", "1000");
        // props.put("session.timeout.ms", "30000");

        // SSL client settings (note the escaped Windows paths)
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "F:\\server.keystore.jks");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "123456");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "F:\\client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "123456");
        props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "JKS");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");

        // key/value deserializers
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // subscribe to one or more topics ("test" was created on the cluster earlier)
        consumer.subscribe(Arrays.asList("test"));

        // poll with a 100 ms timeout
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
        }
    }
}

ProducerDemo

package xmhd.examples;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SslConfigs;

import java.util.Properties;

/**
 * Created by shengjk1.
 * blog address: http://blog.csdn.net/jsjsjs1789
 * Producer with SSL authentication.
 */
public class ProducerZbdba {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "centos12:9093");
        // producerProps.put(ProducerConfig.CLIENT_ID_CONFIG, "myApiKey");

        // SSL client settings (note the escaped Windows paths)
        producerProps.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "F:\\server.keystore.jks");
        producerProps.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "123456");
        producerProps.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "F:\\client.truststore.jks");
        producerProps.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "123456");
        producerProps.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "JKS");
        producerProps.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");

        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);
        for (int i = 0; i < 100; i++)
            producer.send(new ProducerRecord<String, String>("test", Integer.toString(i), Integer.toString(i)));
        System.out.println("test");
        producer.close();
    }
}

8.1 Flume 1.7 configuration (with Kafka SSL authentication)

tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1


tier1.sources.source1.type = exec
tier1.sources.source1.command = tail -F -n+1 /opt/scan.log
tier1.sources.source1.channels = channel1

tier1.channels.channel1.type = memory
tier1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
tier1.sinks.sink1.topic = test
tier1.sinks.sink1.kafka.bootstrap.servers = centos11:9093,centos12:9093,centos13:9093
tier1.channels.channel1.kafka.bootstrap.servers = centos11:9093,centos12:9093,centos13:9093
tier1.sinks.sink1.requiredAcks = 1
tier1.sinks.sink1.batchSize = 100


tier1.sinks.sink1.kafka.producer.security.protocol = SSL
tier1.sinks.sink1.kafka.producer.ssl.truststore.type = JKS
tier1.sinks.sink1.kafka.producer.ssl.truststore.location = /opt/kafka_2.10/server.truststore.jks
tier1.sinks.sink1.kafka.producer.ssl.truststore.password = 123456
tier1.sinks.sink1.kafka.producer.ssl.keystore.location = /opt/kafka_2.10/server.keystore.jks
tier1.sinks.sink1.kafka.producer.ssl.keystore.password = 123456
&tier1.sinks.sink1.kafka.producer.ssl.endpoint.identification.algorithm = HTTPS


tier1.sinks.sink1.channel = channel1
tier1.channels.channel1.capacity = 100


tier1.channels.channel1.kafka.producer.security.protocol = SSL
tier1.channels.channel1.kafka.producer.ssl.truststore.type = JKS
tier1.channels.channel1.kafka.producer.ssl.truststore.location = /opt/kafka_2.10/server.truststore.jks
tier1.channels.channel1.kafka.producer.ssl.truststore.password = 123456
tier1.channels.channel1.kafka.producer.ssl.keystore.location = /opt/kafka_2.10/server.keystore.jks
tier1.channels.channel1.kafka.producer.ssl.keystore.password = 123456
tier1.channels.channel1.kafka.producer.ssl.endpoint.identification.algorithm = HTTPS


tier1.channels.channel1.kafka.consumer.security.protocol = SSL
tier1.channels.channel1.kafka.consumer.ssl.truststore.location = /opt/kafka_2.10/server.truststore.jks
tier1.channels.channel1.kafka.consumer.ssl.truststore.password = 123456
tier1.channels.channel1.kafka.consumer.ssl.keystore.location = /opt/kafka_2.10/server.keystore.jks
tier1.channels.channel1.kafka.consumer.ssl.keystore.password = 123456
tier1.channels.channel1.kafka.consumer.ssl.endpoint.identification.algorithm = HTTPS
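Because the keystore/truststore paths are repeated many times in a config like this, typos are easy to make (the consumer ssl.keystore.location line above originally pointed at the truststore file). A small helper sketch (the config filename is just an example) that flags any *store.location entry pointing at a file that does not exist:

```shell
#!/bin/sh
# check_store_paths FILE: print every keystore/truststore path referenced
# in FILE that does not exist on disk
check_store_paths() {
  grep -Eo '(keystore|truststore)\.location *= *[^ ]+' "$1" \
    | sed 's/.*= *//' \
    | sort -u \
    | while read -r path; do
        [ -f "$path" ] || echo "missing: $path"
      done
}

# usage: check_store_paths flume.conf
```

Run it on each machine before starting the agent; an empty output means every referenced store file is present.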

9. Final kafka server.properties

All three machines must use the same configuration; centos11 is this machine's hostname, change it as needed.

broker.id=0
############################# Socket Server Settings #############################
&Note this carefully: once PLAINTEXT is commented out, some of the basic Kafka scripts no longer work
&listeners=PLAINTEXT://centos11:9092,SSL://centos11:9093
listeners=SSL://centos11:9093
advertised.listeners=SSL://centos11:9093
ssl.keystore.location=/opt/kafka_2.10/server.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456
ssl.truststore.location=/opt/kafka_2.10/server.truststore.jks
ssl.truststore.password=123456
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.inter.broker.protocol=SSL

&acl
allow.everyone.if.no.acl.found=true
super.users=User:root
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder

host.name=centos11
advertised.host.name=centos11

num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
############################# Log Basics #############################
log.dirs=/opt/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
zookeeper.connect=centos11:2181,centos12:2181,centos13:2181
zookeeper.connection.timeout.ms=6000

kafka producer.properties (centos11 is the hostname; change it as needed)

bootstrap.servers=centos11:9093
security.protocol=SSL
ssl.truststore.location=/opt/kafka_2.10/client.truststore.jks
ssl.truststore.password=123456
ssl.keystore.location=/opt/kafka_2.10/server.keystore.jks
ssl.keystore.password=123456
ssl.key.password=123456

10. References (see the official documentation for details):

http://kafka.apache.org/090/documentation.html#security_authz
http://www.cloudera.com/documentation/kafka/latest/topics/kafka_security.html
http://blog.csdn.net/zbdba/article/details/52458654

11. Protocol support per client API
(image of the protocol support matrix omitted)

12. Further reading:

On how SSL works: http://www.linuxde.net/2012/03/8301.html
