Using Kafka from C++ on Windows


Kafka has several C/C++ clients to choose from.

The most widely used is librdkafka (2000+ stars on GitHub), and it is the one I use as well. There was no official 0.11.6 release at the time, so I picked v0.11.5, and promptly fell into a pit.

v0.11.5 has a bug in rd_clock(): on Windows, once the machine has been up for more than about 7 days, the returned value overflows and neither produce nor consume works. My machine stays on for months at a time, so my test program failed to produce or consume right from the start. At first I suspected a bad configuration, or an incompatibility with the Kafka server version; meanwhile a test with the Go client sarama passed within a few minutes, which nearly drove me to give up. In the end I single-stepped through the code and found that the system-time value was overflowing, so the conditions for issuing produce requests and fetch requests were never satisfied. v0.11.6-rc2 fixes the problem, and rolling back to v0.11.4 also avoids it.

A number of GitHub issues trace back to this same bug.

Since an official v0.11.6 had not been released yet, I went with the latest release candidate, v0.11.6-rc4.

  • Download and build

    • v0.11.6-rc4
    • Open librdkafka.sln under win32\ in the repository root and build it with Visual Studio. By default both librdkafka and librdkafkacpp are built as DLLs; you can change the project options to build static libraries instead.
    • To build static libraries, remove "_USRDLL;LIBRDKAFKA_EXPORTS;" from the preprocessor definitions of the librdkafka project and add "LIBRDKAFKA_STATICLIB"; likewise, remove "_USRDLL;LIBRDKAFKACPP_EXPORTS;" from the librdkafkacpp project and add "LIBRDKAFKA_STATICLIB". Also define LIBRDKAFKA_STATICLIB in every project that uses librdkafka.
    • librdkafka depends on zlib and OpenSSL; update the header and lib directories for both in the Visual Studio projects.
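The static-library macro must be visible before the header is included, either via the project-level preprocessor definitions described above or directly in source; without it the headers declare the exported API as __declspec(dllimport) and static linking fails. A minimal sketch, assuming the include directory is already configured:

```cpp
// Only needed when linking librdkafka statically; the project-level
// preprocessor definition described above achieves the same thing.
#define LIBRDKAFKA_STATICLIB
#include <rdkafkacpp.h>
```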
  • producer test

#include <iostream>
#include <string>
#include <list>
#include <stdint.h>
#include <rdkafkacpp.h>

static bool run = true;
static bool exit_eof = false;

void dump_config(RdKafka::Conf* conf) {
    std::list<std::string> *dump = conf->dump();

    printf("config dump(%d):\n", (int32_t)dump->size());
    for (auto it = dump->begin(); it != dump->end(); ) {
        std::string name = *it++;
        std::string value = *it++;
        printf("%s = %s\n", name.c_str(), value.c_str());
    }
    delete dump;  // the caller owns the list returned by dump()

    printf("---------------------------------------------\n");
}

class my_event_cb : public RdKafka::EventCb {
public:
    void event_cb(RdKafka::Event &event) override {
        switch (event.type())
        {
        case RdKafka::Event::EVENT_ERROR:
            std::cerr << "ERROR (" << RdKafka::err2str(event.err()) << "): " <<
                event.str() << std::endl;
            if (event.err() == RdKafka::ERR__ALL_BROKERS_DOWN)
                run = false;
            break;

        case RdKafka::Event::EVENT_STATS:
            std::cerr << "\"STATS\": " << event.str() << std::endl;
            break;

        case RdKafka::Event::EVENT_LOG:
            fprintf(stderr, "LOG-%i-%s: %s\n",
                event.severity(), event.fac().c_str(), event.str().c_str());
            break;

        default:
            std::cerr << "EVENT " << event.type() <<
                " (" << RdKafka::err2str(event.err()) << "): " <<
                event.str() << std::endl;
            break;
        }
    }
};

class my_hash_partitioner_cb : public RdKafka::PartitionerCb {
public:
    int32_t partitioner_cb(const RdKafka::Topic *topic, const std::string *key,
        int32_t partition_cnt, void *msg_opaque) override {
        return djb_hash(key->c_str(), key->size()) % partition_cnt;
    }
private:
    static inline unsigned int djb_hash(const char *str, size_t len) {
        unsigned int hash = 5381;
        for (size_t i = 0; i < len; i++)
            hash = ((hash << 5) + hash) + str[i];
        return hash;
    }
};

namespace producer_ts {

class my_delivery_report_cb : public RdKafka::DeliveryReportCb {
public:
    void dr_cb(RdKafka::Message& message) override {
        printf("message delivery %d bytes, error:%s, key: %s\n",
            (int32_t)message.len(), message.errstr().c_str(), message.key() ? message.key()->c_str() : "");
    }
};

void producer_test() {
    printf("producer test\n");

    int32_t partition = RdKafka::Topic::PARTITION_UA;

    printf("input broker list (e.g. 127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094):\n");
    std::string broker_list;

    //std::cin >> broker_list;
    broker_list = "127.0.0.1:9092";

    printf("input partition:");

    //std::cin >> partition;
    partition = 0;

    // config 
    RdKafka::Conf* global_conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
    RdKafka::Conf* topic_conf = RdKafka::Conf::create(RdKafka::Conf::CONF_TOPIC);

    my_hash_partitioner_cb          hash_partitioner;
    my_event_cb                     event_cb;
    my_delivery_report_cb           delivery_cb;
  

    std::string err_string;
    if (topic_conf->set("partitioner_cb", &hash_partitioner, err_string) != RdKafka::Conf::CONF_OK) {
        printf("set partitioner_cb error: %s\n", err_string.c_str());
        return;
    }

    global_conf->set("metadata.broker.list", broker_list, err_string);
    global_conf->set("event_cb", &event_cb, err_string);
    global_conf->set("dr_cb", &delivery_cb, err_string);
    //global_conf->set("retry.backoff.ms", "10", err_string);
    //global_conf->set("debug", "all", err_string);
    //global_conf->set("debug", "topic,msg", err_string);
    //global_conf->set("debug", "msg,queue", err_string);

    dump_config(global_conf);
    dump_config(topic_conf);


    // create producer
    RdKafka::Producer* producer = RdKafka::Producer::create(global_conf, err_string);
    if (!producer) {
        printf("failed to create producer, %s\n", err_string.c_str());
        return;
    }

    printf("created producer %s\n", producer->name().c_str());

    std::string topic_name;
    while (true) {

        printf("input topic to create:\n");
        std::cin >> topic_name;

        // create topic
        RdKafka::Topic* topic =
            RdKafka::Topic::create(producer, topic_name, topic_conf, err_string);

        if (!topic) {
            printf("try create topic[%s] failed, %s\n",
                topic_name.c_str(), err_string.c_str());
            return;
        }

        printf(">");
        for (std::string line; run && std::getline(std::cin, line); ) {
            if (line.empty()) {
                producer->poll(0);
                continue;
            }

            if (line == "quit") {
                break;
            }

            std::string key = "kafka_test";

            RdKafka::ErrorCode res = producer->produce(topic, partition,
                RdKafka::Producer::RK_MSG_COPY,
                (char*)line.c_str(), line.size(), key.c_str(), key.size(), NULL);

            if (res != RdKafka::ERR_NO_ERROR) {
                printf("produce failed, %s\n", RdKafka::err2str(res).c_str());
            }
            else {
                printf("produced msg, bytes %d\n", (int32_t)line.size());
            }

            // do socket io
            producer->poll(0);

            printf("outq_len: %d\n", producer->outq_len());

            //producer->flush(1000);

            //while (run && producer->outq_len()) {
            //    printf("wait for write queue( size %d) write finish\n", producer->outq_len());
            //    producer->poll(1000);
            //}

            printf(">");
        }

        delete topic;

        if (!run) {
            break;
        }
    }

    run = true;

    while (run && producer->outq_len()) {
        printf("waiting for delivery of %d queued messages\n", producer->outq_len());
        producer->poll(1000);
    }

    delete producer;
}
}
  • consumer test
#include <iostream>
#include <string>
#include <list>
#include <stdint.h>
#include <rdkafkacpp.h>

static bool run = true;
static bool exit_eof = false;

void dump_config(RdKafka::Conf* conf) {
    std::list<std::string> *dump = conf->dump();

    printf("config dump(%d):\n", (int32_t)dump->size());
    for (auto it = dump->begin(); it != dump->end(); ) {
        std::string name = *it++;
        std::string value = *it++;
        printf("%s = %s\n", name.c_str(), value.c_str());
    }
    delete dump;  // the caller owns the list returned by dump()

    printf("---------------------------------------------\n");
}

class my_event_cb : public RdKafka::EventCb {
public:
    void event_cb(RdKafka::Event &event) override {
        switch (event.type())
        {
        case RdKafka::Event::EVENT_ERROR:
            std::cerr << "ERROR (" << RdKafka::err2str(event.err()) << "): " <<
                event.str() << std::endl;
            if (event.err() == RdKafka::ERR__ALL_BROKERS_DOWN)
                run = false;
            break;

        case RdKafka::Event::EVENT_STATS:
            std::cerr << "\"STATS\": " << event.str() << std::endl;
            break;

        case RdKafka::Event::EVENT_LOG:
            fprintf(stderr, "LOG-%i-%s: %s\n",
                event.severity(), event.fac().c_str(), event.str().c_str());
            break;

        default:
            std::cerr << "EVENT " << event.type() <<
                " (" << RdKafka::err2str(event.err()) << "): " <<
                event.str() << std::endl;
            break;
        }
    }
};

class my_hash_partitioner_cb : public RdKafka::PartitionerCb {
public:
    int32_t partitioner_cb(const RdKafka::Topic *topic, const std::string *key,
        int32_t partition_cnt, void *msg_opaque) override {
        return djb_hash(key->c_str(), key->size()) % partition_cnt;
    }
private:
    static inline unsigned int djb_hash(const char *str, size_t len) {
        unsigned int hash = 5381;
        for (size_t i = 0; i < len; i++)
            hash = ((hash << 5) + hash) + str[i];
        return hash;
    }
};

namespace consumer_ts
{
void msg_consume(RdKafka::Message* message, void* opaque)
{
    switch (message->err())
    {
    case RdKafka::ERR__TIMED_OUT:
        break;

    case RdKafka::ERR_NO_ERROR:
        /* Real message */
        std::cout << "Read msg at offset " << message->offset() << std::endl;
        if (message->key())
        {
            std::cout << "Key: " << *message->key() << std::endl;
        }
        printf("%.*s\n", static_cast<int>(message->len()), static_cast<const char *>(message->payload()));
        break;
    case RdKafka::ERR__PARTITION_EOF:
        /* Last message */
        if (exit_eof)
        {
            run = false;
        }
        break;
    case RdKafka::ERR__UNKNOWN_TOPIC:
    case RdKafka::ERR__UNKNOWN_PARTITION:
        std::cerr << "Consume failed: " << message->errstr() << std::endl;
        run = false;
        break;
    default:
        /* Errors */
        std::cerr << "Consume failed: " << message->errstr() << std::endl;
        run = false;
    }
}

class my_consumer_cb : public RdKafka::ConsumeCb {
public:
    void consume_cb(RdKafka::Message &msg, void *opaque) override
    {
        msg_consume(&msg, opaque);
    }
};

void consumer_test() {
    printf("consumer test\n");

    int32_t partition = RdKafka::Topic::PARTITION_UA;

    printf("input broker list (e.g. 127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094):\n");
    std::string broker_list;

    //std::cin >> broker_list;
    broker_list = "127.0.0.1:9092";

    printf("input partition:");

    //std::cin >> partition;
    partition = 0;

    // config 
    RdKafka::Conf* global_conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
    RdKafka::Conf* topic_conf = RdKafka::Conf::create(RdKafka::Conf::CONF_TOPIC);

    my_hash_partitioner_cb          hash_partitioner;
    my_event_cb                     event_cb;
    my_consumer_cb                  consume_cb;

    int64_t start_offset = RdKafka::Topic::OFFSET_BEGINNING;

    std::string err_string;
    if (topic_conf->set("partitioner_cb", &hash_partitioner, err_string) != RdKafka::Conf::CONF_OK){
        printf("set partitioner_cb error: %s\n", err_string.c_str());
        return;
    }

    global_conf->set("metadata.broker.list", broker_list, err_string);
    global_conf->set("event_cb", &event_cb, err_string);
    //global_conf->set("debug", "all", err_string);
    //global_conf->set("debug", "topic,msg", err_string);
    //global_conf->set("debug", "topic,msg,queue", err_string);

    dump_config(global_conf);
    dump_config(topic_conf);

    // create consumer
    RdKafka::Consumer* consumer = RdKafka::Consumer::create(global_conf, err_string);
    if (!consumer) {
        printf("failed to create consumer, %s\n", err_string.c_str());
        return;
    }

    printf("created consumer %s\n", consumer->name().c_str());

    // create topic
    printf("input topic name:\n");

    std::string topic_name;
    std::cin >> topic_name;

    RdKafka::Topic* topic = RdKafka::Topic::create(consumer, topic_name, topic_conf, err_string);
    if (!topic) {
        printf("try create topic[%s] failed, %s\n", topic_name.c_str(), err_string.c_str());
        return;
    }

    // Start consumer for topic+partition at start offset
    RdKafka::ErrorCode resp = consumer->start(topic, partition, start_offset);
    if (resp != RdKafka::ERR_NO_ERROR) {
        printf("Failed to start consumer: %s\n", 
            RdKafka::err2str(resp).c_str());
        return;
    }

    int use_ccb = 0;
    while (run) {
        //consumer->consume_callback(topic, partition, 1000, &consume_cb, &use_ccb);
        //consumer->poll(0);

        RdKafka::Message *msg = consumer->consume(topic, partition, 2000);
        msg_consume(msg, NULL);
        delete msg;
    }

    // stop consumer
    consumer->stop(topic, partition);
    consumer->poll(1000);

    delete topic;
    delete consumer;
}
};
  • metadata test


class my_event_cb : public RdKafka::EventCb {
public:
    void event_cb(RdKafka::Event &event) override {
        switch (event.type())
        {
        case RdKafka::Event::EVENT_ERROR:
            std::cerr << "ERROR (" << RdKafka::err2str(event.err()) << "): " <<
                event.str() << std::endl;
            if (event.err() == RdKafka::ERR__ALL_BROKERS_DOWN)
                run = false;
            break;

        case RdKafka::Event::EVENT_STATS:
            std::cerr << "\"STATS\": " << event.str() << std::endl;
            break;

        case RdKafka::Event::EVENT_LOG:
            fprintf(stderr, "LOG-%i-%s: %s\n",
                event.severity(), event.fac().c_str(), event.str().c_str());
            break;

        default:
            std::cerr << "EVENT " << event.type() <<
                " (" << RdKafka::err2str(event.err()) << "): " <<
                event.str() << std::endl;
            break;
        }
    }
};

class my_hash_partitioner_cb : public RdKafka::PartitionerCb {
public:
    int32_t partitioner_cb(const RdKafka::Topic *topic, const std::string *key,
        int32_t partition_cnt, void *msg_opaque) override {
        return djb_hash(key->c_str(), key->size()) % partition_cnt;
    }
private:
    static inline unsigned int djb_hash(const char *str, size_t len) {
        unsigned int hash = 5381;
        for (size_t i = 0; i < len; i++)
            hash = ((hash << 5) + hash) + str[i];
        return hash;
    }
};

namespace metadata_ts{

static void metadata_print (const std::string &topic,
                            const RdKafka::Metadata *metadata) {

  if (!metadata) {
      printf("metadata_print for topic %s failed.\n",
          topic.empty() ? "all topics" : topic.c_str());
      return;
  }
  printf("Metadata for %s (from broker %d:%s)\n",
      topic.empty() ? "all topics" : topic.c_str(),
      metadata->orig_broker_id(), metadata->orig_broker_name().c_str());

  /* Iterate brokers */
  printf("brokers(%d):\n", (int32_t)metadata->brokers()->size());
  RdKafka::Metadata::BrokerMetadataIterator ib;
  for (ib = metadata->brokers()->begin();
       ib != metadata->brokers()->end();
       ++ib) {
    printf("broker[%d] at %s:%d\n", (*ib)->id(), (*ib)->host().c_str(), (*ib)->port());
  }
  /* Iterate topics */
  printf("topics(%d):\n", (int32_t)metadata->topics()->size());
  RdKafka::Metadata::TopicMetadataIterator it;
  for (it = metadata->topics()->begin();
       it != metadata->topics()->end();
       ++it) {

    printf("    topic\"%s\" with %d partitions:", 
        (*it)->topic().c_str(), (int32_t)(*it)->partitions()->size());

    if ((*it)->err() != RdKafka::ERR_NO_ERROR) {
      printf("  %s", err2str((*it)->err()).c_str());
      if ((*it)->err() == RdKafka::ERR_LEADER_NOT_AVAILABLE)
        printf(" (try again)");
    }
    printf("\n");

    /* Iterate topic's partitions */
    RdKafka::TopicMetadata::PartitionMetadataIterator ip;
    for (ip = (*it)->partitions()->begin();
         ip != (*it)->partitions()->end();
         ++ip) {
      printf("      partition %d, leader %d, replicas:", (*ip)->id(), (*ip)->leader());

      /* Iterate partition's replicas */
      RdKafka::PartitionMetadata::ReplicasIterator ir;
      for (ir = (*ip)->replicas()->begin();
           ir != (*ip)->replicas()->end();
           ++ir) {

        printf("%s%d", (ir == (*ip)->replicas()->begin() ? "" : ","), *ir);
      }

      /* Iterate partition's ISRs */
      printf(", isrs: ");
      RdKafka::PartitionMetadata::ISRSIterator iis;
      for (iis = (*ip)->isrs()->begin(); iis != (*ip)->isrs()->end() ; ++iis)
      printf("%s%d", (iis == (*ip)->isrs()->begin() ? "" : ","), *iis);

      if ((*ip)->err() != RdKafka::ERR_NO_ERROR)
          printf(", %s\n", RdKafka::err2str((*ip)->err()).c_str());
      else
          printf("\n");
    }
  }
}

void metadata_test() {
    printf("metadata_test\n");

    printf("input broker list (e.g. 127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094):\n");
    std::string broker_list;

    //std::cin >> broker_list;
    broker_list = "127.0.0.1:9092";

    // config 
    RdKafka::Conf* global_conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
    std::string err_string;
    my_hash_partitioner_cb          hash_partitioner;
    my_event_cb                     event_cb;
    global_conf->set("metadata.broker.list", broker_list, err_string);
    global_conf->set("event_cb", &event_cb, err_string);

    // create producer
    RdKafka::Producer* producer = RdKafka::Producer::create(global_conf, err_string);
    if (!producer) {
        printf("failed to create producer, %s\n", err_string.c_str());
        return;
    }

    printf("created producer %s\n", producer->name().c_str());

    while (run) {

        std::string cmd;
        std::cin >> cmd;

        if (cmd == "ls") {
            class RdKafka::Metadata *metadata;
            /* Fetch metadata */
            RdKafka::ErrorCode err = producer->metadata(true, NULL,
                &metadata, 5000);
            if (err != RdKafka::ERR_NO_ERROR) {
                std::cerr << "%% Failed to acquire metadata: "
                    << RdKafka::err2str(err) << std::endl;
                run = false;
                break;
            }

            std::string topic_name;
            metadata_print(topic_name, metadata);

            delete metadata;
        }
        //run = 0;
    }

}
}