Kafka Java consumer hangs on Jenkins but not locally

Date: 2018-05-30 18:30:14

Tags: java apache-kafka kafka-consumer-api

I created the consumer using the following dependency:
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>1.1.0</version>
    </dependency>

Here is the consumer code:

    private static String TopicName = "Automation_kafka_test";

    LOGGER.info("Initializing the consumer");
    KafkaConsumer<String, String> myKafkaCascadeConsumer =
            new KafkaConsumer<String, String>(KafkaCascadeConsumer.kafkaCascadeConfiguration());
    // Log the effective consumer configuration
    for (Map.Entry<String, Object> entry : KafkaCascadeConsumer.kafkaCascadeConfiguration().entrySet()) {
        LOGGER.info("Key = " + entry.getKey() + ", Value =" + entry.getValue());
    }

    KafkaConsumerHelper.readKafkaMessages(myKafkaCascadeConsumer, TopicName);
    myKafkaCascadeConsumer.close();


    // Read Kafka messages from the given topic
    public static void readKafkaMessages(KafkaConsumer<String, String> myKafkaConsumer, String topicName) {
        LOGGER.info("Subscribing to Topic =" + topicName);
        myKafkaConsumer.subscribe(Arrays.asList(topicName));
        while (true) {
            ConsumerRecords<String, String> records = myKafkaConsumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }

Here is the output:

2018-05-30 14:23:21,247  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:81) - Initializing the consumer
2018-05-30 14:23:21,869  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:87) - Key = key.deserializer, Value =org.apache.kafka.common.serialization.StringDeserializer
2018-05-30 14:23:21,869  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:87) - Key = value.deserializer, Value =org.apache.kafka.common.serialization.StringDeserializer
2018-05-30 14:23:21,869  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:87) - Key = enable.auto.commit, Value =false
2018-05-30 14:23:21,869  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:87) - Key = group.id, Value =AutomationRamtest1
2018-05-30 14:23:21,870  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:87) - Key = consumer.timeout.ms, Value =50000
2018-05-30 14:23:21,871  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:87) - Key = bootstrap.servers, Value =ABsrd00xxx:9092,ABsrd00yyy:9092 ***** masked for privacy***
2018-05-30 14:23:21,871  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:87) - Key = auto.commit.interval.ms, Value =1000
2018-05-30 14:23:21,871  INFO [TestNG-test=Test-1] (US000000_KafkaTest.java:87) - Key = auto.offset.reset, Value =earliest
2018-05-30 14:23:21,887  INFO [TestNG-test=Test-1] (KafkaConsumerHelper.java:53) - Subscribing to Topic =Automation_kafka_test
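
The kafkaCascadeConfiguration() method itself is not shown in the post. Judging only from the keys logged above, it presumably returns a map roughly like the following sketch (the class layout is a guess, and the broker list is copied from the masked log line as a placeholder):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    public class KafkaCascadeConsumer {

        // Reconstructed from the logged keys above; values are placeholders.
        public static Map<String, Object> kafkaCascadeConfiguration() {
            Map<String, Object> config = new HashMap<>();
            config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "ABsrd00xxx:9092,ABsrd00yyy:9092");
            config.put(ConsumerConfig.GROUP_ID_CONFIG, "AutomationRamtest1");
            config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
            config.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
            config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            // "consumer.timeout.ms" belongs to the old Scala consumer; the new Java
            // consumer does not recognise it and only reports it as an unused property.
            config.put("consumer.timeout.ms", "50000");
            return config;
        }
    }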

Jenkins gets stuck at the statement above and never moves past it. The code never receives any messages on my local machine either, yet a developer using the CLI does get messages from the same topic I am reading.

Also, I use the same bootstrap.servers setting for the producer, and it works.

Could you tell me if I am doing something wrong?
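
One thing worth noting about the read loop itself, independent of any broker-side issue: while (true) has no exit condition, so the test will appear to hang whenever no records arrive. A bounded variant such as the sketch below (maxMessages and maxWaitMs are made-up parameters, not from the original code) at least lets a run finish and report that nothing was consumed:

    import java.util.Arrays;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class KafkaConsumerHelper {

        // Poll until either maxMessages records are seen or maxWaitMs elapses,
        // so a test run (e.g. on Jenkins) cannot block forever on an empty topic.
        public static int readKafkaMessages(KafkaConsumer<String, String> consumer,
                                            String topicName,
                                            int maxMessages,
                                            long maxWaitMs) {
            consumer.subscribe(Arrays.asList(topicName));
            long deadline = System.currentTimeMillis() + maxWaitMs;
            int seen = 0;
            while (seen < maxMessages && System.currentTimeMillis() < deadline) {
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, key = %s, value = %s%n",
                            record.offset(), record.key(), record.value());
                    seen++;
                }
            }
            return seen;
        }
    }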

1 Answer:

Answer 0 (score: 0)

The problem turned out to be group assignment. If the topic has only a single partition and a previous consumer in the group was not terminated properly, Kafka waits until the maximum timeout value expires before evicting it. If you try to add another consumer to the same group in the meantime, it will not be able to read.
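
A minimal sketch of the kind of workaround this implies, assuming the configuration map from the question is reused; the FreshGroupConsumerFactory name and the concrete values are illustrative, not the author's actual fix:

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class FreshGroupConsumerFactory {

        // Build a consumer that does not have to wait for a stale member of an
        // earlier run to be evicted from the group.
        public static KafkaConsumer<String, String> create(Map<String, Object> baseConfig) {
            // Option 1: a unique group.id per run, so the single partition is never
            // still held by a dead member of a previous run.
            baseConfig.put(ConsumerConfig.GROUP_ID_CONFIG,
                    "AutomationRamtest1-" + System.currentTimeMillis());

            // Option 2 (alternative): keep the group.id but let the broker drop a
            // silent member sooner; the value must lie within the broker's allowed
            // group.min/max.session.timeout.ms range.
            // baseConfig.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");

            return new KafkaConsumer<>(baseConfig);
        }
    }

With a fresh group.id the consumer simply starts from the earliest offset (given auto.offset.reset=earliest) instead of waiting for the old member's partition to be released.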

The article below explains the relationship between groups, partitions, and topics.

https://www.safaribooksonline.com/library/view/kafka-the-definitive/9781491936153/ch04.html
