Consumer reading __consumer_offsets gets unreadable messages

Date: 2018-09-11 14:23:44

Tags: java apache-kafka

I am trying to consume from the __consumer_offsets topic, since it seemed to be the easiest way to retrieve consumer-side Kafka metrics such as message lag. The ideal way would be to get these through JMX, but I wanted to try this approach first. However, the messages come back in what looks like an encrypted or unreadable format. I also tried adding a StringDeserializer property. Does anyone have suggestions on how to fix this?

The suggested duplicate,

duplicate consumer_offset

is not helpful, because it does not address my problem: reading the messages as Strings in Java. I have also updated the code below to try consuming ConsumerRecords with the Kafka client consumer.

consumerProps.put("exclude.internal.topics",  false);
consumerProps.put("group.id" , groupId);
consumerProps.put("zookeeper.connect", zooKeeper);


consumerProps.put("key.deserializer",
  "org.apache.kafka.common.serialization.StringDeserializer");  
consumerProps.put("value.deserializer",
  "org.apache.kafka.common.serialization.StringDeserializer");

ConsumerConfig consumerConfig = new ConsumerConfig(consumerProps);
ConsumerConnector consumer = 
kafka.consumer.Consumer.createJavaConsumerConnector(
       consumerConfig);

Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
topicCountMap.put(topic, new Integer(1));
Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = 
   consumer.createMessageStreams(topicCountMap);
List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

for ( KafkaStream stream : streams) {

     ConsumerIterator<byte[], byte[]> it = stream.iterator();

     //errorReporting("...CONSUMER-KAFKA CONNECTION SUCCESSFUL!");


   while (it.hasNext())
     {

         try {
                 String mesg = new String(it.next().message());
                 System.out.println(mesg);
         } catch (Exception e) {
                 e.printStackTrace();
         }
     }
}

Code changes:

 try
    {


   // errorReporting("CONSUMER-KAFKA CONNECTION INITIATING...");    
    Properties consumerProps = new Properties();
    consumerProps.put("exclude.internal.topics",  false);
    consumerProps.put("group.id" , "test");
    consumerProps.put("bootstrap.servers", servers);
    consumerProps.put("key.deserializer","org.apache.kafka.common.serialization.StringDeserializer");  
    consumerProps.put("value.deserializer","org.apache.kafka.common.serialization.StringDeserializer");

    //ConsumerConfig consumerConfig = new ConsumerConfig(consumerProps);
    //ConsumerConnector consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
    //       consumerConfig);

    //Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
    //topicCountMap.put(topic, new Integer(1));
    //Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
    //List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);



    KafkaConsumer<String, String> kconsumer = new KafkaConsumer<>(consumerProps); 
    kconsumer.subscribe(Arrays.asList(topic)); 


    try {
          while (true) {
            ConsumerRecords<String, String> records = kconsumer.poll(10);

            for (ConsumerRecord<String, String> record : records)

              System.out.println(record.offset() + ": " + record.value());
          }
        } finally {
          kconsumer.close();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }

A snapshot of the messages is shown below; at the bottom of the picture:

[screenshot: consumer offset messages]

1 Answer:

Answer 0 (score: 2)

While it is possible to read directly from the __consumer_offsets topic, it is neither the recommended nor the easiest approach.

If you can use Kafka 2.0, the best way is to use the AdminClient API to describe groups:
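A minimal sketch of that AdminClient call, assuming kafka-clients 2.0+ on the classpath and a running broker; the bootstrap address is a placeholder, and the group id "test" is taken from the question's code:

```java
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GroupOffsets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; point this at your own cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Fetch the committed offsets for the group without
            // ever touching __consumer_offsets directly
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets("test")
                     .partitionsToOffsetAndMetadata()
                     .get();
            offsets.forEach((tp, om) ->
                System.out.println(tp + " -> " + om.offset()));
        }
    }
}
```

Lag can then be computed by comparing these committed offsets against the partitions' end offsets (for example via a consumer's endOffsets call).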


In case you absolutely want to read directly from __consumer_offsets, you need to decode the records to make them human-readable. This can be done using the GroupMetadataManager class:
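The decoding could be sketched as below. This is only a sketch, not a definitive implementation: GroupMetadataManager, BaseKey and OffsetKey live in the internal kafka.coordinator.group package of the core kafka jar (not kafka-clients), and their signatures have changed between Kafka versions; the broker address and group id are placeholders:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import kafka.coordinator.group.BaseKey;
import kafka.coordinator.group.GroupMetadataManager;
import kafka.coordinator.group.OffsetKey;

public class OffsetsTopicReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "offsets-reader");          // hypothetical group id
        props.put("exclude.internal.topics", "false");
        // Keep records as raw bytes so the internal decoder can parse them
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("__consumer_offsets"));
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    BaseKey key = GroupMetadataManager
                        .readMessageKey(ByteBuffer.wrap(record.key()));
                    // Only decode offset-commit records; skip group-metadata
                    // records and tombstones (null values)
                    if (key instanceof OffsetKey && record.value() != null) {
                        Object value = GroupMetadataManager
                            .readOffsetMessageValue(ByteBuffer.wrap(record.value()));
                        System.out.println(key + " -> " + value);
                    }
                }
            }
        }
    }
}
```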

The answer in the question you linked contains the basic code for doing all of this.

Also note that you should not deserialize the records as Strings, but keep them as raw bytes, so these decoding methods can handle them correctly.
