Kafka consumer exceptions and offset commits

Date: 2017-03-29 05:29:41

Tags: java spring apache-kafka kafka-consumer-api spring-kafka

I have been doing some POC work with Spring Kafka. Specifically, I want to explore the best practices for handling errors while consuming messages from Kafka.

I was wondering if anyone is able to help with:

  1. Sharing best practices for what a Kafka consumer should do when a failure occurs
  2. Helping me understand how AckMode RECORD works, and how to stop the offset from being committed to Kafka when an exception is thrown in the listener method.
  3. A code example for item 2 is given below:

    Given that AckMode is set to RECORD, which according to the documentation:

        Commit the offset when the listener returns after processing the record.

    I would have thought the offset would not be incremented if the listener method threw an exception. However, this was not the case when I tested it with the code/config/command combination below. The offset still gets updated, and the next message continues to be processed.

    My configuration:

    private Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }
    
    @Bean
    ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
        return factory;
    }
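    For completeness: the consumerConfigs() method referenced by the factory above is not listed here. A minimal sketch of what it could look like is shown below; the broker address mirrors the producer config above, the test-group group id comes from the verification command further down, and enable.auto.commit=false plus the deserializer choices are assumptions.

    private Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        // Let the listener container manage offset commits instead of the Kafka client.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }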
    

    My code:

    @Component
    public class KafkaMessageListener {
        @KafkaListener(topicPartitions = {@TopicPartition(topic = "my-replicated-topic", partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "0", relativeToCurrent = "true"))})
        public void onReplicatedTopicMessage(ConsumerRecord<Integer, String> data) throws InterruptedException {
            throw new RuntimeException("Oops!");
        }
    }

    The command used to verify the offsets:

    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group test-group
    

    I am using kafka_2.12-0.10.2.0 and org.springframework.kafka:spring-kafka:1.1.3.RELEASE.

1 Answer:

Answer 0 (score: 7):

The container (via ContainerProperties) has a property, ackOnError, which is true by default...

/**
 * Set whether or not the container should commit offsets (ack messages) where the
 * listener throws exceptions. This works in conjunction with {@link #ackMode} and is
 * effective only when the kafka property {@code enable.auto.commit} is {@code false};
 * it is not applicable to manual ack modes. When this property is set to {@code true}
 * (the default), all messages handled will have their offset committed. When set to
 * {@code false}, offsets will be committed only for successfully handled messages.
 * Manual acks will be always be applied. Bear in mind that, if the next message is
 * successfully handled, its offset will be committed, effectively committing the
 * offset of the failed message anyway, so this option has limited applicability.
 * Perhaps useful for a component that starts throwing exceptions consistently;
 * allowing it to resume when restarted from the last successfully processed message.
 * @param ackOnError whether the container should acknowledge messages that throw
 * exceptions.
 */
public void setAckOnError(boolean ackOnError) {
    this.ackOnError = ackOnError;
}

Bear in mind that if the next message is handled successfully, its offset will still be committed, which effectively commits the offset of the failed message anyway.
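If you only want offsets committed for successfully handled records, a minimal sketch of the question's factory bean with ackOnError turned off could look like the following; it assumes enable.auto.commit is false in consumerConfigs(), as the javadoc above requires.

@Bean
ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
    // Commit the offset after each successfully processed record.
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
    // Do not commit the offset of a record whose listener threw an exception.
    factory.getContainerProperties().setAckOnError(false);
    return factory;
}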
