KafkaListener reads messages twice

Date: 2019-03-31 02:37:33

Tags: java apache-kafka spring-kafka

So, with the following configuration, when we scale the Spring Boot containers up to 10 JVMs, the number of consumed events randomly exceeds the number published. For example, if 320,000 messages are published, the event count sometimes ends up at 320,500, and so on.

// Consumer container bean
private static final int CONCURRENCY = 1;


@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic1");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

    //props.put("isolation.level", "read_committed");
    return props;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    //factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
    factory.getContainerProperties().setPollTimeout(3000);
    factory.setConcurrency(CONCURRENCY);
    return factory;
}

// Listener
@KafkaListener(id = "claimserror", topics = "${kafka.topic.dataintakeclaimsdqerrors}", groupId = "topic1", containerFactory = "kafkaListenerContainerFactory")
public void receiveClaimErrors(String event, Acknowledgment ack) throws JsonProcessingException {
    // save event to table ...
}

UPDATED: The change below now seems to work fine. I will also add a duplicate check in the consumer (sketched after the config) to cover the case where a consumer fails.

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "topic1");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "-1");
    //props.put("isolation.level", "read_committed");
    return props;
}
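
For the duplicate check mentioned above, a minimal sketch of an idempotent consumer could look like the following. Here `extractEventId`, `processedEvents`, and `ProcessedEvent` are hypothetical names (a business key extracted from the payload and a Spring Data repository keyed by it), not part of the original code:

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.support.Acknowledgment;
    import com.fasterxml.jackson.core.JsonProcessingException;

    @KafkaListener(id = "claimserror", topics = "${kafka.topic.dataintakeclaimsdqerrors}",
            groupId = "topic1", containerFactory = "kafkaListenerContainerFactory")
    public void receiveClaimErrors(String event, Acknowledgment ack) throws JsonProcessingException {
        String eventId = extractEventId(event);                    // hypothetical helper: stable business key from the payload
        if (processedEvents.existsById(eventId)) {                 // hypothetical repository of already-saved events
            ack.acknowledge();                                     // duplicate delivery: just commit the offset and skip
            return;
        }
        processedEvents.save(new ProcessedEvent(eventId, event));  // save event to table
        ack.acknowledge();                                         // commit only after the save succeeded
    }

This way a redelivered record (for example after a rebalance or a consumer crash between processing and committing) is detected by its key and not stored twice.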

2 Answers:

Answer 0 (score: 1)

You can try setting ENABLE_IDEMPOTENCE_CONFIG to true; this helps ensure that the producer writes exactly one copy of each message to the stream.
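
For reference, a minimal producer configuration sketch with idempotence enabled (the bean and the `bootstrapServers` field are illustrative, not from the answer):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.context.annotation.Bean;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Idempotent producer: the broker de-duplicates retried sends, so each message is written at most once per produce call.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        // Idempotence requires acks=all; being explicit avoids surprises on older client versions.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        return props;
    }

Note that this only removes duplicates caused by producer retries; duplicates caused by the consumer re-reading uncommitted offsets still need handling on the consumer side.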

Answer 1 (score: 0)

This approach worked for me.

You have to configure the KafkaListenerContainerFactory like this:

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, Object>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(kafkaFactory);
    factory.setConcurrency(10);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
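
With AckMode.MANUAL_IMMEDIATE, a listener built from this factory has to acknowledge each record itself, otherwise the offset is never committed and records are redelivered. A minimal sketch, assuming a placeholder topic name not taken from the answer:

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.support.Acknowledgment;

    @KafkaListener(topics = "some-topic", containerFactory = "kafkaListenerContainerFactory")
    public void listen(String record, Acknowledgment ack) {
        // process the record ...
        ack.acknowledge();   // commit the offset for this record immediately
    }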

And use a ConcurrentMessageListenerContainer like this:

@Bean
public IntegrationFlow inboundFlow() {
    final ContainerProperties containerProps = new ContainerProperties(PartitionConfig.TOPIC);
    containerProps.setGroupId(GROUP_ID);

    ConcurrentMessageListenerContainer concurrentListener = new ConcurrentMessageListenerContainer(kafkaFactory, containerProps);
    concurrentListener.setConcurrency(10);
    final KafkaMessageDrivenChannelAdapter kafkaMessageChannel = new KafkaMessageDrivenChannelAdapter(concurrentListener);

    return IntegrationFlows
            .from(kafkaMessageChannel)
            .channel(requestsIn())
            .get();
}

For more information, see how-does-kafka-guarantee-consumers-doesnt-read-a-single-message-twice and the documentation for ConcurrentMessageListenerContainer.
