Kafka exactly once with transactional producer

Date: 2018-08-08 05:30:31

Tags: apache-kafka kafka-producer-api

I'm trying to fully understand Kafka exactly-once semantics using a transactional producer/consumer.

I came across the example below, but I'm still having a hard time understanding exactly-once. Is this code correct?

What does producer.sendOffsetsToTransaction() do? Should this be done for the same target topic?

What happens if the system crashes before consumer.commitSync()? Will the same messages be read again and duplicate messages be produced?

public class ExactlyOnceLowLevel {

    public void runConsumer() throws Exception {
        final KafkaConsumer<byte[], byte[]> consumer = createConsumer();
        final Producer<Long, String> producer = createProducer();

        producer.initTransactions();

        consumer.subscribe(Collections.singletonList(TOPIC));

        while (true) {
            final ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(100));

            try {
                final Map<TopicPartition, OffsetAndMetadata> currentOffsets = new HashMap<>();
                producer.beginTransaction();
                for (final ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.printf("Received Message topic =%s, partition =%s, offset = %d, key = %s, value = %s\n", record.topic(), record.partition(),
                                record.offset(), record.key(), record.value());

                    final ProducerRecord<Long, String> producerRecord =
                                new ProducerRecord<>(TOPIC_1, new BigInteger(record.key()).longValue(), new String(record.value())); // byte[].toString() would not return the payload
                    // send returns Future
                    final RecordMetadata metadata = producer.send(producerRecord).get();
                    // track offsets for the *source* topic, and commit the next offset to read (current + 1)
                    currentOffsets.put(new TopicPartition(record.topic(), record.partition()), new OffsetAndMetadata(record.offset() + 1));
                }
                producer.sendOffsetsToTransaction(currentOffsets, "my-group"); // must match the consumer's group.id; a bit annoying to reference the group id twice
                producer.commitTransaction();
                consumer.commitSync();
                currentOffsets.clear();
                // EXACTLY ONCE!
            }
            catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
                e.printStackTrace();
                // We can't recover from these exceptions, so our only option is to close the producer and exit.
                producer.close();
                return;
            }
            catch (final KafkaException e) {
                e.printStackTrace();
                // For all other exceptions, just abort the transaction and try again.
                producer.abortTransaction();
            }
            // (no finally block: closing the producer here would kill the loop after the first batch)
        }
    }

    private static KafkaConsumer<byte[], byte[]> createConsumer() {
        final Properties consumerConfig = new Properties();
        consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        consumerConfig.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName()); // must match KafkaConsumer<byte[], byte[]>
        consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        consumerConfig.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed"); // this has to be read_committed so the consumer only sees committed messages

        return new KafkaConsumer<>(consumerConfig);
    }

    private static Producer<Long, String> createProducer() {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); // serializers (not deserializers) on the producer side
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());

        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id"); // placeholder id; required before initTransactions() can be called
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.RETRIES_CONFIG, 3); // this is now safe !!!!
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // this has to be all
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1); // this has to be 1

        return new KafkaProducer<>(props);
    }

    public static void main(final String... args) throws Exception {

        final ExactlyOnceLowLevel example = new ExactlyOnceLowLevel();
        example.runConsumer();

    }
}

1 Answer:

Answer 0 (score: 1)

When using the read/process/write pattern with Kafka transactions, you should not try to commit offsets with the Consumer. As you hinted, this can cause issues.

In this scenario, offsets need to be added to the transaction, and you should only use sendOffsetsToTransaction() to do that. This method ensures these offsets are committed only if the full transaction is successful. See the Javadoc:

Sends a list of specified offsets to the consumer group coordinator, and also marks those offsets as part of the current transaction. These offsets will be considered committed only if the transaction is committed successfully. The committed offset should be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.

This method should be used when you need to batch consumed and produced messages together, typically in a consume-transform-produce pattern. Thus, the specified consumerGroupId should be the same as the config parameter group.id of the used consumer. Note that the consumer should have enable.auto.commit=false and should also not commit offsets manually (via sync or async commits).
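
To make the pattern concrete, below is a minimal sketch of a consume-transform-produce loop in which the offsets are committed only through the transaction, with no consumer.commitSync() at all. This is an illustration under assumptions rather than the answerer's code: the bootstrap address, topic names, group id, and transactional id are placeholders, and String serialization is used throughout for brevity.

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ConsumeTransformProduceSketch {

    public static void main(final String[] args) {
        final String groupId = "my-group"; // placeholder; must equal the consumer's group.id

        final Properties cp = new Properties();
        cp.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        cp.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        cp.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");          // offsets travel inside the transaction
        cp.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        cp.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        cp.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        final Properties pp = new Properties();
        pp.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // placeholder
        pp.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id"); // placeholder; enables transactions
        pp.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        pp.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pp)) {

            producer.initTransactions();
            consumer.subscribe(Collections.singletonList("input-topic")); // placeholder topic

            while (true) {
                final ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                if (records.isEmpty()) {
                    continue;
                }
                try {
                    producer.beginTransaction();
                    final Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (final ConsumerRecord<String, String> record : records) {
                        producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
                        // Commit the *source* partition's next offset to read: lastProcessedMessageOffset + 1.
                        offsets.put(new TopicPartition(record.topic(), record.partition()),
                                    new OffsetAndMetadata(record.offset() + 1));
                    }
                    // Offsets and produced records commit (or abort) atomically;
                    // there is no consumer.commitSync() anywhere in this loop.
                    producer.sendOffsetsToTransaction(offsets, groupId);
                    producer.commitTransaction();
                } catch (final ProducerFencedException e) {
                    return; // fatal: another instance with the same transactional.id took over
                } catch (final KafkaException e) {
                    // Roll back the batch; its offsets were never committed, so the
                    // records are reprocessed once the consumer group rewinds to them.
                    producer.abortTransaction();
                }
            }
        }
    }
}

This arrangement also addresses the crash question: if the process dies before commitTransaction(), the transaction is aborted and its offsets are never committed, so a restarted instance re-reads and re-processes the same records. Because the aborted output is invisible to read_committed consumers, the reprocessing does not surface duplicates downstream.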