How does multi-threaded producer performance behave when KafkaTemplate's autoFlush is set to true?

Date: 2018-10-17 02:57:40

Tags: spring-kafka

KafkaTemplate has an autoFlush option that flushes the producer after every send.
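For context, here is a minimal sketch of how autoFlush is enabled; the broker address and class name are assumptions, and the two-argument constructor is the KafkaTemplate(producerFactory, autoFlush) form whose JavaDoc is quoted further down:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

public class AutoFlushTemplateExample {

    public static KafkaTemplate<String, String> buildTemplate() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        // DefaultKafkaProducerFactory creates and caches a single shared Producer by default.
        ProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(props);

        // The second constructor argument is autoFlush: doSend() will call flush() after every send.
        return new KafkaTemplate<>(pf, true);
    }
}

The doSend() implementation below shows where that flush happens: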

/**
 * Send the producer record.
 * @param producerRecord the producer record.
 * @return a Future for the {@link RecordMetadata}.
 */
protected ListenableFuture<SendResult<K, V>> doSend(final ProducerRecord<K, V> producerRecord) {
    if (this.transactional) {
        Assert.state(inTransaction(),
                "No transaction is in process; "
                    + "possible solutions: run the template operation within the scope of a "
                    + "template.executeInTransaction() operation, start a transaction with @Transactional "
                    + "before invoking the template method, "
                    + "run in a transaction started by a listener container when consuming a record");
    }
    final Producer<K, V> producer = getTheProducer();
    if (this.logger.isTraceEnabled()) {
        this.logger.trace("Sending: " + producerRecord);
    }
    final SettableListenableFuture<SendResult<K, V>> future = new SettableListenableFuture<>();
    producer.send(producerRecord, new Callback() {

        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            try {
                if (exception == null) {
                    future.set(new SendResult<>(producerRecord, metadata));
                    if (KafkaTemplate.this.producerListener != null
                            && KafkaTemplate.this.producerListener.isInterestedInSuccess()) {
                        KafkaTemplate.this.producerListener.onSuccess(producerRecord.topic(),
                                producerRecord.partition(), producerRecord.key(), producerRecord.value(), metadata);
                    }
                }
                else {
                    future.setException(new KafkaProducerException(producerRecord, "Failed to send", exception));
                    if (KafkaTemplate.this.producerListener != null) {
                        KafkaTemplate.this.producerListener.onError(producerRecord.topic(),
                                producerRecord.partition(),
                                producerRecord.key(),
                                producerRecord.value(),
                                exception);
                    }
                }
            }
            finally {
                if (!KafkaTemplate.this.transactional) {
                    closeProducer(producer, false);
                }
            }
        }

    });
    if (this.autoFlush) {
        flush();
    }
    if (this.logger.isTraceEnabled()) {
        this.logger.trace("Sent: " + producerRecord);
    }
    return future;
}

That seems fine for anyone who wants to make each send request effectively synchronous.

However, when it is used with a DefaultKafkaProducerFactory, which produces a singleton Producer, all threads using the KafkaTemplate point to the same single Producer and therefore share its send queue.

In a multi-threaded web environment, each thread then has to wait not only for its own messages but also for all the messages the other threads have already queued.
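As a concrete illustration of that scenario, here is a sketch (the thread count and topic name are assumptions, and buildTemplate() is the hypothetical helper from the earlier snippet) in which several threads send through the same template, so every autoFlush-triggered flush() waits on the shared producer's entire record accumulator:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.kafka.core.KafkaTemplate;

public class SharedProducerContention {

    public static void main(String[] args) {
        // All threads share the single Producer cached by DefaultKafkaProducerFactory.
        KafkaTemplate<String, String> template = AutoFlushTemplateExample.buildTemplate();

        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            final int id = i;
            // With autoFlush=true, each of these sends triggers producer.flush(), which
            // blocks until every record queued by any thread has been transmitted.
            pool.submit(() -> template.send("demo-topic", "thread-" + id, "payload-" + id));
        }
        pool.shutdown();
    }
}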

I think this is a bad idea not only for performance but also for availability: if some Kafka broker crashes, every thread that wants to send a message is likely to hang on flushes it never needed in the first place.

Am I right? Shouldn't there be a warning about this in the guides, the documentation, or somewhere else?

1 Answer:

Answer 0 (score: 0)

I think the JavaDoc is pretty clear...

/**
 * Create an instance using the supplied producer factory and autoFlush setting.
 * <p>
 * Set autoFlush to {@code true} if you have configured the producer's
 * {@code linger.ms} to a non-default value and wish send operations on this template
 * to occur immediately, regardless of that setting, or if you wish to block until the
 * broker has acknowledged receipt according to the producer's {@code acks} property.
 * @param producerFactory the producer factory.
 * @param autoFlush true to flush after each send.
 * @see Producer#flush()
 */

What else do you need?
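For comparison, one way for a caller to wait only on its own record, rather than flushing records queued by other threads, is to block on the future returned by send(). A minimal sketch, with the topic name and timeout as assumptions:

import java.util.concurrent.TimeUnit;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

public class PerRecordBlockingSend {

    public static void sendAndWait(KafkaTemplate<String, String> template) throws Exception {
        ListenableFuture<SendResult<String, String>> future =
                template.send("demo-topic", "key", "value");
        // Blocks until this record's send callback completes (per the producer's acks setting),
        // without forcing a flush of records queued by other threads.
        SendResult<String, String> result = future.get(10, TimeUnit.SECONDS);
        System.out.println("Acked at offset " + result.getRecordMetadata().offset());
    }
}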
