Kafka topic keeps rolling new log segments with no overrides; log files are tiny

Date: 2018-02-27 14:37:18

Tags: apache-kafka kafka-consumer-api kafka-producer-api

I have a Kafka topic that keeps rolling new log segments even though log.segment.bytes is set to 512 megabytes in the broker configuration. Most of the log files average only 5-10 kilobytes.

Looking at the topic, I don't see any overrides that would explain this. Even when I create an override and set segment.bytes to any value, the same behavior continues.

I'm a bit stumped by this. Any ideas on where to look next?

root@utilitypod-985642408:/opt/kafka/bin# ./kafka-topics.sh --zookeeper 
zookeeper:2181 --describe --topic dev.com.redacted.redacted.services.redacted.priceStream.notification
Topic:dev.com.redacted.redacted.services.redacted.priceStream.notification   PartitionCount:3        ReplicationFactor:3     Configs:segment.bytes=536870912,segment.index.bytes=53687091,flush.messages=20000,flush.ms=600000
    Topic: dev.com.redacted.redacted.services.redacted.priceStream.notification  Partition: 0    Leader: 1       Replicas: 1,2,0 Isr: 2,1,0
    Topic: dev.com.redacted.redacted.services.redacted.priceStream.notification  Partition: 1    Leader: 2       Replicas: 2,0,1 Isr: 0,2,1
    Topic: dev.com.redacted.redacted.services.redacted.priceStream.notification  Partition: 2    Leader: 0       Replicas: 0,1,2 Isr: 1,0,2
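For what it's worth, topic-level overrides can also be listed directly with kafka-configs.sh. A minimal sketch, assuming the stock tooling bundled with this Kafka release; only explicit overrides, not broker defaults, appear in its output:

# Show only the overrides set on the topic itself:
./kafka-configs.sh --zookeeper zookeeper:2181 --describe \
    --entity-type topics \
    --entity-name dev.com.redacted.redacted.services.redacted.priceStream.notification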

And here is my Kafka broker configuration (the brokers run in a k8s cluster, but that shouldn't matter):

log.dirs=/var/lib/kafka/data/topics
num.partitions=3
default.replication.factor=3
min.insync.replicas=2
auto.create.topics.enable=true
num.recovery.threads.per.data.dir=4

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
#init#broker.id=#init#
#init#broker.rack=#init#

#listeners=PLAINTEXT://:9092
listeners=OUTSIDE://:9094,PLAINTEXT://:9092
#init#advertised.listeners=OUTSIDE://#init#,PLAINTEXT://:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL,OUTSIDE:PLAINTEXT
inter.broker.listener.name=PLAINTEXT
num.network.threads=2
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
queued.max.requests=16
message.max.bytes=1000000
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
log.flush.interval.messages=20000
log.retention.hours=168
log.segment.bytes=536870912
log.flush.scheduler.interval.ms=2000
log.cleaner.enable=false
log.retention.check.interval.ms=60000
zookeeper.connect=zookeeper:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true

1 Answer:

Answer 0 (score: 2):

Is there any chance you are sending records whose timestamps are older than your log.retention.hours (or .ms, etc.) setting allows?

If so, your records would be deleted almost immediately. The segments would still be rolled, exactly as you describe, but the last offset would be retained and would equal the log end offset, meaning the log is empty. With log.retention.hours=168 in your config, any record whose timestamp is more than 7 days in the past becomes eligible for deletion on the next retention check, which here runs every 60 seconds per log.retention.check.interval.ms=60000.
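One way to test this hypothesis is to dump a segment file and inspect the record timestamps directly with Kafka's bundled DumpLogSegments tool. A sketch, assuming the log.dirs layout from the broker config above; the partition directory and segment file name are illustrative:

# Dump offsets, timestamps, and payloads from one segment (path/file name illustrative):
./kafka-run-class.sh kafka.tools.DumpLogSegments \
    --deep-iteration --print-data-log \
    --files /var/lib/kafka/data/topics/dev.com.redacted.redacted.services.redacted.priceStream.notification-0/00000000000000000000.log

Each record in the output carries a CreateTime (or LogAppendTime) value; if those timestamps fall more than log.retention.hours in the past, retention will delete the segments almost as soon as they roll, which would match the tiny, constantly rolling segment files you're seeing.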