Java Kafka consumer can't read any records from the topic

Date: 2018-06-23 00:16:14

Tags: java docker apache-kafka

I've been following this great tutorial to start multiple Kafka brokers on my local environment via Docker. I can start a producer in one terminal window and receive its messages in a consumer started in another terminal window. To do so, I ran:

    ./start-kafka-shell.sh <my public ip> 10.224.49.140:2181

as described in the tutorial, and then went on to create a producer:

    $KAFKA_HOME/bin/kafka-console-producer.sh --topic=topic --broker-list=`broker-list.sh`

and a consumer:

    $KAFKA_HOME/bin/kafka-console-consumer.sh --topic=topic --zookeeper=$ZK

Now I'm writing a Java consumer in my local environment, trying to connect to the Kafka port exposed through Docker. Here's the Java configuration:

    Properties props = new Properties();
    props.put("bootstrap.servers", "<my public IP>:9092");
    props.put("group.id", "test-consumer-group");
    props.put("key.deserializer", StringDeserializer.class.getName());
    props.put("value.deserializer", StringDeserializer.class.getName());
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("topics"));

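For reference, the full consuming side looks roughly like the sketch below with the 0.9 KafkaConsumer API (the class name TopicListener is just for illustration, and the broker address is a placeholder). With the 0.9 consumer, group joining, heartbeats, and fetching all happen inside poll(), so records only arrive while poll() is being called in a loop:

    import java.util.Arrays;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class TopicListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "<my public IP>:9092"); // placeholder
            props.put("group.id", "test-consumer-group");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("topics"));
            System.out.println("Listening to topics...");

            // poll() drives the whole client: joining the group, heartbeating,
            // and fetching. Nothing is delivered unless it is called repeatedly.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d, key=%s, value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
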
And... when I produce messages through the producer, nothing happens. This is the only log output I can see on the Java consumer side:

    [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
    metric.reporters = []
    metadata.max.age.ms = 300000
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    group.id = test-consumer-group
    partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
    reconnect.backoff.ms = 50
    sasl.kerberos.ticket.renew.window.factor = 0.8
    max.partition.fetch.bytes = 1048576
    bootstrap.servers = [10.224.49.140:9092]
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    ssl.keystore.type = JKS
    ssl.trustmanager.algorithm = PKIX
    enable.auto.commit = true
    ssl.key.password = null
    fetch.max.wait.ms = 500
    sasl.kerberos.min.time.before.relogin = 60000
    connections.max.idle.ms = 540000
    ssl.truststore.password = null
    session.timeout.ms = 30000
    metrics.num.samples = 2
    client.id =
    ssl.endpoint.identification.algorithm = null
    key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
    ssl.protocol = TLS
    check.crcs = true
    request.timeout.ms = 40000
    ssl.provider = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.keystore.location = null
    heartbeat.interval.ms = 3000
    auto.commit.interval.ms = 5000
    receive.buffer.bytes = 32768
    ssl.cipher.suites = null
    ssl.truststore.type = JKS
    security.protocol = PLAINTEXT
    ssl.truststore.location = null
    ssl.keystore.password = null
    ssl.keymanager.algorithm = SunX509
    metrics.sample.window.ms = 30000
    fetch.min.bytes = 1
    send.buffer.bytes = 131072
    auto.offset.reset = latest

    Listening to topics...
    [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.1
    [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 23c69d62a0cabf06

I think this is a "coordinator not found" kind of problem: I skimmed through the client code while stepping through it, but I couldn't find the spot I suspect is causing it. I also can't seem to enable INFO-level logging across the whole application, though I'm still working on that.
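
On the logging point, if the slf4j-simple binding is what's on the classpath, the level can be raised with a system property before the first logger is created; a minimal sketch, assuming org.slf4j:slf4j-simple is the binding (the class name Main is just for illustration):

    public class Main {
        public static void main(String[] args) {
            // slf4j-simple reads this property when a logger is first created,
            // so it must be set before any Kafka client classes are touched.
            System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "debug");
            // ... build and start the consumer after this point ...
        }
    }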

I'm using the Kafka 0.9.0 client (org.apache.kafka:kafka-clients, version 0.9.0.1), and I'm fairly confident the tutorial is using Kafka 0.9.0 as well.

Has anyone run into this before, or can you point out what I'm missing with the coordinator and/or the configuration? And if my consumer is the only one running, why would the application have trouble finding a coordinator?

Any pointers would be greatly appreciated. Thanks!

Edit #1: Following cricket_007's response, I went through the Confluent Docker quickstart, but I'm still running into trouble:

    Error while fetching metadata with correlation id 6 : {mytopic=LEADER_NOT_AVAILABLE}

I've tried changing advertised.host.name and advertised.port. Still no luck. I'm going to try Docker Machine next.
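
In the meantime, one thing that can be checked from the Java side is which host/port the broker actually advertises back to clients, since that is the address a client follows after the bootstrap connection. A rough sketch using the 0.9 KafkaConsumer.partitionsFor() call; the class name MetadataProbe is just for illustration, the broker address is a placeholder, and mytopic is the topic from the error above:

    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MetadataProbe {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "<my public IP>:9092"); // placeholder
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // partitionsFor() fetches metadata from the bootstrap broker; the
                // leader host/port it reports come from the broker's advertised
                // listener, i.e. the address the client will actually try to use.
                List<PartitionInfo> partitions = consumer.partitionsFor("mytopic");
                for (PartitionInfo p : partitions) {
                    System.out.printf("partition=%d leader=%s%n", p.partition(), p.leader());
                }
            }
        }
    }

If the leader comes back as something my machine can't reach (for example a container-internal hostname or 127.0.0.1), that would at least narrow the problem down.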

Edit #2:

I've installed Docker Machine, but before running anything with it I noticed the following logs:

    [2018-06-23 15:12:26,497] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
    [2018-06-23 15:12:26,501] INFO Result of znode creation at /brokers/ids/0 is: OK (kafka.zk.KafkaZkClient)
    [2018-06-23 15:12:26,503] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(127.0.0.1,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
    ....
    [2018-06-23 15:13:01,657] ERROR [KafkaApi-0] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)

Now I'm confused. It sounds like the broker with id = 0 actually did start up and register itself. So why is the error above being thrown?

0 Answers:

No answers yet