Kafka consumer cannot receive some messages

Date: 2018-11-01 03:08:36

Tags: apache-kafka kafka-consumer-api

Recently I ran into some problems while using the Kafka consumer API. Here is my code:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.security.plain.PlainLoginModule;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerClientDemo {

    private static final String KAFKA_SERVERS = "17.162.110.1:9292,17.162.112.1:9293,17.162.114.1:9294";
    private static final String GROUP = "group-admin-test";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_SERVERS);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP);
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mbGW4rH5");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");

        props.put(SaslConfigs.SASL_JAAS_CONFIG, String.format(
                PlainLoginModule.class.getName() + " required username=\"%s\" " + "password=\"%s\";",
                "admin",
                "admin"
        ));
        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("mbGW4rH5"));
        final AtomicBoolean isShuttingDown = new AtomicBoolean(false);
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            isShuttingDown.set(true);
            synchronized (consumer) {
                consumer.close();
            }
        }));
        try {
            while (!isShuttingDown.get()) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("topic=%s, partition=%s, offset = %d, key = %s, value = %s%n",
                            record.topic(), record.partition(), record.offset(), record.key(), record.value());
                }
            }
        } catch (Exception e) {
            e.printStackTrace(); // surface the failure instead of exiting silently
            System.exit(1);
        }
        System.exit(0);
    }
}
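
A side note on the code above, probably unrelated to the fetch error: KafkaConsumer is not thread-safe, so the shutdown hook calling consumer.close() from a second thread while poll() may still be running can throw ConcurrentModificationException (the synchronized block does not help, because the poll loop never synchronizes on the consumer). The pattern documented in the KafkaConsumer javadoc is to call wakeup() from the other thread instead. A minimal sketch of that shutdown pattern, reusing the consumer above:

// Shutdown sketch following the KafkaConsumer javadoc pattern:
// wakeup() is the only KafkaConsumer method safe to call from another thread.
final Thread mainThread = Thread.currentThread();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    consumer.wakeup();          // aborts a blocked poll() with WakeupException
    try {
        mainThread.join();      // wait until the poll loop has closed the consumer
    } catch (InterruptedException ignored) {
    }
}));
try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        // ... handle records as in the loop above ...
    }
} catch (org.apache.kafka.common.errors.WakeupException e) {
    // expected during shutdown; nothing to do
} finally {
    consumer.close();           // close on the polling thread itself
}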

At first everything worked fine, but after 15 messages had been received successfully, the console printed:

10:42:40:446 INFO [FetchSessionHandler] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Node 2 sent an invalid full fetch response with extra=(mbGW4rH5-0, response=(
10:43:10:499 INFO [FetchSessionHandler] [Consumer clientId=mbGW4rH5, groupId=group-admin-test] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 2: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
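
For context, the 30000 ms in the TimeoutException matches the consumer's request.timeout.ms, whose default Kafka 2.0 lowered to 30 seconds (KIP-266). Raising it does not explain the invalid full fetch response, but it can rule out a broker that is merely slow to answer. A tentative tweak, not a root-cause fix:

// Assumption: the fetch response is slow rather than lost. Raising the
// request timeout only tests that hypothesis; it is not a root-cause fix.
props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "60000");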

After that the client stalls and cannot receive any messages, so I enabled debug logging. Here is the log:

10:58:11:200 INFO [FetchSessionHandler] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 2: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
10:58:11:200 DEBUG [Fetcher] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Added READ_UNCOMMITTED fetch request for partition mbGW4rH5-0 at offset 15 to node 17.162.114.1:9294 (id: 2 rack: null)
10:58:11:200 DEBUG [FetchSessionHandler$Builder] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 2 with 1 partition(s).
10:58:11:200 DEBUG [Fetcher] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Sending READ_UNCOMMITTED FullFetchRequest(mbGW4rH5-0) to broker 17.162.114.1:9294 (id: 2 rack: null)
10:58:13:161 DEBUG [AbstractCoordinator] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Sending Heartbeat request to coordinator cloud-access.hanclouds.com:9292 (id: 2147483647 rack: null)
10:58:13:207 DEBUG [AbstractCoordinator$HeartbeatResponseHandler] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Received successful Heartbeat response
10:58:15:113 DEBUG [ConsumerCoordinator] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Sending asynchronous auto-commit of offsets {mbGW4rH5-0=OffsetAndMetadata{offset=15, metadata=''}}
10:58:15:159 DEBUG [ConsumerCoordinator$OffsetCommitResponseHandler] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Committed offset 15 for partition mbGW4rH5-0
10:58:15:159 DEBUG [ConsumerCoordinator$4] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Completed asynchronous auto-commit of offsets {mbGW4rH5-0=OffsetAndMetadata{offset=15, metadata=''}}
10:58:16:162 DEBUG [AbstractCoordinator] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Sending Heartbeat request to coordinator 17.162.110.1:9292 (id: 2147483647 rack: null)
10:58:16:217 DEBUG [AbstractCoordinator$HeartbeatResponseHandler] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Received successful Heartbeat response
10:58:19:162 DEBUG [AbstractCoordinator] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Sending Heartbeat request to coordinator 17.162.110.1:9292 (id: 2147483647 rack: null)
10:58:19:211 DEBUG [AbstractCoordinator$HeartbeatResponseHandler] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Received successful Heartbeat response
10:58:20:114 DEBUG [ConsumerCoordinator] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Sending asynchronous auto-commit of offsets {mbGW4rH5-0=OffsetAndMetadata{offset=15, metadata=''}}
10:58:20:165 DEBUG [ConsumerCoordinator$OffsetCommitResponseHandler] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Committed offset 15 for partition mbGW4rH5-0
10:58:20:165 DEBUG [ConsumerCoordinator$4] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Completed asynchronous auto-commit of offsets {mbGW4rH5-0=OffsetAndMetadata{offset=15, metadata=''}}
10:58:22:163 DEBUG [AbstractCoordinator] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Sending Heartbeat request to coordinator 17.162.110.1:9292 (id: 2147483647 rack: null)
10:58:22:226 DEBUG [AbstractCoordinator$HeartbeatResponseHandler] [Consumer clientId=mbGW4rH5, groupId=group-mbGW4rH5] Received successful Heartbeat response
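
One way to tell whether offset 15 still exists on the broker (as opposed to the fetch path being broken) is to compare the partition's beginning and end offsets against the stuck position. A diagnostic sketch reusing the props from above; the topic and partition names are taken from the logs:

// Diagnostic sketch: does offset 15 exist on partition mbGW4rH5-0?
// assign() is used instead of subscribe() so the probe does not join the group.
TopicPartition tp = new TopicPartition("mbGW4rH5", 0);   // org.apache.kafka.common.TopicPartition
try (KafkaConsumer<String, String> probe = new KafkaConsumer<>(props)) {
    probe.assign(Collections.singletonList(tp));
    long earliest = probe.beginningOffsets(Collections.singletonList(tp)).get(tp);
    long latest = probe.endOffsets(Collections.singletonList(tp)).get(tp);
    // If 15 < earliest the record has been deleted or compacted away;
    // if 15 >= latest there is simply nothing at that offset to fetch yet.
    System.out.printf("%s: earliest=%d, latest=%d, consumer stuck at 15%n", tp, earliest, latest);
}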

The client seems unable to fetch offset 15, so I switched to a new consumer group and set the offset to latest, and then it worked again. So my questions are: why can't offset 15 be fetched, and how can I skip an unfetchable offset so the client does not hang? By the way, the Kafka version is 2.0.0, and so is kafka-clients.
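
On the skipping part of the question: the consumer can jump past an offset with seek(). A minimal sketch of that idea, assuming it is acceptable to lose the record at the stuck offset; the empty-poll threshold is an arbitrary placeholder, and note that an idle partition with no new data would also trip this heuristic:

// Hypothetical mitigation: if poll() stays empty while the position does not
// move, seek one past the stuck offset. The skipped record is lost for good.
int emptyPolls = 0;
while (!isShuttingDown.get()) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    if (records.isEmpty()) {
        if (++emptyPolls >= 100) {                     // arbitrary threshold
            for (TopicPartition tp : consumer.assignment()) {
                long stuck = consumer.position(tp);
                consumer.seek(tp, stuck + 1);          // skip the unfetchable offset
                System.out.printf("skipped offset %d on %s%n", stuck, tp);
            }
            emptyPolls = 0;
        }
    } else {
        emptyPolls = 0;
        // ... handle records as before ...
    }
}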

Thanks.

0 Answers

No answers yet.