Kafka producer client API cannot send asynchronously when the target broker uses SASL/PLAIN

Time: 2018-09-17 06:17:37

Tags: java apache-kafka kafka-producer-api

I have a simple demo that transfers data from one Kafka cluster that does not use SASL to another Kafka cluster that uses SASL/PLAIN. The code is as follows:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.*;
    import org.apache.kafka.clients.producer.*;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.security.plain.PlainLoginModule;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    // consumer for the source cluster (no SASL)
    Properties consumerProps = new Properties();
    consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.50.20:9092");
    consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP);
    consumerProps.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
    consumer.subscribe(Collections.singletonList(SOURCE_TOPIC)); // SOURCE_TOPIC: source topic name (placeholder, not shown in the original)

    // producer for the target cluster (SASL/PLAIN)
    Properties producerProps = new Properties();
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.100:9092");
    producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    producerProps.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
    producerProps.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    producerProps.put(SaslConfigs.SASL_JAAS_CONFIG, String.format(
            PlainLoginModule.class.getName() + " required username=\"%s\" password=\"%s\";",
            "admin",
            "admin"
    ));
    // and some other producer properties

    KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            // separate name so it does not shadow the loop variable
            ProducerRecord<String, String> outRecord =
                    new ProducerRecord<>("test", record.key(), record.value());
            producer.send(outRecord);
        }
    }

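(As an aside, the inline sasl.jaas.config above should be equivalent to the classic static JAAS file passed with -Djava.security.auth.login.config, e.g. a kafka_client_jaas.conf; shown only for reference:)

    KafkaClient {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="admin"
        password="admin";
    };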
This simply consumes data and produces it into the other Kafka cluster. But here is the thing: when I wrote another consumer client to consume the data from 192.168.1.100:9092,
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.100:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
    props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");

    String password = EncryptUtil.encryptPassword(USER_NAME, QUERY_KEY, QUERY_SECRET);
    System.out.println(password); // debug output only; the JAAS config below still uses "admin"
    props.put(SaslConfigs.SASL_JAAS_CONFIG, String.format(
            PlainLoginModule.class.getName() + " required username=\"%s\" password=\"%s\";",
            "admin",
            "admin"
    ));

    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Collections.singletonList("flink-kafka")); // topic name taken from the logs below
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(record.value());
        }
    }

it only prints:

    9419 [main] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Discovered group coordinator 192.168.1.100:9092 (id: 2147483647 rack: null)
    18456 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Revoking previously assigned partitions []
    18456 [main] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] (Re-)joining group
    18471 [main] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Successfully joined group with generation 21
    18471 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Setting newly assigned partitions [flink-kafka-0]
    27522 [main] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Resetting offset for partition flink-kafka-0 to offset 0.
    27537 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Node 0 sent an invalid full fetch response with extra=(flink-kafka-0, response=(
    57556 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
    87563 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
    117585 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
    146158 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Node 0 sent an invalid full fetch response with extra=(flink-kafka-0, response=(
    176263 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
    206333 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
    236418 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
    266492 [kafka-coordinator-heartbeat-thread | group-snKiBQ0O] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
    296558 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..
    317372 [kafka-coordinator-heartbeat-thread | group-snKiBQ0O] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Group coordinator 192.168.1.100:9092 (id: 2147483647 rack: null) is unavailable or invalid, will attempt rediscovery
    326571 [main] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=snKiBQ0O, groupId=group-snKiBQ0O] Error sending fetch request (sessionId=INVALID, epoch=INITIAL) to node 0: org.apache.kafka.common.errors.TimeoutException: Failed to send request after 30000 ms..

Then I used the shell command ./bin/kafka-consumer-groups.sh --describe --bootstrap-server 192.168.1.100:9092 --command-config config/client_plain.properties --group group-snKiBQ0O to check whether the data had been sent correctly: LOG-END-OFFSET was 5912, but CURRENT-OFFSET was always 0.
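Here config/client_plain.properties carries the same SASL client settings as the producer above, along the lines of:

    # SASL client settings for the CLI tools (sketch, mirroring the producer config above)
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin";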

Finally, I changed producer.send(record); to producer.send(record).get();, and the consumer client successfully received the data. Why is that? Why can't the producer send data asynchronously when the broker uses SASL/PLAIN? Is there a good way to handle this, such as the callback-based pattern sketched below?
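This is the non-blocking pattern I would expect to work: pass a Callback to send() so asynchronous delivery failures become visible, and flush()/close() the producer before the process exits so buffered records are not lost. A minimal sketch (the error handling is my own addition, not part of the original demo):

    // sketch: consume-and-forward loop with a delivery callback and clean shutdown;
    // "consumer" and "producer" are built exactly as in the first code block
    try {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                ProducerRecord<String, String> out =
                        new ProducerRecord<>("test", record.key(), record.value());
                producer.send(out, (metadata, exception) -> {
                    if (exception != null) {
                        // async failures (authentication, timeout, ...) surface here;
                        // with a bare send(out) they stay hidden inside the returned Future
                        exception.printStackTrace();
                    }
                });
            }
        }
    } finally {
        producer.flush(); // block until everything still buffered has been delivered
        producer.close();
    }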

Thanks.

UPDATE: I deleted all of the Kafka logs and ZooKeeper data and everything works now, but I still don't understand why this happened.

0 Answers
