Kafka Streams: partition count drops at the sink while forwarding records

Time: 2019-09-30 14:22:09

Tags: apache-kafka apache-kafka-streams

I am using Kafka Streams to process some Kafka records. I have two nodes: one performs some transformations and the other is the final sink.

My topics are INTER_TOPIC and FINAL_TOPIC, each with 20 partitions. The producer that writes to INTER_TOPIC writes key-value records, and its partitioner is round-robin.
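For context, a keyed producer with an explicit round-robin partitioner is typically configured along these lines (a hypothetical sketch, not the producer code from the question; the method name and broker address are made up, and the built-in RoundRobinPartitioner only ships with kafka-clients 2.4+):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Hypothetical sketch of the producer side (not the code from the question).
private static void produceRoundRobin() {
    Properties producerProps = new Properties();
    producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // Override the default hash-based partitioner; older clients would need a
    // custom org.apache.kafka.clients.producer.Partitioner implementation instead.
    producerProps.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,
            "org.apache.kafka.clients.producer.RoundRobinPartitioner");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
        producer.send(new ProducerRecord<>("INTER_TOPIC", "some-key", "some-value"));
    }
}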

Below is the code on my intermediate transformation node.

public void streamHandler() {

        Properties props = getKafkaProperties();

        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> processStream = builder.stream("INTER_TOPIC",
                Consumed.with(Serdes.String(), Serdes.String()));

        //processStream.peek((key,value)->System.out.println("key :"+key+" value :"+value));

        processStream
                .map((key, value) -> getTransformer().transform(key, value))
                .filter((key, value) -> filteroutFailedRequest(key, value))
                .to("FINAL_TOPIC", Produced.with(Serdes.String(), Serdes.String()));


        KafkaStreams IStreams = new KafkaStreams(builder.build(), props);

        IStreams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread t, Throwable e) {

                logger.error("Thread Name :" + t.getName() + " Error while processing:", e);
            }
        });

        IStreams.cleanUp();
        IStreams.start();

        try {
            System.in.read();
        } catch (IOException e) {

            logger.error("Failed streaming ",e);
        }
    }
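The getKafkaProperties() helper referenced above is not shown in the question. A hypothetical reconstruction of what it might contain, given the 20 stream threads and the streams-user client id visible in the logs below:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

// Hypothetical reconstruction, not the actual helper from the question.
private static Properties getKafkaProperties() {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-user");      // matches the prefix in the logs below
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 20);              // one thread per INTER_TOPIC partition
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    return props;
}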

But my sink is receiving data on only 2 partitions, even though I configured 20 stream threads and verified that my producer writes to all 20 partitions of INTER_TOPIC. How can I tell whether my transformation node is forwarding to all 20 partitions of FINAL_TOPIC? The logs below show that only StreamThread-3 and StreamThread-4 ever receive records:

30 Sep 2019 10:39:41,416 INFO  c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-3] Received
30 Sep 2019 10:39:41,416 INFO  c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-4] Received
30 Sep 2019 10:39:41,416 INFO  c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-3] Received
30 Sep 2019 10:39:41,416 INFO  c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-4] Received
30 Sep 2019 10:40:57,427 INFO  c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-3] Received
30 Sep 2019 10:40:57,427 INFO  c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-4] Received
30 Sep 2019 10:40:57,427 INFO  c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-3] Received
30 Sep 2019 10:40:57,427 INFO  c.j.m.s.StreamHandler [289] [streams-user-61a77203-9afc-4c66-843d-94c20a509793-StreamThread-4] Received

1 answer:

Answer 0 (score: 1):

  "and its partitioner is round-robin"

Why do you think the partitioner is round-robin? By default, Kafka Streams applies hash-based partitioning on the record key.
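In other words, unless a custom partitioner is plugged in, records written to FINAL_TOPIC go through the producer's default, key-hash-based partition selection. Roughly, for a non-null key, the choice looks like this (a sketch; the helper name is made up, but the utilities are the ones the Java client uses):

import org.apache.kafka.common.utils.Utils;

// Sketch of the default partition choice for a record with a non-null key:
// the serialized key bytes are hashed with murmur2 and reduced modulo the
// partition count, so identical keys always land on the same partition.
static int defaultPartitionFor(byte[] keyBytes, int numPartitions) {
    return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
}

So if the map() step emits only a small number of distinct keys, at most that many partitions of FINAL_TOPIC will ever receive data, which would explain seeing records in just 2 of the 20 partitions.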

If you want to change the default partitioner, you can implement the StreamPartitioner interface and pass it via:

Produced.with(Serdes.String(), Serdes.String())
        .withStreamPartitioner(...)
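For example, a minimal sketch of a custom round-robin StreamPartitioner wired into the existing to() call. The class name is made up, it assumes a Kafka Streams version where StreamPartitioner#partition receives the topic name, and note that ignoring the key gives up per-key ordering on FINAL_TOPIC:

import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;

// Hypothetical custom partitioner that ignores the key and cycles through all partitions.
public class RoundRobinStreamPartitioner implements StreamPartitioner<String, String> {

    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public Integer partition(String topic, String key, String value, int numPartitions) {
        // floorMod keeps the result non-negative even after the counter overflows.
        return Math.floorMod(counter.getAndIncrement(), numPartitions);
    }
}

// Wiring it into the existing sink:
processStream
        .map((key, value) -> getTransformer().transform(key, value))
        .filter((key, value) -> filteroutFailedRequest(key, value))
        .to("FINAL_TOPIC",
                Produced.with(Serdes.String(), Serdes.String())
                        .withStreamPartitioner(new RoundRobinStreamPartitioner()));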