Kafka Streams not using custom SerDes after repartitioning in groupBy

Date: 2020-07-01 03:36:05

Tags: apache-kafka-streams

I am trying to read from a topic and group the records by a new key that is not a String. The grouping appears to trigger a repartition, and that is where the key-specific SerDes do not seem to be applied.

import java.time.Duration;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Printed;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.WindowStore;

final StreamsBuilder builder = new StreamsBuilder();
KStream<String, HistoryEvent> source = builder.stream("test-topic",
        Consumed.with(Serdes.String(), new HistoryEventSerDes()));
source.groupBy((key, value) -> HistoryKey.builder()
                .id(value.getKey())
                .build(),
        Grouped.<HistoryKey, HistoryEvent>as("repartition")
                .withKeySerde(new HistoryKeySerDes())
                .withValueSerde(new HistoryEventSerDes()))
    .windowedBy(TimeWindows.of(Duration.ofSeconds(5)))
    .count(Materialized.<HistoryKey, Long, WindowStore<Bytes, byte[]>>as("test-topic-store")
        .withKeySerde(new HistoryKeySerDes())
        .withValueSerde(Serdes.Long()))
    .toStream()
    .print(Printed.toSysOut());

Here is the exception:

org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=1_0, processor=KSTREAM-SOURCE-0000000005, topic=test-topic-service-key-repartition-repartition, partition=0, offset=0, stacktrace=org.apache.kafka.streams.errors.StreamsException: A serializer (com.example.org.HistoryKeySerializer) is not compatible to the actual key type (key type: com.example.org.HistoryEvent). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters.
    at org.apache.kafka.streams.state.StateSerdes.rawKey(StateSerdes.java:175)
    at org.apache.kafka.streams.state.internals.MeteredWindowStore.keyBytes(MeteredWindowStore.java:222)
    at org.apache.kafka.streams.state.internals.MeteredWindowStore.fetch(MeteredWindowStore.java:152)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl$WindowStoreReadWriteDecorator.fetch(ProcessorContextImpl.java:539)
    at org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:122)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)
    at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:87)
    at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:363)
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:199)
    at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:425)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:912)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:819)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:788)
    Caused by: java.lang.ClassCastException: class com.example.org.HistoryEvent cannot be cast to class com.example.org.HistoryKey (com.example.org.HistoryEvent and com.example.org.HistoryKey are in unnamed module of loader org.springframework.boot.loader.LaunchedURLClassLoader @d2cc05a)
    at com.example.org.HistoryKeySerializer.serialize(HistoryKeySerializer.java:13)
    at org.apache.kafka.streams.state.StateSerdes.rawKey(StateSerdes.java:171)
    ... 15 more
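The root cause at the bottom of the trace is the ClassCastException inside HistoryKeySerializer: the serializer is written for HistoryKey, but the window store hands it a HistoryEvent. Because Java generics are erased at runtime, a `Serializer<HistoryKey>` effectively receives an `Object`, so the mismatch cannot be caught at compile time and only surfaces as a runtime cast inside `serialize`. A minimal, Kafka-free sketch of that mechanism (class names are stand-ins for the ones in the trace):

```java
import java.nio.charset.StandardCharsets;

public class CastDemo {
    // Stand-ins for the application classes named in the stack trace.
    static class HistoryKey {
        final String id;
        HistoryKey(String id) { this.id = id; }
    }

    static class HistoryEvent {
        final String key;
        HistoryEvent(String key) { this.key = key; }
    }

    // Mirrors what an erased Serializer<HistoryKey> sees at runtime: the
    // parameter is effectively Object, and the cast in the method body is
    // the line that throws (HistoryKeySerializer.java:13 in the trace).
    static byte[] serializeKey(Object data) {
        HistoryKey k = (HistoryKey) data; // throws ClassCastException for a HistoryEvent
        return k.id.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Works for the type the serializer was written for:
        System.out.println(serializeKey(new HistoryKey("42")).length); // prints 2

        // Reproduces the failure when the actual key type is HistoryEvent:
        try {
            serializeKey(new HistoryEvent("42"));
        } catch (ClassCastException e) {
            System.out.println("ClassCastException");
        }
    }
}
```

This is why the topology builds without complaint and the error only appears once a record reaches the windowed store: the store serializes whatever key type actually arrives, and here that type is still HistoryEvent rather than the HistoryKey the configured serde expects.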

0 Answers:

No answers yet.