Kafka Streams topology involving different applications

Asked: 2019-06-10 23:14:06

Tags: apache-kafka apache-kafka-streams

Suppose I declare two topologies in which Application1's sink node is Application2's source node, and I deploy them like this: first I deploy Application2, which waits for data at its source node, and only then do I deploy Application1, which supplies the data at its sink node.

Is this feasible, or, when the two topologies belong to two different applications, do I have to move the data between them through an Apache Kafka Connector?

(The documentation does not cover topologies that involve two or more applications.)

In other words, the data is not pushed from Application1 to Application2 immediately, record by record; instead, Application2 can only read from its source-node topic after Application1 has finished processing the records from its own source node.
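
For context, this is roughly how I run each topology as its own Kafka Streams application. The class name, the application.id values, the broker address and the String serdes are placeholders; the Topology arguments are the ones built in the snippets below, and in practice each application lives in its own project and process:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class TwoApplicationSetup {

    // Application1: reads "Vectors", writes "IncreaseOfC" (its own process and application.id).
    static KafkaStreams startApplication1(Topology application1Topology) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "application1");      // placeholder application.id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        KafkaStreams streams = new KafkaStreams(application1Topology, props);
        streams.start();
        return streams;
    }

    // Application2: reads "IncreaseOfC", writes "Outliers" (its own process and application.id).
    static KafkaStreams startApplication2(Topology application2Topology) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "application2");      // a different application.id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        KafkaStreams streams = new KafkaStreams(application2Topology, props);
        streams.start();
        return streams;
    }
}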

Application1 topology

topology.addSource(sourceNodeName, "Vectors")                              // read from the "Vectors" topic
        .addProcessor(processorName, () -> new NodeProcessor(stateStoreName), sourceNodeName)
        .addStateStore(vectorStoreBuilder, processorName)
        .connectProcessorAndStateStores(processorName, stateStoreName)
        .addSink(vectorSink, "IncreaseOfC", stringSerializer, stringSerializer, processorName); // write to the "IncreaseOfC" topic

Application2 topology

topology.addSource(sourceNodeName, "IncreaseOfC")                          // read from the "IncreaseOfC" topic written by Application1
        .addProcessor(vectorProcessorName, () -> new NodeProcessor(stateStoreName), sourceNodeName)
        .addStateStore(vectorStoreBuilder, vectorProcessorName)
        .connectProcessorAndStateStores(vectorProcessorName, stateStoreName)
        .addSink(vectorSink, "Outliers", stringSerializer, stringSerializer, vectorProcessorName); // write to the "Outliers" topic

0 Answers:

No answers yet.