How do I continuously stream data from Kafka using Spark Structured Streaming?

Time: 2019-04-15 12:41:05

Tags: spark-structured-streaming spark-streaming-kafka

I am trying to migrate from the DStream API to Structured Streaming and am running into issues with how to wait on micro-batches, i.e. how micro-batch processing carries over to Structured Streaming.

In the code below, I create a direct stream and wait forever so that I can keep consuming Kafka messages indefinitely.

How do I achieve the same thing with Structured Streaming?

Is sparkSession.streams.awaitAnyTermination sufficient?

I have put sample code below. Any pointers would be a big help. Thanks.


Current DStream version (I am looking for the Structured Streaming equivalent):

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "auto.offset.reset" -> "latest",
  "max.poll.records" -> "1",
  "group.id" -> "test",
  "enable.auto.commit" -> (true: java.lang.Boolean))

val ssc = new StreamingContext(sparkSession.sparkContext, Seconds(10))
val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String]("mytopic", kafkaParams))

performRddComputation(stream, sparkSession)

// Block the driver so micro-batches keep being processed indefinitely.
ssc.start()
ssc.awaitTermination()

2 Answers:

Answer 0 (score: 1)

I will post a version that works for me:

val df = sparkSession
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("startingOffsets", "latest")
  .option("subscribe", "mytopic")
  .load()
//df.printSchema()

val tdf = df.selectExpr("CAST(value AS STRING) AS value")
  .select("value")
  .writeStream
  .outputMode("append")
  .format("console")
  .option("truncate", "false")
  .start()

tdf.awaitTermination()

This should work for you.

Answer 1 (score: 1)

If you have only a single query, just use awaitTermination on the query:

import sparkSession.implicits._

val df = sparkSession
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("startingOffsets", "latest")
  .option("subscribe", "mytopic")
  .load()
df.printSchema()

val tdf = df.selectExpr("CAST(value AS STRING) AS value")
  .as[String]
  .map(record => { /* do something with record */ record })
  .writeStream
  .format("console")
  .option("truncate", "false")
  .start()

// do something

tdf.awaitTermination()

awaitTermination is a blocking call, so anything you write after it will only run once the query has terminated.
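For illustration, reusing the tdf query started above (the cleanup shown here is only a placeholder), the lines after the blocking call are a natural place for shutdown work:

tdf.awaitTermination()                       // blocks here until the query stops (e.g. via tdf.stop()) or fails
println(s"query ${tdf.id} has terminated")   // only reached after termination
sparkSession.stop()                          // placeholder cleanup once streaming is done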

If you need to handle multiple queries, you can use awaitAnyTermination on the StreamingQueryManager:

sparkSession.streams.awaitAnyTermination()

And if you want to keep the application running even when one of the queries fails, call awaitAnyTermination() followed by resetTerminated() in a loop.
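A minimal sketch of that loop (assuming every query was started from this sparkSession; the error handling is just a placeholder):

import org.apache.spark.sql.streaming.StreamingQueryException

// Keep the driver alive while at least one query is still running.
while (sparkSession.streams.active.nonEmpty) {
  try {
    // Blocks until any active query terminates; rethrows the failure if one died with an error.
    sparkSession.streams.awaitAnyTermination()
  } catch {
    case e: StreamingQueryException =>
      // One query failed; log it (and optionally restart it) instead of exiting.
      println(s"A streaming query failed: ${e.getMessage}")
  }
  // Clear the terminated queries so the next awaitAnyTermination()
  // waits on the remaining active ones instead of returning immediately.
  sparkSession.streams.resetTerminated()
}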