PySpark Structured Streaming output sink for Kafka gives an error

Date: 2018-02-14 13:39:54

Tags: apache-spark pyspark apache-kafka spark-structured-streaming

Using Kafka 0.9.0 and Spark 2.1.0 - I am using PySpark Structured Streaming to compute a result and write it out to a Kafka topic. I am following the Spark documentation here: https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-modes

Now I run the following command.

(The output mode is "complete" because the query aggregates streaming data.)

(mydataframe.writeStream
    .outputMode("complete")
    .format("kafka")
    .option("kafka.bootstrap.servers", "x.x.x.x:9092")
    .option("topic", "topicname")
    .option("checkpointLocation","/data/checkpoint/1")
    .start())

It fails with an error like the following:

 ERROR StreamExecution: Query [id = 0686130b-8668-48fa-bdb7-b79b63d82680, runId = b4b7494f-d8b8-416e-ae49-ad8498dfe8f2] terminated with error
org.apache.spark.sql.AnalysisException: Required attribute 'value' not found;
    at org.apache.spark.sql.kafka010.KafkaWriter$$anonfun$6.apply(KafkaWriter.scala:73)
    at org.apache.spark.sql.kafka010.KafkaWriter$$anonfun$6.apply(KafkaWriter.scala:73)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.kafka010.KafkaWriter$.validateQuery(KafkaWriter.scala:72)
    at org.apache.spark.sql.kafka010.KafkaWriter$.write(KafkaWriter.scala:88)
    at org.apache.spark.sql.kafka010.KafkaSink.addBatch(KafkaSink.scala:38)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply$mcV$sp(StreamExecution.scala:503)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply(StreamExecution.scala:503)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply(StreamExecution.scala:503)
    at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:262)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:46)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch(StreamExecution.scala:502)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply$mcV$sp(StreamExecution.scala:255)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply(StreamExecution.scala:244)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$1.apply(StreamExecution.scala:244)
    at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:262)
    at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:46)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1.apply$mcZ$sp(StreamExecution.scala:244)
    at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:43)
    at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:239)
    at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:177)

Not sure what attribute 'value' it is expecting. Need help to resolve this.

The console output sink produces the correct results on the console, so the code itself seems fine. The problem occurs only when Kafka is used as the output sink.
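
For reference, a minimal sketch of the working console-sink variant (same dataframe, only the sink format changes):

    # Unlike the Kafka sink, the console sink accepts any schema,
    # which is why this runs without a "value" column.
    (mydataframe.writeStream
        .outputMode("complete")
        .format("console")
        .start())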

2 Answers:

Answer 0 (score: 0)

Not sure what attribute 'value' it is expecting. Need help to resolve this.

Your mydataframe needs a value column of StringType or BinaryType that holds the payload (the message) you want to send to Kafka.

At the moment you are asking Spark to write to Kafka without describing which data to write.

One way to get such a column is to rename an existing column with .withColumnRenamed. If you want to write several columns, it is usually best to create a single column holding a JSON representation of the dataframe, which you can get with the to_json SQL function; a sketch of both approaches follows. But beware of .toJSON!
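
A minimal sketch of both approaches (the column names "word" and "count" are assumptions for illustration, not taken from the question):

    from pyspark.sql.functions import col, struct, to_json

    # Option 1: expose one existing column as "value", cast to string,
    # since the Kafka sink only accepts StringType or BinaryType payloads
    value_df = mydataframe.select(col("count").cast("string").alias("value"))

    # Option 2: serialize several columns into a single JSON "value" column
    value_df = mydataframe.select(to_json(struct("word", "count")).alias("value"))

Writing value_df instead of mydataframe with the original writeStream call should then pass the sink's validation.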

Answer 1 (score: 0)

Spark 2.1.0 does not support Kafka as an output sink; it was introduced in 2.2.0, per the documentation.

Also see this answer, which links to the commit that introduced the feature and offers an alternative solution, as well as this JIRA, which added the documentation in 2.2.1.
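
Putting the two answers together, a hedged sketch of what should work after upgrading to Spark 2.2 or later (the JSON payload follows Answer 0; the package coordinates below assume Scala 2.11 and are not from the answers):

    # Launch with the Kafka sink package, for example:
    #   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 app.py
    from pyspark.sql.functions import struct, to_json

    (mydataframe
        .select(to_json(struct(*mydataframe.columns)).alias("value"))  # all columns as JSON
        .writeStream
        .outputMode("complete")
        .format("kafka")
        .option("kafka.bootstrap.servers", "x.x.x.x:9092")
        .option("topic", "topicname")
        .option("checkpointLocation", "/data/checkpoint/1")
        .start())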
