Spark Streaming with large messages: java.lang.OutOfMemoryError: Java heap space

Date: 2016-11-08 14:22:03

Tags: hadoop apache-kafka spark-streaming spark-dataframe kafka-consumer-api

I am using Spark Streaming 1.6.1 with Kafka 0.9.0.1 (the createStream API) on HDP 2.4.2. My use case sends large messages, ranging from 5 MB to 30 MB, to a Kafka topic. In this scenario Spark Streaming is unable to complete its job and crashes with the exception below. I am performing DataFrame operations and saving the output on HDFS in CSV format; my code snippets follow.

Reading from Kafka Topic:    
 val lines = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
     ssc, kafkaParams, topicMap,
     StorageLevel.MEMORY_AND_DISK_SER_2 /* MEMORY_ONLY_SER_2 */
   ).map(_._2)
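For context, below is a minimal sketch of how a createStream job like the one above is usually wired up in Spark 1.6.1. The ZooKeeper address, group id, topic map, batch interval, and the fetch.message.max.bytes value are all assumptions for illustration; fetch.message.max.bytes is the old high-level consumer's per-fetch cap (about 1 MB by default), which has to be raised before 5-30 MB messages can be consumed at all.

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.kafka.KafkaUtils
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object LargeMessageStream {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("LargeMessageStream")
        val ssc = new StreamingContext(conf, Seconds(10)) // batch interval is an assumption

        // Properties for the old (high-level) consumer used by createStream.
        // fetch.message.max.bytes defaults to roughly 1 MB; for 30 MB messages
        // it must be raised, otherwise the receiver cannot fetch them at all.
        val kafkaParams = Map[String, String](
          "zookeeper.connect"       -> "zk-host:2181",      // placeholder address
          "group.id"                -> "large-msg-group",   // placeholder group id
          "fetch.message.max.bytes" -> (32 * 1024 * 1024).toString
        )
        val topicMap = Map("bigTopic" -> 1) // topic name and thread count are assumptions

        val lines = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, topicMap, StorageLevel.MEMORY_AND_DISK_SER_2
        ).map(_._2)

        lines.print() // placeholder output operation so the job can start
        ssc.start()
        ssc.awaitTermination()
      }
    }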

 Writing to HDFS:
 val hdfsDF: DataFrame = getDF(sqlContext, eventDF, schema, topicName)
 hdfsDF.show
 hdfsDF.write
   .format("com.databricks.spark.csv")
   .option("header", "false")
   .save(hdfsPath + "/" + "out_" + System.currentTimeMillis().toString())
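For completeness, here is one plausible way the write fragment above is driven from the stream, continuing from the lines DStream in the earlier sketch (it would replace the lines.print() placeholder before ssc.start()). getDF is the question's own helper, so only a hypothetical stand-in is shown; the JSON payload format, topic name, and output path are likewise assumptions.

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.types.StructType
    import org.apache.spark.sql.{DataFrame, SQLContext}

    // Hypothetical stand-in for the question's getDF helper; the real one
    // presumably applies the schema and topic-specific transformations.
    def getDF(sqlContext: SQLContext, eventDF: DataFrame,
              schema: StructType, topicName: String): DataFrame = eventDF

    lines.foreachRDD { (rdd: RDD[String]) =>
      if (!rdd.isEmpty()) {
        val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
        val eventDF = sqlContext.read.json(rdd) // assumes JSON payloads
        val hdfsDF = getDF(sqlContext, eventDF, eventDF.schema, "bigTopic")
        hdfsDF.write
          .format("com.databricks.spark.csv") // same spark-csv package as above
          .option("header", "false")
          .save("/user/spark/out_" + System.currentTimeMillis().toString) // placeholder path
      }
    }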

16/11/11 12:12:35 WARN ReceiverTracker: Error reported by receiver for stream 0: Error handling message; exiting - java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
    at java.lang.StringCoding.decode(StringCoding.java:193)
    at java.lang.String.<init>(String.java:426)
    at java.lang.String.<init>(String.java:491)
    at kafka.serializer.StringDecoder.fromBytes(Decoder.scala:50)
    at kafka.serializer.StringDecoder.fromBytes(Decoder.scala:42)
    at kafka.message.MessageAndMetadata.message(MessageAndMetadata.scala:32)
    at org.apache.spark.streaming.kafka.KafkaReceiver$MessageHandler.run(KafkaInputDStream.scala:137)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Followed by:

 java.lang.Exception: Could not compute split, block input-0-1478610837000 not found
at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:

0 Answers:

No answers yet.