Why does Spark write large files to temporary local disk even without disk persistence or checkpointing?

Asked: 2015-02-11 05:37:09

Tags: scala apache-spark persist checkpoint

I'm running a small job on a cluster where each machine has 15G of memory and 8G of disk.

The job always ends up deadlocked, and the last error message is:

    java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)
    at org.apache.spark.storage.DiskBlockObjectWriter$TimeTrackingOutputStream$$anonfun$write$3.apply$mcV$sp(BlockObjectWriter.scala:86)
    at org.apache.spark.storage.DiskBlockObjectWriter.org$apache$spark$storage$DiskBlockObjectWriter$$callWithTiming(BlockObjectWriter.scala:221)
    at org.apache.spark.storage.DiskBlockObjectWriter$TimeTrackingOutputStream.write(BlockObjectWriter.scala:86)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at org.xerial.snappy.SnappyOutputStream.dumpOutput(SnappyOutputStream.java:300)
    at org.xerial.snappy.SnappyOutputStream.rawWrite(SnappyOutputStream.java:247)
    at org.xerial.snappy.SnappyOutputStream.write(SnappyOutputStream.java:107)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
    at java.io.ObjectOutputStream$BlockDataOutputStream.writeByte(ObjectOutputStream.java:1914)
    at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1575)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:350)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:195)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4$$anonfun$apply$2.apply(ExternalSorter.scala:751)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4$$anonfun$apply$2.apply(ExternalSorter.scala:750)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4.apply(ExternalSorter.scala:750)
    at org.apache.spark.util.collection.ExternalSorter$$anonfun$writePartitionedFile$4.apply(ExternalSorter.scala:746)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at org.apache.spark.util.collection.ExternalSorter.writePartitionedFile(ExternalSorter.scala:746)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

When this happens, the shuffle write size is 0.0 B and the input size is 3.4 MB. I'd like to know what operation could fill up the entire 5G of available disk space so quickly.

Also, the storage level throughout the job is limited to MEMORY_ONLY_SERIALIZED, and checkpointing is completely disabled.
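
For reference, a minimal sketch of how such a setup would typically look (the object name and data are illustrative, not from the original job; the Spark API constant is MEMORY_ONLY_SER):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object PersistOnlyInMemory {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("persist-sketch"))

        // MEMORY_ONLY_SER keeps cached partitions as serialized bytes in
        // executor memory and never writes cached data to disk.
        // sc.setCheckpointDir is never called, so checkpointing stays off.
        val data = sc.parallelize(1 to 1000000)
        data.persist(StorageLevel.MEMORY_ONLY_SER)
        println(data.count())

        sc.stop()
      }
    }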

1 answer:

Answer 0 (score: 0):

If you know that your shuffle operations fit in memory, you can try setting spark.shuffle.spill to false (otherwise you'll get an OOM). At http://spark.apache.org/docs/latest/configuration.html you can see the options for shuffle behavior and other common configuration options.
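
A minimal sketch of applying that setting, assuming Spark 1.x where spark.shuffle.spill is still honored (the app name and job body are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    object ShuffleNoSpill {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("shuffle-no-spill")
          // Keep shuffle data in memory instead of spilling to the executors'
          // local disks. Only safe when the shuffle really fits in memory;
          // otherwise the executors will fail with OutOfMemoryError.
          .set("spark.shuffle.spill", "false")

        val sc = new SparkContext(conf)
        // ... run the job here ...
        sc.stop()
      }
    }

The same flag can also be passed on the command line, e.g. spark-submit --conf spark.shuffle.spill=false.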

MEMORY_ONLY_SERIALIZED applies to RDDs: it only controls how cached RDD partitions are stored, while shuffle output is still written to the executors' local disks regardless of the storage level.
