Storage level MEMORY_AND_DISK_2() throws an exception on a Spark RDD

Date: 2015-05-06 06:27:57

Tags: java apache-spark

Can anyone explain how RDD storage levels work?

When I use the persist method with storage level StorageLevel.MEMORY_AND_DISK_2(), I get a heap memory error. But when I use the cache method instead, my code works fine.

According to the Spark docs, cache persists the RDD with the default storage level (MEMORY_ONLY).
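For reference, in the Java API cache() is just shorthand for persisting at that default level. A minimal sketch (the input path is a placeholder, not from the question):

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.storage.StorageLevel;

    JavaRDD<String> lines = sparkContext.textFile("data.txt");
    // cache() is equivalent to persist(StorageLevel.MEMORY_ONLY()):
    // deserialized objects kept in memory only, no disk spill, no replication.
    // Note that an RDD's storage level can only be assigned once.
    lines.persist(StorageLevel.MEMORY_ONLY());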

The code where I get the heap error:

    JavaRDD<String> rawData = sparkContext
            .textFile(inputFile.getAbsolutePath())
            .setName("Input File")
            // per the question text, SparkToolConstant.rdd_stroage_level is
            // StorageLevel.MEMORY_AND_DISK_2(); the job works when this is cache()
            .persist(SparkToolConstant.rdd_stroage_level);

    String[] headers = new String[0];
    String headerStr = null;
    if (headerPresent) {
        // take the first line as the header row and split out the column names
        headerStr = rawData.first();
        headers = headerStr.split(delim);
        List<String> headersList = new ArrayList<String>();
        headersList.add(headerStr);
        JavaRDD<String> headerRDD = sparkContext.parallelize(headersList);
        // drop the header line by subtracting the one-element header RDD
        JavaRDD<String> filteredRDD = rawData.subtract(headerRDD)
                .setName("Raw data without header")
                .persist(StorageLevel.MEMORY_AND_DISK_2());
        rawData = filteredRDD;
    }

Stack trace:

 Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 10, localhost): java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:2271)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1876)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1785)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1188)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
    at org.apache.spark.serializer.SerializationStream.writeAll(Serializer.scala:110)
    at org.apache.spark.storage.BlockManager.dataSerializeStream(BlockManager.scala:1176)
    at org.apache.spark.storage.BlockManager.dataSerialize(BlockManager.scala:1185)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:846)
    at org.apache.spark.storage.BlockManager.putArray(BlockManager.scala:668)
    at org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:176)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:79)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:242)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:

Spark version: 1.3.0

1 answer:

Answer 0 (score: 0)

Seeing that this question has gone unanswered for a long time, I'm posting this for general information, and for anyone who, like me, lands here from a search engine.

This kind of question is hard to answer without more detail about your application. In general, hitting memory errors while serializing to disk can seem backwards. I suggest you try it with Kryo serialization, and if you have a lot of spare memory, use Alluxio (the software formerly known as Tachyon :) for "disk" serialization, which will speed things up.
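To try Kryo, you switch Spark's serializer in the job configuration. A minimal sketch, assuming you build your own SparkConf (the app name is a placeholder):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    SparkConf conf = new SparkConf()
            .setAppName("MyApp") // placeholder
            // replace the default Java serializer with Kryo
            .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
    JavaSparkContext sparkContext = new JavaSparkContext(conf);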

More from the Spark docs on Tuning Data Storage, Serialized RDD Storage, and (maybe helpful) GC Tuning:

When your objects are still too large to efficiently store despite this tuning, a much simpler way to reduce memory usage is to store them in serialized form, using the serialized StorageLevels in the RDD persistence API, such as MEMORY_ONLY_SER. Spark will then store each RDD partition as one large byte array. The only downside of storing data in serialized form is slower access times, due to having to deserialize each object on the fly. We highly recommend using Kryo if you want to cache data in serialized form, as it leads to much smaller sizes than Java serialization (and certainly than raw Java objects).
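Applied to the question's rawData RDD, a sketch of what a serialized storage level would look like (this illustrates the docs' suggestion, not the questioner's original call):

    import org.apache.spark.storage.StorageLevel;

    // one big serialized byte array per partition: slower to read back
    // (per-object deserialization) but far less heap than plain MEMORY_ONLY
    rawData.persist(StorageLevel.MEMORY_ONLY_SER());
    // the serialized counterpart of the question's MEMORY_AND_DISK_2() is
    // MEMORY_AND_DISK_SER_2(): spill to disk, keep two replicas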