Caught exception org.apache.spark.SparkException: Job aborted due to stage failure

Posted: 2019-02-20 18:25:55

Tags: apache-spark

I am running my code in production. It succeeds most of the time, but occasionally fails with the following error:

catch exceptionorg.apache.spark.SparkException: Job aborted due to stage failure: Task 14 in stage 9.1 failed 4 times, most recent failure: Lost task 14.3 in stage 9.1 (TID 3825, xxxprd0painod02.xxxprd.local): java.io.FileNotFoundException: /data03/hadoop/yarn/local/usercache/user/appcache/application_xxxxxxx012345_70120/blockmgr-97546ecd-567d-4451-91dd-762744aadc2b/1e/temp_shuffle_fb43319d-8cec-43e1-b7f8-cda30410d36c (No such file or directory)
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
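
The "catch exception" prefix fused onto the message suggests the driver wraps the Spark action in a try/catch and logs the exception text itself. Below is a minimal sketch of what such a wrapper might look like; the job body, paths, and column name are assumptions for illustration, not taken from the original post:

    import org.apache.spark.SparkException
    import org.apache.spark.sql.SparkSession

    object JobRunner {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("production-job").getOrCreate()
        try {
          // Hypothetical job body: read, shuffle (groupBy forces a shuffle stage
          // like stage 9.1 in the trace), then write the result.
          spark.read.parquet("/data/input")                  // assumed input path
            .groupBy("key").count()                          // assumed key column
            .write.mode("overwrite").parquet("/data/output") // assumed output path
        } catch {
          case e: SparkException =>
            // Produces a line like "catch exceptionorg.apache.spark.SparkException: ..."
            println(s"catch exception$e")
            throw e
        } finally {
          spark.stop()
        }
      }
    }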

I tried changing the executor memory to make sure we have enough, but I still run into the same problem.
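
For reference, executor memory is normally set at launch time, either with spark-submit's --executor-memory flag or through the configuration used to build the session. A minimal sketch of the latter, with placeholder sizes (the property spark.executor.memoryOverhead assumes Spark 2.3+; older releases on YARN use spark.yarn.executor.memoryOverhead):

    import org.apache.spark.sql.SparkSession

    // Placeholder values; the right sizes depend on the cluster and the workload.
    val spark = SparkSession.builder()
      .appName("production-job")
      .config("spark.executor.memory", "8g")          // executor JVM heap
      .config("spark.executor.memoryOverhead", "2g")  // off-heap overhead added on top under YARN
      .config("spark.executor.cores", "4")
      .getOrCreate()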

Any ideas on how to fix this?

Thanks.

1 Answer:

Answer 0 (score: 0):

Looking at the error message, it seems there is a problem with the data/files stored in the blocks. Try refreshing the metadata, or try restoring the files again, since file corruption could be the cause of your problem. If that does not solve it, please post your code.
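
If the input is a catalog table, the metadata refresh mentioned above can be done with Spark's catalog API or SQL. A minimal sketch, assuming a hypothetical table name my_db.my_table:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("refresh-metadata").getOrCreate()

    // Discard cached metadata and file listings so the next read re-scans the table.
    spark.catalog.refreshTable("my_db.my_table")

    // Equivalent SQL form.
    spark.sql("REFRESH TABLE my_db.my_table")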

It is not related to memory.
