Spark | YARN FileNotFoundException

Date: 2017-09-27 11:22:04

Tags: apache-spark yarn

I am running a 3-node CDH cluster with 1 master and 2 workers. I have a web application, written in Java, that submits Spark jobs to YARN, and it immediately fails with the error below. The web app is deployed on Tomcat, and Tomcat runs as a different OS user.

Application application_1502437323246_0010 failed 2 times due to AM Container for appattempt_1502437323246_0010_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:, click on links to logs of each attempt.
Diagnostics: File file:/home/user/tomcat/apache-tomcat-8.0.38/temp/spark-1692c53f-313a-41c1-9581-e716c244b7c8/__spark_libs__4041232999285325500.zip does not exist
java.io.FileNotFoundException: File file:/home/user/tomcat/apache-tomcat-8.0.38/temp/spark-1692c53f-313a-41c1-9581-e716c244b7c8/__spark_libs__4041232999285325500.zip does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:598)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:811)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:588)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Failing this attempt. Failing the application.

It looks like the worker nodes cannot access the file at the location above; ideally these files should be created on HDFS so the workers can reach them.
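A common way to keep Spark's runtime jars off the driver machine's local temp directory is to publish them to HDFS once and point `spark.yarn.archive` (or `spark.yarn.jars`) at that location, so YARN localizes them from HDFS instead of from the submitting host. The paths below are assumptions for illustration, not taken from the original question:

```shell
# Sketch only: paths and names are assumptions; adjust to your cluster.
# Package the Spark jars into a single archive and publish it on HDFS once:
jar cv0f spark-libs.jar -C $SPARK_HOME/jars/ .
hdfs dfs -mkdir -p /user/spark/share/lib
hdfs dfs -put spark-libs.jar /user/spark/share/lib/

# Then in spark-defaults.conf (or set programmatically on the SparkConf):
# spark.yarn.archive   hdfs:///user/spark/share/lib/spark-libs.jar
```

With this set, Spark skips building the per-submission `__spark_libs__*.zip` under the driver's local temp directory, which is what the NodeManagers failed to find here.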

Questions

1) What are these files, and why are they created under Tomcat's temp folder?

2) Is there a configuration that would create these files on HDFS instead and resolve the error above?

3) Are there any other considerations when running in "client" deploy mode?

Any additional information would be very helpful, as I am new to Spark and HDFS. I am using the default configuration of CDH 5.12 with the Spark 2.1.0 distribution.
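On question 3: in client deploy mode the driver runs inside the submitting process (here, Tomcat), so any files it stages locally must be readable by the OS user doing the submission, and the driver dies if Tomcat restarts. Cluster deploy mode moves the driver onto a YARN container instead. A hedged sketch of such a submission follows; the class name, jar path, and HDFS path are hypothetical placeholders:

```shell
# Hypothetical job jar, class, and archive path, for illustration only.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.archive=hdfs:///user/spark/share/lib/spark-libs.jar \
  --class com.example.MyJob \
  /path/to/my-job.jar
```

If the web app must stay in client mode, the staging directory Spark uses can at least be moved off Tomcat's temp folder via `spark.local.dir`, and the OS user running Tomcat needs write access to HDFS for the `.sparkStaging` upload.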

0 Answers:

There are no answers yet.