Error when running a simple Spark program

Asked: 2015-03-07 00:21:33

Tags: hadoop apache-spark

I have a very simple SparkPi.java program that I packaged into a jar file (sparktest.jar). When I submit it to Spark in local mode, everything works fine (the command is: spark-submit --class JavaSparkPi --master local --num-executors 4 ./sparktest.jar 50). However, when I change it to run on a YARN cluster (I have a single-node local Hadoop cluster on my own machine), it gives me the following error (the command is: spark-submit --class JavaSparkPi --master yarn-client --num-executors 4 ./sparktest.jar 50):
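For reference, here are the two invocations again, cleaned up; the only difference between them is the --master setting:

```shell
# Works: driver and executors run inside a single local JVM
spark-submit --class JavaSparkPi --master local --num-executors 4 ./sparktest.jar 50

# Fails: driver runs locally, executors are requested from the single-node YARN cluster
spark-submit --class JavaSparkPi --master yarn-client --num-executors 4 ./sparktest.jar 50
```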

 For more detailed output, check application tracking page: http://win-7h2roeh9rhb:8088/proxy/application_1423255988135_0062/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1423255988135_0062_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    at org.apache.hadoop.util.Shell.run(Shell.java:455)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

Shell output:         1 file(s) moved.


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1425685976713
     final status: FAILED
     tracking URL: http://win-7h2roeh9rhb:8088/cluster/app/application_1423255988135_0062
         user: hadoop
Exception in thread "main" org.apache.spark.SparkException: Application finished with failed status
    at org.apache.spark.deploy.yarn.ClientBase$class.run(ClientBase.scala:52)
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:35)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:139)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:360)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:76)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The HADOOP_CONF_DIR variable is set correctly. What could the problem be?
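As the diagnostics message itself suggests, the container logs should contain the real cause of the exit code 1. Besides clicking through the tracking page, they can also be pulled from the command line (assuming YARN log aggregation is enabled; otherwise they sit under the NodeManager's local log directory):

```shell
# Fetch the aggregated container logs for the failed application shown above
yarn logs -applicationId application_1423255988135_0062
```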

0 Answers:

No answers yet.