java.io.FileNotFoundException: File file:/tmp/spark-/__spark_libs__.zip does not exist

Time: 2017-09-17 00:13:58

Tags: java eclipse apache-spark

My Spark environment is as follows:

OS : CentOS 7
Hadoop : 2.7.4
Spark : 2.2 for hadoop 2.7
Eclipse : Oxygen

My Hadoop and Spark installations completed successfully. The jps command shows the following processes:

6738 Jps
5219 Worker
5220 Worker
5222 Worker
5575 org.eclipse.equinox.launcher_1.4.0.v20161219-1356.jar
2906 NameNode
3660 DataNode
3965 ResourceManager
4381 NodeManager
5038 Master
4783 JobHistoryServer

The spark-shell command also runs fine:

$ spark-shell --master yarn
17/09/17 08:48:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/17 08:48:25 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/09/17 08:48:42 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.200.51:4040
Spark context available as 'sc' (master = yarn, app id = application_1505604907413_0003).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_131)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 

The problem is the Java Spark client in the Eclipse IDE. The project is simple: the jars from the Spark jars folder are included in the build path. Here is the Java code:

package com.aaa.spark;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class JavaClient {

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        SparkConf conf = new SparkConf().setAppName("SparkTest").setMaster("yarn-client");
        JavaSparkContext context = new JavaSparkContext(conf);

        System.out.println(context.master() + " : " + context.version());
        context.stop();
    }
}
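
As an aside, the "yarn-client" master string has been deprecated since Spark 2.0 (it still works, but the recommended form is master "yarn" plus an explicit deploy mode). A minimal sketch of the equivalent configuration, assuming the same cluster:

SparkConf conf = new SparkConf()
        .setAppName("SparkTest")
        .setMaster("yarn")                          // "yarn-client" is deprecated since 2.0
        .set("spark.submit.deployMode", "client");  // run the driver in this JVM, as before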

But this simple code does not work. It throws the following exception:

17/09/17 08:42:50 INFO Client: Application report for application_1505604907413_0002 (state: ACCEPTED)
17/09/17 08:42:50 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1505605369876
     final status: UNDEFINED
     tracking URL: http://master:8088/proxy/application_1505604907413_0002/
     user: m_usr
17/09/17 08:42:51 INFO Client: Application report for application_1505604907413_0002 (state: FAILED)
17/09/17 08:42:51 INFO Client: 
     client token: N/A
     diagnostics: Application application_1505604907413_0002 failed 2 times due to AM Container for appattempt_1505604907413_0002_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1505604907413_0002Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/spark-1b711310-8ca2-43ff-a15b-c5c7951d8b56/__spark_libs__5711484877820940876.zip does not exist
java.io.FileNotFoundException: File file:/tmp/spark-1b711310-8ca2-43ff-a15b-c5c7951d8b56/__spark_libs__5711484877820940876.zip does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:428)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:359)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

Failing this attempt. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1505605369876
     final status: FAILED
     tracking URL: http://master:8088/cluster/app/application_1505604907413_0002
     user: m_usr
17/09/17 08:42:51 INFO Client: Deleted staging directory file:/home/m_usr/.sparkStaging/application_1505604907413_0002
17/09/17 08:42:51 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at com.aaa.spark.JavaClient.main(JavaClient.java:11)
17/09/17 08:42:51 INFO SparkUI: Stopped Spark web UI at http://192.168.200.51:4040

I am stuck here. I cannot figure out what configuration is wrong in the Eclipse Java code.

Update 1

Running spark-shell --master=yarn shows no error messages, only success messages:

17/09/20 20:42:26 INFO Client: Application report for application_1505905006868_0010 (state: ACCEPTED)
17/09/20 20:42:26 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1505907745945
     final status: UNDEFINED
     tracking URL: http://master:8088/proxy/application_1505905006868_0010/
     user: m_usr
17/09/20 20:42:27 INFO Client: Application report for application_1505905006868_0010 (state: ACCEPTED)
17/09/20 20:42:28 INFO Client: Application report for application_1505905006868_0010 (state: ACCEPTED)
17/09/20 20:42:29 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(spark-client://YarnAM)
17/09/20 20:42:29 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master, PROXY_URI_BASES -> http://master:8088/proxy/application_1505905006868_0010), /proxy/application_1505905006868_0010

However, when I run the Spark Java code above in the Eclipse IDE, it throws the error messages below:

17/09/20 20:40:02 INFO Client: Application report for application_1505905006868_0009 (state: ACCEPTED)
17/09/20 20:40:02 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1505907601211
     final status: UNDEFINED
     tracking URL: http://master:8088/proxy/application_1505905006868_0009/
     user: m_usr
17/09/20 20:40:03 INFO Client: Application report for application_1505905006868_0009 (state: FAILED)
17/09/20 20:40:03 INFO Client: 
     client token: N/A
     diagnostics: Application application_1505905006868_0009 failed 2 times due to AM Container for appattempt_1505905006868_0009_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://master:8088/cluster/app/application_1505905006868_0009Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/spark-abd84352-1219-4ffc-92dc-ae94140d936a/__spark_libs__3762603621456052516.zip does not exist
java.io.FileNotFoundException: File file:/tmp/spark-abd84352-1219-4ffc-92dc-ae94140d936a/__spark_libs__3762603621456052516.zip does not exist

I think I am missing some Spark configuration on the Eclipse IDE side.
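
One candidate is the warning visible in the spark-shell log above: "Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME." A sketch of setting spark.yarn.jars from the Java client; the HDFS path here is hypothetical and would first have to be populated with the contents of $SPARK_HOME/jars:

SparkConf conf = new SparkConf()
        .setAppName("SparkTest")
        .setMaster("yarn")
        // Hypothetical location; upload the jars there first, e.g.
        //   hdfs dfs -mkdir -p /spark/jars
        //   hdfs dfs -put $SPARK_HOME/jars/* /spark/jars/
        .set("spark.yarn.jars", "hdfs://master:9000/spark/jars/*");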

Update 2

Spark installation procedure

Copy the Spark binary file to s_usr01 and s_usr02 with the scp command:

scp spark-2.2.0-bin-hadoop2.7.tgz s_usr01@slave01:/home/s_usr0

On each of s_usr01 and s_usr02, untar the Spark gzip file:

$ ssh s_usr01@slave01 tar zxvf spark-2.2.0-bin-hadoop2.7.tgz
$ ssh s_usr01@slave01 rm spark-2.2.0-bin-hadoop2.7.tgz

Set the paths in the .bashrc file:

############ Eclipse PATH ###########
export ECLIPSE_HOME=./eclipse/jee-oxygen/eclipse
export PATH=$PATH:$ECLIPSE_HOME

######### JDK8 PATH ############
JAVA_HOME=/usr/java/jdk1.8.0_131
CLASSPATH=.:$JAVA_HOME/lib/tools.jar
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME CLASSPATH
export PATH

############ Hadoop PATH ###########
export HADOOP_HOME=/home/m_usr/hadoop-2.7.4
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export PATH=$PATH:/usr/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin
export HADOOP_PID_DIR=/home/m_usr/hadoop-2.7.4/pids
export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH

export YARN_HOME=$HADOOP_HOME
export PATH=$PATH:$YARN_HOME

############ Spark Path ############
export SPARK_HOME=/home/m_usr/spark-2.2.0-bin-hadoop2.7
export SPARK_SUBMIT=/home/m_usr/spark-2.2.0-bin-hadoop2.7/bin/spark-submit

export PATH=$PATH:$SPARK_HOME/bin
export PATH=$PATH:$SPARK_HOME/sbin
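
Worth checking: a JVM launched from the Eclipse GUI does not necessarily inherit the login shell's .bashrc, so these variables may be unset inside the IDE. A minimal sketch to verify, run from the same Eclipse project (the class name EnvCheck is just for illustration):

// EnvCheck.java - prints the environment variables the Eclipse-launched JVM actually sees
public class EnvCheck {
    public static void main(String[] args) {
        System.out.println("HADOOP_CONF_DIR = " + System.getenv("HADOOP_CONF_DIR"));
        System.out.println("YARN_CONF_DIR   = " + System.getenv("YARN_CONF_DIR"));
        System.out.println("SPARK_HOME      = " + System.getenv("SPARK_HOME"));
    }
}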

Modify the configuration files under $SPARK_HOME/conf:

$ vi spark-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_131
export HADOOP_HOME=/home/m_usr/hadoop-2.7.4
export SPARK_HOME=/home/m_usr/spark-2.2.0-bin-hadoop2.7
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop

$ vi spark-defaults.conf
spark.master                     spark://master:7077
spark.eventLog.enabled           true
spark.eventLog.dir               file:///home/m_usr/spark-2.2.0-bin-hadoop2.7/sparkeventlogs
spark.serializer                 org.apache.spark.serializer.KryoSerializer

$ vi log4j.properties

# Set everything to be logged to the console
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender


$ vi slaves
slave01
slave02

Copy the Spark configuration files to the s_usr01 and s_usr02 Spark cluster nodes:

$ scp * s_usr01@slave01:/home/s_usr01/spark-2.2.0-bin-hadoop2.7/conf

Finally, start Spark with the shell command:

./sbin/start-all.sh

As you can see, spark-shell works on YARN. The problem is the Spark Java client in the Eclipse IDE.

Update 3

I found something strange. The exception in the Eclipse IDE is:

java.io.FileNotFoundException: File file:/tmp/spark-cf36a407-60c1-4100-89aa-ad38fdce7a87/__spark_libs__3088825805082848673.zip does not exist

But this file is in fact generated temporarily, as shown below:

$ ls -al /tmp
drwxr-xr-x.  2 m_usr m_usr  32  9월 25 20:06 hsperfdata_m_usr
drwxr-xr-x.  2 root  root    6  9월 25 20:04 hsperfdata_root
drwx------.  3 m_usr m_usr 105  9월 25 20:06 spark-cf36a407-60c1-4100-89aa-ad38fdce7a87

For some reason, my Java client cannot find this file.

17/09/25 20:06:11 INFO Client: Application report for application_1506335085215_0010 (state: ACCEPTED)
17/09/25 20:06:12 INFO Client: Application report for application_1506335085215_0010 (state: FAILED)
17/09/25 20:06:12 INFO Client: 
     client token: N/A
     diagnostics: Application application_1506335085215_0010 failed 2 times due to AM Container for appattempt_1506335085215_0010_000002 exited with  exitCode: -1000

But the spark-shell command never throws this exception and runs well:

spark-shell --master yarn
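
To compare what the two clients resolve, a small diagnostic (a sketch; it assumes only hadoop-common on the build path, and the class name FsCheck is illustrative) can print the effective default filesystem. If core-site.xml is not on the classpath, Hadoop falls back to file:///, which would match the file:/... staging paths in the logs above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCheck {
    public static void main(String[] args) throws Exception {
        Configuration hadoopConf = new Configuration();
        // Without core-site.xml on the classpath this prints the built-in default, file:///
        System.out.println("fs.defaultFS = " + hadoopConf.get("fs.defaultFS"));
        System.out.println("FileSystem   = " + FileSystem.get(hadoopConf).getUri());
    }
}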

Any ideas?

0 Answers:

No answers yet.