Unable to run Spark in standalone mode

Date: 2016-12-06 11:59:22

Tags: hadoop apache-spark

I'm learning how to use Apache Spark on a cluster and have run into some problems. I've downloaded the JDK, Scala, and Spark onto the hosts and set the following environment variables:

# JDK settings

export JAVA_HOME=/home/yuzhou/mfs/opt/jdk1.8.0_111
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH

export HADOOP_HOME=/home/yuzhou/mfs/opt/hadoop-2.7.3-src

# Scala settings

export SCALA_HOME=/home/yuzhou/mfs/opt/scala-2.10.6
export PATH=${SCALA_HOME}/bin:$PATH

# Spark settings

export SPARK_HOME=/home/yuzhou/mfs/opt/spark-2.0.2-bin-hadoop2.7

# PYTHONPATH settings
export PYTHONPATH=/home/yuzhou/mfs/opt/spark-2.0.2-bin-hadoop2.7/python
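
As a quick sanity check that the toolchain is actually being picked up from these paths (just the standard version flags, nothing specific to my setup):

java -version
scala -version
echo $SPARK_HOME $HADOOP_HOME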

I added the following to conf/spark-env.sh:

export JAVA_HOME=/home/mfs/opt/jdk1.8.0_111
export STANDALONE_SPARK_MASTER_HOST=192.168.245.95
export SPARK_LOCAL_IP=192.168.245.95
export SPARK_MASTER_HOST=192.168.245.95
export SPARK_MASTER_PORT=390
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_INSTANCES=4
export SPARK_WORKER_PORT=391
export SPARK_WORKER_CORES=2
export SPARK_WORKER_DIR=/home/mfs/opt/spark-2.0.2-bin-hadoop2.7/work
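
One thing I'm not sure about in this file: SPARK_MASTER_PORT=390 and SPARK_WORKER_PORT=391 are below 1024, so binding them should require root, and the stock standalone master port is 7077 (the worker port is random by default). A more conventional variant of the same settings (my guess at what it would normally look like, not something I've verified on this cluster) would be:

export SPARK_MASTER_HOST=192.168.245.95
# 7077 is the default standalone master port; 390 is a privileged port
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_INSTANCES=4
export SPARK_WORKER_CORES=2
# leaving SPARK_WORKER_PORT unset lets each worker pick a free port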

After that I ran:

./sbin/start-master.sh

and ran into the following problem:

Exception in thread "main" java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
    at org.apache.hadoop.security.Groups.<init>(Groups.java:79)
    at org.apache.hadoop.security.Groups.<init>(Groups.java:74)
    at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:303)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:284)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2345)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2345)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2345)
    at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:217)
    at org.apache.spark.deploy.master.Master$.startRpcEnvAndEndpoint(Master.scala:1026)
    at org.apache.spark.deploy.master.Master$.main(Master.scala:1011)
    at org.apache.spark.deploy.master.Master.main(Master.scala)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:132)
    ... 16 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
    at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
    at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)
    at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:39)
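
Since the failing call is a Hadoop native method (JniBasedUnixGroupsMapping.anchorNative), I wonder whether the native libhadoop library is missing or mismatched; note that my HADOOP_HOME points at hadoop-2.7.3-src, which is a source tree rather than a built distribution. Assuming a binary Hadoop distribution, a check would be:

# list the native libraries Hadoop would load (libhadoop.so should be here)
ls $HADOOP_HOME/lib/native
# ask Hadoop which native components it can actually load
$HADOOP_HOME/bin/hadoop checknative -a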

I've searched through many articles but haven't figured out how to solve this. Do I have to install Hadoop first? Or is there something I forgot to add?

Thanks in advance!

0 Answers