Spark HDFS read fails when using Mesos

Time: 2016-03-14 11:17:15

Tags: hadoop apache-spark hdfs

We have set up a small Spark cluster and are testing whether it can read from HDFS. The test job does nothing but read from HDFS and write the number of lines read to stdout. This works fine when running with a local master (local[*]). However, when we try to submit the job to Mesos (mesos://zk://host1:2181,host2:2181,host3:2181/mesos), it fails with the error shown after the sketch below.
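The job is essentially the following (a minimal sketch only: the stack trace below shows wholeTextFiles feeding count, but the HDFS path, object name, and app name here are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal HDFS read test. The master is supplied at submit time:
    // local[*] succeeds, mesos://zk://... fails as shown below.
    object HdfsReadTest {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("HdfsReadTest"))
        // wholeTextFiles matches the WholeTextFileRecordReader in the trace;
        // the path is a placeholder for the real input directory
        val files = sc.wholeTextFiles("hdfs://XXX:9000/path/to/input")
        println(s"Read ${files.count()} entries")
        sc.stop()
      }
    }

When this is submitted to the Mesos master, every task fails with the following error: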

16/03/14 11:10:27 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, XXX): java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "XXXX"; destination host is: "XXX":9000;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:254)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1220)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1210)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1200)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:271)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:238)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:231)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1498)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:302)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:298)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:298)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
    at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:79)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:163)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1553)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:1121)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1848)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Couldn't set up IO streams
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:786)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 38 more
Caused by: java.util.ServiceConfigurationError: org.apache.hadoop.security.SecurityInfo: Provider org.apache.hadoop.security.AnnotatedSecurityInfo not found
    at java.util.ServiceLoader.fail(ServiceLoader.java:231)
    at java.util.ServiceLoader.access$300(ServiceLoader.java:181)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:365)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
    at org.apache.hadoop.security.SecurityUtil.getTokenInfo(SecurityUtil.java:327)
    at org.apache.hadoop.security.SaslRpcClient.getServerToken(SaslRpcClient.java:263)
    at org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:219)
    at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:553)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:368)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:722)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:718)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:717)
    ... 41 more

All Spark nodes have the same version of the Hadoop libraries (2.6.0) and the same configuration. I have manually checked that the class exists in the jars, and also tested it with Class.forName(...) in the Spark shell (see the session sketch below). We use Kerberos for authentication, and all nodes have a valid Kerberos ticket for reading.
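The spark-shell check looks roughly like this (illustrative session; the class name is taken directly from the ServiceConfigurationError above):

    scala> Class.forName("org.apache.hadoop.security.AnnotatedSecurityInfo")
    res0: Class[_] = class org.apache.hadoop.security.AnnotatedSecurityInfo

So the class resolves without error on the driver, yet the ServiceLoader on the Mesos executors still reports the provider as not found.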

We are running Spark 1.5.1.

Any suggestions are welcome.

0 Answers:

No answers yet.