HDFS: File does not exist

Date: 2019-01-09 09:43:27

Tags: scala, amazon-web-services, apache-spark, apache-spark-sql, amazon-emr

I am trying to read data from an S3 bucket, run a computation in Spark, and write the output back to an S3 bucket. The processing itself completes successfully, but at the EMR step level the job is reported as failed. The logs say that a file does not exist.

See the log below.

19/01/09 08:40:37 INFO RMProxy: Connecting to ResourceManager at ip-172-30-0-84.ap-northeast-1.compute.internal/172.30.0.84:8032
19/01/09 08:40:37 INFO Client: Requesting a new application from cluster with 2 NodeManagers
19/01/09 08:40:37 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (106496 MB per container)
19/01/09 08:40:37 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
19/01/09 08:40:37 INFO Client: Setting up container launch context for our AM
19/01/09 08:40:37 INFO Client: Setting up the launch environment for our AM container
19/01/09 08:40:37 INFO Client: Preparing resources for our AM container
19/01/09 08:40:39 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
19/01/09 08:40:43 INFO Client: Uploading resource file:/mnt/tmp/spark-e0c6fbd3-14b0-4fcd-bbd2-c78658fdefd0/__spark_libs__8470659354947187213.zip -> hdfs://ip-172-30-0-84.ap-northeast-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1547023042733_0001/__spark_libs__8470659354947187213.zip
19/01/09 08:40:47 INFO Client: Uploading resource s3://dev-system/SparkApps/jar/rxsicheck.jar -> hdfs://ip-172-30-0-84.ap-northeast-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1547023042733_0001/rxsicheck.jar
19/01/09 08:40:47 INFO S3NativeFileSystem: Opening 's3://dev-system/SparkApps/jar/rxsicheck.jar' for reading
19/01/09 08:40:47 INFO Client: Uploading resource file:/mnt/tmp/spark-e0c6fbd3-14b0-4fcd-bbd2-c78658fdefd0/__spark_conf__4575598882972227909.zip -> hdfs://ip-172-30-0-84.ap-northeast-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1547023042733_0001/__spark_conf__.zip
19/01/09 08:40:47 INFO SecurityManager: Changing view acls to: hadoop
19/01/09 08:40:47 INFO SecurityManager: Changing modify acls to: hadoop
19/01/09 08:40:47 INFO SecurityManager: Changing view acls groups to: 
19/01/09 08:40:47 INFO SecurityManager: Changing modify acls groups to: 
19/01/09 08:40:47 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
19/01/09 08:40:47 INFO Client: Submitting application application_1547023042733_0001 to ResourceManager
19/01/09 08:40:48 INFO YarnClientImpl: Submitted application application_1547023042733_0001
19/01/09 08:40:49 INFO Client: Application report for application_1547023042733_0001 (state: ACCEPTED)
19/01/09 08:40:49 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1547023248110
     final status: UNDEFINED
     tracking URL: http://ip-172-30-0-84.ap-northeast-1.compute.internal:20888/proxy/application_1547023042733_0001/
     user: hadoop
19/01/09 08:40:50 INFO Client: Application report for application_1547023042733_0001 (state: ACCEPTED)
... (the same ACCEPTED report repeats once per second until 08:41:17) ...
19/01/09 08:41:18 INFO Client: Application report for application_1547023042733_0001 (state: FAILED)
19/01/09 08:41:18 INFO Client: 
     client token: N/A
     diagnostics: Application application_1547023042733_0001 failed 2 times due to AM Container for appattempt_1547023042733_0001_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://ip-172-30-0-84.ap-northeast-1.compute.internal:8088/cluster/app/application_1547023042733_0001Then, click on links to logs of each attempt.
Diagnostics: File does not exist: hdfs://ip-172-30-0-84.ap-northeast-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1547023042733_0001/__spark_libs__8470659354947187213.zip
java.io.FileNotFoundException: File does not exist: hdfs://ip-172-30-0-84.ap-northeast-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1547023042733_0001/__spark_libs__8470659354947187213.zip
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:359)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Failing this attempt. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1547023248110
     final status: FAILED
     tracking URL: http://ip-172-30-0-84.ap-northeast-1.compute.internal:8088/cluster/app/application_1547023042733_0001
     user: hadoop
Exception in thread "main" org.apache.spark.SparkException: Application application_1547023042733_0001 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1122)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1168)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/01/09 08:41:18 INFO ShutdownHookManager: Shutdown hook called
19/01/09 08:41:18 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-e0c6fbd3-14b0-4fcd-bbd2-c78658fdefd0
Command exiting with ret '1'

I can see the expected output in S3, yet the step reports failure. Am I missing something?

Here is my code:

package Spark_package

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
object SampleFile {
  def main(args: Array[String]) {

    val spark = SparkSession.builder.master("local[*]").appName("SampleFile").getOrCreate()
    val sc = spark.sparkContext
    val conf = new SparkConf().setAppName("SampleFile")
    val sqlContext = spark.sqlContext

    val df = spark.read.format("csv").option("header","true").option("inferSchema","true").load("s3a://test-system/Checktool/Zipdata/*.gz")

    df.createOrReplaceTempView("data")
    val res = spark.sql("select count(*) from data")

    res.coalesce(1).write.format("csv").option("header","true").mode("Overwrite").save("s3a://dev-system/Checktool/bkup/")

    spark.stop()
  }
}

How can I fix this?

1 answer:

Answer 0 (score: 0):

Remove .master("local[*]") and let the job run on the cluster. When a step is submitted on EMR, spark-submit supplies the correct master (yarn); hard-coding local[*] makes the driver run locally while YARN still tries to launch an ApplicationMaster, whose staging files are gone by then.
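A minimal sketch of the corrected entry point, assuming the master is supplied by spark-submit (e.g. --master yarn on EMR) rather than hard-coded; the bucket paths are the ones from the question, and the unused SparkConf/sqlContext lines are dropped:

```scala
package Spark_package

import org.apache.spark.sql.SparkSession

object SampleFile {
  def main(args: Array[String]): Unit = {
    // No .master(...) here: when the step runs via spark-submit on EMR,
    // the master (yarn) is set by the launcher, not by the application.
    val spark = SparkSession.builder
      .appName("SampleFile")
      .getOrCreate()

    val df = spark.read
      .format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("s3a://test-system/Checktool/Zipdata/*.gz")

    df.createOrReplaceTempView("data")
    val res = spark.sql("select count(*) from data")

    res.coalesce(1).write
      .format("csv")
      .option("header", "true")
      .mode("Overwrite")
      .save("s3a://dev-system/Checktool/bkup/")

    spark.stop()
  }
}
```

With the master left unset, the same jar can be submitted with `spark-submit --master yarn --deploy-mode cluster ...` as an EMR step, or tested locally with `--master "local[*]"`, without recompiling.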