How to resolve "Exception in thread "main" org.apache.spark.SparkException: Application finished with failed status"?

Asked: 2016-07-30 20:20:24

Tags: apache-spark spark-streaming

cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit --class SparkBWA --master yarn-cluster --deploy-mode cluster --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589
    Using properties file: /usr/lib/spark/conf/spark-defaults.conf
    Adding default property: spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
    Adding default property: spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
    Adding default property: spark.eventLog.enabled=true
    Adding default property: spark.driver.maxResultSize=1920m
    Adding default property: spark.shuffle.service.enabled=true
    Adding default property: spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
    Adding default property: spark.sql.parquet.cacheMetadata=false
    Adding default property: spark.driver.memory=3840m
    Adding default property: spark.dynamicAllocation.maxExecutors=10000
    Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
    Adding default property: spark.yarn.am.memoryOverhead=558
    Adding default property: spark.yarn.am.memory=5586m
    Adding default property: spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
    Adding default property: spark.master=yarn-client
    Adding default property: spark.executor.memory=5586m
    Adding default property: spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
    Adding default property: spark.dynamicAllocation.enabled=true
    Adding default property: spark.executor.cores=2
    Adding default property: spark.yarn.executor.memoryOverhead=558
    Adding default property: spark.dynamicAllocation.minExecutors=1
    Adding default property: spark.dynamicAllocation.initialExecutors=10000
    Adding default property: spark.akka.frameSize=512
    Parsed arguments:
    master yarn-cluster
    deployMode cluster
    executorMemory 1500m
    executorCores 1
    totalExecutorCores null
    propertiesFile /usr/lib/spark/conf/spark-defaults.conf
    driverMemory 1500m
    driverCores null
    driverExtraClassPath null
    driverExtraLibraryPath null
    driverExtraJavaOptions -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
    supervise false
    queue null
    numExecutors null
    files null
    pyFiles null
    archives file:/home/cancerdetector/SparkBWA/build/./bwa.zip
    mainClass SparkBWA
    primaryResource file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
    name SparkBWA
    childArgs [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589]
    jars null
    packages null
    packagesExclusions null
    repositories null
    verbose true
    Spark properties used, including those specified through
    --conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
    spark.yarn.am.memoryOverhead -> 558
    spark.driver.memory -> 1500m
    spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
    spark.executor.memory -> 5586m
    spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
    spark.eventLog.enabled -> true
    spark.scheduler.minRegisteredResourcesRatio -> 0.0
    spark.dynamicAllocation.maxExecutors -> 10000
    spark.akka.frameSize -> 512
    spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
    spark.sql.parquet.cacheMetadata -> false
    spark.shuffle.service.enabled -> true
    spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
    spark.dynamicAllocation.initialExecutors -> 10000
    spark.dynamicAllocation.minExecutors -> 1
    spark.yarn.executor.memoryOverhead -> 558
    spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
    spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
    spark.yarn.am.memory -> 5586m
    spark.driver.maxResultSize -> 1920m
    spark.master -> yarn-client
    spark.dynamicAllocation.enabled -> true
    spark.executor.cores -> 2
    Main class: org.apache.spark.deploy.yarn.Client
    Arguments:
    --name SparkBWA
    --driver-memory 1500m
    --executor-memory 1500m
    --executor-cores 1
    --archives file:/home/cancerdetector/SparkBWA/build/./bwa.zip
    --jar file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
    --class SparkBWA
    -algorithm mem
    -reads paired
    -index /Data/HumanBase/hg38
    -partitions 32
    ERR000589_1.filt.fastq
    ERR000589_2.filt.fastqhb
    Output_ERR000589
    System properties:
    spark.yarn.am.memoryOverhead -> 558
    spark.driver.memory -> 1500m
    spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
    spark.executor.memory -> 1500m
    spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
    spark.eventLog.enabled -> true
    spark.scheduler.minRegisteredResourcesRatio -> 0.0
    SPARK_SUBMIT -> true
    spark.dynamicAllocation.maxExecutors -> 10000
    spark.akka.frameSize -> 512
    spark.sql.parquet.cacheMetadata -> false
    spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
    spark.app.name -> SparkBWA
    spark.shuffle.service.enabled -> true
    spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
    spark.dynamicAllocation.initialExecutors -> 10000
    spark.dynamicAllocation.minExecutors -> 1
    spark.yarn.executor.memoryOverhead -> 558
    spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
    spark.submit.deployMode -> cluster
    spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
    spark.yarn.am.memory -> 5586m
    spark.driver.maxResultSize -> 1920m
    spark.master -> yarn-cluster
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 1
Classpath elements:
spark.yarn.am.memory is set but does not apply in cluster mode.
spark.yarn.am.memoryOverhead is set but does not apply in cluster mode.
16/07/31 01:12:39 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
16/07/31 01:12:40 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1467990031555_0106
Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0106 finished with failed status
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
    at org.apache.spark.deploy.yarn.Client.main(Client.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

When I tried to check the AM and executor logs, the command did not work, so I tried to access the NM's log directory manually to see the detailed application logs. Here are the application logs from the NM log file:

2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
    2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
    2016-07-31 01:12:40,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/SparkBWA.jar is closed by DFSClient_NONMAPREDUCE_-762268348_1
    2016-07-31 01:12:40,419 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip
    2016-07-31 01:12:40,445 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
    2016-07-31 01:12:40,446 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
    2016-07-31 01:12:40,448 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1
    2016-07-31 01:12:40,495 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip
    2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
    2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
    2016-07-31 01:12:40,509 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1
    2016-07-31 01:12:44,720 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_1.inprogress
    2016-07-31 01:12:44,877 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_1.inprogress for DFSClient_NONMAPREDUCE_-1111833453_14
    2016-07-31 01:12:45,373 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
    2016-07-31 01:12:45,375 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
    2016-07-31 01:12:45,379 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_1.inprogress is closed by DFSClient_NONMAPREDUCE_-1111833453_14
    2016-07-31 01:12:45,843 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b7989393-f278-477c-8e83-ff5da9079e8a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:12:49,914 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_2.inprogress
    2016-07-31 01:12:50,100 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_2.inprogress for DFSClient_NONMAPREDUCE_378341726_14
    2016-07-31 01:12:50,737 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
    2016-07-31 01:12:50,738 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
    2016-07-31 01:12:50,742 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_2.inprogress is closed by DFSClient_NONMAPREDUCE_378341726_14
    2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742335_1511 10.132.0.3:50010 10.132.0.4:50010 
    2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742337_1513 10.132.0.3:50010 10.132.0.4:50010 
    2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742336_1512 10.132.0.3:50010 10.132.0.4:50010 
    2016-07-31 01:12:51,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.3:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511]
    2016-07-31 01:12:54,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.4:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511]
    2016-07-31 01:12:55,868 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.46380a1f-b5fd-4924-96aa-f59dcae0cbec is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:13:05,882 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 244 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 234 SyncTimes(ms): 221 
    2016-07-31 01:13:05,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.7273ee28-eb1c-4fe2-98d2-c5a20ebe4ffa is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:13:15,892 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0f640743-d06c-4583-ac95-9d520dc8f301 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:13:25,902 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.bc63864c-0267-47b5-bcc1-96ba81d6c9a5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:13:35,910 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93557793-2ba2-47e8-b54c-234c861b6e6c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:13:45,918 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0fdf083c-3c53-4051-af16-d579f700962e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:13:55,927 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.834632f1-d9c6-4e14-9354-72f8c18f66d0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:14:05,933 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 262 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 252 SyncTimes(ms): 236 
    2016-07-31 01:14:05,936 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d06ef3b4-873f-464d-9cd0-e360da48e194 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:14:15,944 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.32ccba74-5f6c-45fc-b5db-26efb1b840e2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:14:25,952 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fef919cd-9952-4af8-a49a-e6dd2aa032f1 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:14:35,961 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.77ffdf36-8e42-43d8-9c1f-df6f3d11700d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:14:45,968 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c31cfcbb-b47c-4169-ab0f-7ae87d4f815d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:14:55,976 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6429570d-fb0a-4117-bb12-127a67e0a0b7 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:15:05,981 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 280 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 270 SyncTimes(ms): 253 
    2016-07-31 01:15:05,984 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8030b18d-05f2-4520-b5c4-2fe42338b92b is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:15:15,991 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f608a0f4-e730-43cd-a19d-da57caac346e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:15:25,999 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9d5a1f80-2f2a-43a7-84f1-b26a8c90a98f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:15:36,007 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.279e96fc-180c-47a5-a3ba-cfda581eedad is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:15:46,015 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a85bbf52-61f4-4899-98b1-23615a549774 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:15:56,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.80613e8e-7015-4aeb-81df-49884bd0eb5e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:16:06,028 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 298 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 288 SyncTimes(ms): 267 
    2016-07-31 01:16:06,031 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2be7fc48-bd1c-4042-88e4-239b1c630458 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:16:16,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.40fc68a6-f003-4e35-b4b3-50bd3c4a0c82 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:16:26,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.97e7d15c-4d28-4089-b4a5-9f0935a72589 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:16:36,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.84d8e78d-90fd-419f-9000-fa04ab56955e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:16:46,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6691cc3e-6969-4a8f-938f-272d1c96701d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:16:56,066 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.077143b6-281a-468c-8b2c-bcb6cd3bc27a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:17:06,070 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 316 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 306 SyncTimes(ms): 284 
    2016-07-31 01:17:06,073 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.817d1886-aea2-450a-a586-08677dc18d60 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:17:16,080 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.abd46886-1359-4c5e-8276-ea4f2969411f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:17:26,087 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.24625260-59be-4a9b-b47b-b8d5b76cb789 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:17:36,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.11630782-e50e-4260-a0da-99845bc3f1db is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:17:46,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.16cdd027-f1b8-4cbf-a30c-2f1712f4abb5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:17:56,111 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93fb2e86-2fec-4069-b73b-632750fda603 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:18:06,116 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 334 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 324 SyncTimes(ms): 300 
    2016-07-31 01:18:06,119 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b19fddda-ea90-49ab-b44d-434cce28cb67 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:18:16,127 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d81ab189-bde5-4878-b82b-903983466f86 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:18:26,135 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.e5b51632-f714-4814-b896-59bba137b42d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:18:36,144 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.39791121-9399-4a22-a50c-90eaddf31ffb is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:18:46,153 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.861c269b-5466-4855-84fd-587ed3306012 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:18:56,162 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8a9ff721-bd56-4bea-b399-31bfaabe8c7c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:19:06,168 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 352 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 342 SyncTimes(ms): 313 
    2016-07-31 01:19:06,170 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.492bf987-4991-4533-80e2-678efa843cb9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:19:16,178 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9294c0c6-43db-4f6d-9d31-f493143b6baf is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:19:26,187 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.341dd131-c14c-4147-bcbc-849d1d6bba8c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:19:36,196 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.56f92e8e-ef93-4279-a57f-472dd5d8f399 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:19:46,204 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5ddcda82-b501-4043-bb54-a29902d9d234 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:19:56,212 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.31e7517b-2ef3-458c-9979-324d7a96302f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:20:06,218 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 370 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 360 SyncTimes(ms): 329 
    2016-07-31 01:20:06,220 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5251f5df-0957-4008-b664-8d82eaa9789e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:20:16,229 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.3320b948-2478-4807-9ab3-d23e4945765e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:20:26,237 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0928c940-e57d-4a34-a7dc-53dade7ff909 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:20:36,246 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6240fcdf-696e-49c4-a883-3eda5ab89b4d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:20:46,254 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5622850e-b7b0-458a-9ffa-89e134fa3fda is closed by DFSClient_NONMAPREDUCE_-1615501432_1
    2016-07-31 01:20:56,262 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.faa076e8-490c-489f-8183-778325e0b144 is closed by DFSClient_NONMAPREDUCE_-1615501432_1

1 Answer:

Answer 0 (score: 5)

First, you need to find out which host/node was chosen to run the ApplicationMaster. Go to the YARN UI and look up your Spark application there.

Once you have the node, go to the logs on disk: logs/userlogs/application_1469891809555_0005/container_1469891809555_0005_01_000001/stderr. You need to find the stderr of container 000001, which is the Spark application's ApplicationMaster container.
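As a minimal sketch of the path convention described above (the application and container IDs are the example values from this answer, not from the asker's failed run, so substitute your own), NodeManager logs live under `{log-dir}/userlogs/{applicationId}/{containerId}`, and container `_000001` of the first attempt is the ApplicationMaster:

```shell
# Build the ApplicationMaster stderr path from the IDs used in the answer.
# These are example IDs; replace them with your application's own.
APP_ID="application_1469891809555_0005"
AM_CONTAINER="container_1469891809555_0005_01_000001"   # attempt 01, container 000001 = AM
echo "logs/userlogs/${APP_ID}/${AM_CONTAINER}/stderr"
```

If YARN log aggregation is enabled on the cluster, `yarn logs -applicationId <applicationId>` fetches the same container logs from HDFS without logging into the node.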
