Spark timeout issue

Time: 2018-10-03 22:15:36

Tags: scala apache-spark apache-spark-sql

I am running a small program on an Apache Spark cluster, and it fails with the log below. The log is misleading: I cannot tell whether the failure is caused by the TimeoutException or by the NullPointerException. Can someone help me understand and interpret it? (A short sketch of one common trigger for this timeout follows the log.)

Container: container_1538602189474_0001_02_000001 on 172.31.38.133_44198
==========================================================================
LogType:stderr
Log Upload Time:Wed Oct 03 14:36:10 -0700 2018
LogLength:6588
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/tmp/nm-local-dir/usercache/ccc_v1_g_55799_16370/filecache/10/__spark_libs__1903550458924267347.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/10/03 14:34:27 INFO util.SignalUtils: Registered signal handler for TERM
18/10/03 14:34:27 INFO util.SignalUtils: Registered signal handler for HUP
18/10/03 14:34:27 INFO util.SignalUtils: Registered signal handler for INT
18/10/03 14:34:28 INFO yarn.ApplicationMaster: Preparing Local resources
18/10/03 14:34:28 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1538602189474_0001_000002
18/10/03 14:34:28 INFO spark.SecurityManager: Changing view acls to: hadoop,ccc_v1_g_55799_16370
18/10/03 14:34:28 INFO spark.SecurityManager: Changing modify acls to: hadoop,ccc_v1_g_55799_16370
18/10/03 14:34:28 INFO spark.SecurityManager: Changing view acls groups to: 
18/10/03 14:34:28 INFO spark.SecurityManager: Changing modify acls groups to: 
18/10/03 14:34:28 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop, ccc_v1_g_55799_16370); groups with view permissions: Set(); users  with modify permissions: Set(hadoop, ccc_v1_g_55799_16370); groups with modify permissions: Set()
18/10/03 14:34:28 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
18/10/03 14:34:28 INFO yarn.ApplicationMaster: Waiting for spark context initialization...
18/10/03 14:34:30 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:30 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:34 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:34 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:35 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:35 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:35 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:35 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:36 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:36 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:36 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:34:36 WARN streaming.FileStreamSink: Error while looking for metadata directory.
18/10/03 14:36:09 ERROR yarn.ApplicationMaster: Uncaught exception: 
java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:401)
    at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:254)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:764)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:67)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:66)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:762)
    at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
18/10/03 14:36:09 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@44757d4)
18/10/03 14:36:09 ERROR scheduler.LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(21,1538602569367,JobFailed(org.apache.spark.SparkException: Job 21 cancelled because SparkContext was shut down))
18/10/03 14:36:09 ERROR netty.Inbox: Ignoring error
java.lang.NullPointerException
    at org.apache.spark.storage.BlockManagerMaster.getStorageStatus(BlockManagerMaster.scala:167)
    at org.apache.spark.storage.BlockManagerSource$$anonfun$6.apply(BlockManagerSource.scala:51)
    at org.apache.spark.storage.BlockManagerSource$$anonfun$6.apply(BlockManagerSource.scala:51)
    at org.apache.spark.storage.BlockManagerSource$$anon$1.getValue(BlockManagerSource.scala:31)
    at org.apache.spark.storage.BlockManagerSource$$anon$1.getValue(BlockManagerSource.scala:30)
    at com.codahale.metrics.CsvReporter.reportGauge(CsvReporter.java:234)
    at com.codahale.metrics.CsvReporter.report(CsvReporter.java:150)
    at com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162)
    at org.apache.spark.metrics.sink.CsvSink.report(CsvSink.scala:71)
    at org.apache.spark.metrics.MetricsSystem$$anonfun$report$1.apply(MetricsSystem.scala:116)
    at org.apache.spark.metrics.MetricsSystem$$anonfun$report$1.apply(MetricsSystem.scala:116)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.metrics.MetricsSystem.report(MetricsSystem.scala:116)
    at org.apache.spark.executor.Executor.stop(Executor.scala:214)
    at org.apache.spark.scheduler.local.LocalEndpoint$$anonfun$receiveAndReply$1.applyOrElse(LocalSchedulerBackend.scala:79)
    at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:105)
    at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:205)
    at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:101)
    at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
End of LogType:stderr
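
For reference, the TimeoutException is the primary failure: in yarn-cluster mode the ApplicationMaster waits spark.yarn.am.waitTime (100 s by default, matching the 100000 milliseconds in the trace) for the user code to create a SparkContext, and the later NullPointerException appears to be only a secondary effect of the metrics reporter firing while everything shuts down. The LocalSchedulerBackend frames in that second trace hint that the driver built its context against a local master, which would never register with the ApplicationMaster. Below is a minimal, hypothetical sketch of a driver that reproduces this symptom; the object name, app name, and the hardcoded master are illustrative assumptions, not code from the question.

    // Hypothetical driver: the hardcoded master is the suspect line.
    // Submitted with --master yarn --deploy-mode cluster, this context
    // starts a LocalSchedulerBackend (as seen in the NPE trace above)
    // and never registers with the YARN ApplicationMaster, so the AM
    // gives up after spark.yarn.am.waitTime with
    // "Futures timed out after [100000 milliseconds]".
    import org.apache.spark.sql.SparkSession

    object SmallProgram {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession
          .builder()
          .appName("small-program")
          .master("local[*]") // remove this and let spark-submit supply the master
          .getOrCreate()

        // Trivial job standing in for the real workload.
        spark.range(1000).selectExpr("sum(id) AS total").show()

        spark.stop()
      }
    }

If the driver genuinely needs more than 100 s of setup before creating the context, the wait can instead be raised at submit time, e.g. --conf spark.yarn.am.waitTime=300s on the spark-submit command line.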

0 Answers:

There are no answers yet.