Spark 1.2.1 standalone cluster mode: spark-submit does not work

Asked: 2015-02-28 06:10:35

Tags: apache-spark

I have a 3-node Spark cluster:

node1, node2, and node3

I run the following command on node1 to deploy the driver:

/usr/local/spark-1.2.1-bin-hadoop2.4/bin/spark-submit --class com.fst.firststep.aggregator.FirstStepMessageProcessor --master spark://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7077 --deploy-mode cluster --supervise file:///home/xyz/sparkstreaming-0.0.1-SNAPSHOT.jar /home/xyz/config.properties

The driver launches on node2 of the cluster, but on node2 it throws an exception because it tries to bind to node1's IP:

2015-02-26 08:47:32 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off 
2015-02-26 08:47:32 INFO  Slf4jLogger:80 - Slf4jLogger started 
2015-02-26 08:47:33 ERROR NettyTransport:65 - failed to bind to ec2-xx.xx.xx.xx.compute-1.amazonaws.com/xx.xx.xx.xx:0, shutting down Netty transport 
2015-02-26 08:47:33 WARN  Utils:71 - Service 'Driver' could not bind on port 0. Attempting port 1. 
2015-02-26 08:47:33 DEBUG AkkaUtils:63 - In createActorSystem, requireCookie is: off 
2015-02-26 08:47:33 ERROR Remoting:65 - Remoting error: [Startup failed] [ 
akka.remote.RemoteTransportException: Startup failed 
        at akka.remote.Remoting.akka$remote$Remoting$$notifyError(Remoting.scala:136) 
        at akka.remote.Remoting.start(Remoting.scala:201) 
        at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184) 
        at akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:618) 
        at akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:615) 
        at akka.actor.ActorSystemImpl._start(ActorSystem.scala:615) 
        at akka.actor.ActorSystemImpl.start(ActorSystem.scala:632) 
        at akka.actor.ActorSystem$.apply(ActorSystem.scala:141) 
        at akka.actor.ActorSystem$.apply(ActorSystem.scala:118) 
        at org.apache.spark.util.AkkaUtils$.org$apache$spark$util$AkkaUtils$$doCreateActorSystem(AkkaUtils.scala:121) 
        at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:54) 
        at org.apache.spark.util.AkkaUtils$$anonfun$1.apply(AkkaUtils.scala:53) 
        at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1765) 
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141) 
        at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1756) 
        at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:56) 
        at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:33) 
        at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala) 
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: ec2-xx-xx-xx.compute-1.amazonaws.com/xx.xx.xx.xx:0 
        at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272) 
        at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393) 
        at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389) 
        at scala.util.Success$$anonfun$map$1.apply(Try.scala:206) 
        at scala.util.Try$.apply(Try.scala:161) 
        at scala.util.Success.map(Try.scala:206) 

Please advise.

Thanks

2 Answers:

Answer 0 (score: 5)

After spending a lot of time on it, I found the answer. I made the following changes:

  1. Removed the SPARK_LOCAL_IP and SPARK_MASTER_IP entries.
  2. Added the hostname and private IP address of each of the other nodes to /etc/hosts.
  3. Kept using --deploy-mode cluster --supervise.

That's all; it now works perfectly with full HA components (master, slaves, and driver).

Thanks
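A rough sketch of what steps 1 and 2 above could look like in practice. The hostnames and private IPs below are illustrative placeholders, not values from the original post, and the exact spark-env.sh contents on the asker's cluster are unknown:

```shell
# Step 1: in conf/spark-env.sh on every node, remove (or comment out) any
# hard-coded bind addresses so Spark resolves them via hostname lookup:
# export SPARK_LOCAL_IP=...     <- delete this entry
# export SPARK_MASTER_IP=...    <- delete this entry

# Step 2: in /etc/hosts on every node, map each node's hostname to its
# private IP (addresses below are made up):
# 10.0.0.1  node1
# 10.0.0.2  node2
# 10.0.0.3  node3
```

With every node able to resolve every other node's hostname to its private IP, the driver spawned on node2 can bind to its own address instead of node1's.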

Answer 1 (score: 2)

Cluster deploy mode is not supported on EC2 in Spark 1.2, where a standalone cluster is created. So you can try removing

--deploy-mode cluster --supervise
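Following that suggestion, the submit command from the question would drop those two flags and run in the default client mode, so the driver runs on the submitting node (the `xx` placeholders are kept as in the question):

```shell
/usr/local/spark-1.2.1-bin-hadoop2.4/bin/spark-submit \
  --class com.fst.firststep.aggregator.FirstStepMessageProcessor \
  --master spark://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:7077 \
  file:///home/xyz/sparkstreaming-0.0.1-SNAPSHOT.jar \
  /home/xyz/config.properties
```

In client mode the driver binds on the machine where spark-submit is invoked, which sidesteps the cross-node bind failure seen in the stack trace.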