Deploying a Storm topology JAR

Date: 2015-12-19 07:10:00

Tags: java apache-storm kafka-consumer-api

  1. I developed a Java class that reads data from a Kafka queue and prints it out:

    // Assumed imports (storm-core and storm-kafka on the classpath):
    // backtype.storm.Config, backtype.storm.LocalCluster,
    // backtype.storm.topology.TopologyBuilder, backtype.storm.spout.SchemeAsMultiScheme,
    // storm.kafka.ZkHosts, storm.kafka.SpoutConfig, storm.kafka.KafkaSpout, storm.kafka.StringScheme
    ZkHosts zkHosts = new ZkHosts("localhost:2181");
    String topic_name = "test";
    String consumer_group_id = "storm";
    String zookeeper_root = "";
    SpoutConfig kafkaConfig = new SpoutConfig(zkHosts,
            topic_name, zookeeper_root, consumer_group_id);
    kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
    /*kafkaConfig.forceFromStart = false;
    kafkaConfig.startOffsetTime = -2;*/

    KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("KafkaSpout", kafkaSpout);
    builder.setBolt("PrinterBolt", new PrinterBolt()).globalGrouping("KafkaSpout");

    Map<String, Object> conf = new HashMap<String, Object>();
    conf.put(Config.TRANSACTIONAL_ZOOKEEPER_PORT, 2181);
    conf.put(Config.TRANSACTIONAL_ZOOKEEPER_SERVERS, Arrays.asList("localhost"));
    conf.put(Config.STORM_ZOOKEEPER_SESSION_TIMEOUT, 20000);
    conf.put(Config.STORM_ZOOKEEPER_CONNECTION_TIMEOUT, 20000);
    conf.put(Config.STORM_ZOOKEEPER_RETRY_TIMES, 3);
    conf.put(Config.STORM_ZOOKEEPER_RETRY_INTERVAL, 30);

    // Run in-process; sleep so the spout has time to emit before the JVM exits
    LocalCluster cluster = new LocalCluster();
    try {
        cluster.submitTopology("KafkaConsumerTopology", conf, builder.createTopology());
        Thread.sleep(120000);
    } catch (Exception e) {
        System.out.println(e.getMessage());
    }
    cluster.shutdown();
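
    This snippet runs the topology in a LocalCluster, which is meant for in-process testing only. When the jar is handed to a real cluster with `bin/storm jar` (as the answer below ends up doing), the topology has to be submitted via StormSubmitter instead; a minimal sketch, assuming the same `conf` and `builder` objects as above:

        // Sketch: cluster submission instead of LocalCluster.
        // Assumed import: backtype.storm.StormSubmitter
        // submitTopology throws AlreadyAliveException / InvalidTopologyException,
        // so declare or catch them in the calling method.
        StormSubmitter.submitTopology("KafkaConsumerTopology", conf, builder.createTopology());

    With this variant there is no need for the Thread.sleep; the cluster keeps the topology running until it is killed.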
    
  2. After coding, I build a JAR with Maven and move the jar to the Amazon AWS cluster.

  3. Then I run a command like:

     nohup java -cp uber-***-0.0.1-SNAPSHOT.jar com.***.&&&.kafka.App

    But I am getting an error here. Can someone tell me what I did wrong in the deployment? Here is what I was wondering about:

    • Do I need to deploy this jar file into the Storm config folder? I currently placed the jar in a separate folder on AWS (not inside the Storm folder).
    • How can I view the System.out output?
    • Do I need to include any yml file in my project?

    Please see the exception below:

    29537 [Thread-14-KafkaSpout] ERROR backtype.storm.util - Async loop died!
    java.lang.ExceptionInInitializerError: null
        at org.apache.log4j.Logger.getLogger(Logger.java:39) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.utils.Logging$class.logger(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.network.BlockingChannel.logger$lzycompute(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.network.BlockingChannel.logger(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.utils.Logging$class.debug(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.network.BlockingChannel.debug(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.network.BlockingChannel.connect(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.consumer.SimpleConsumer.connect(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.consumer.SimpleConsumer.getOrMakeConnection(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.consumer.SimpleConsumer.getOffsetsBefore(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(Unknown Source) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:77) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:67) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at storm.kafka.PartitionManager.<init>(PartitionManager.java:83) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:135) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at backtype.storm.daemon.executor$fn__3373$fn__3388$fn__3417.invoke(executor.clj:565) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at backtype.storm.util$async_loop$fn__464.invoke(util.clj:463) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        at clojure.lang.AFn.run(AFn.java:24) [uber-iot-0.0.1-SNAPSHOT.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]
    Caused by: java.lang.IllegalStateException: Detected both log4j-over-slf4j.jar AND slf4j-log4j12.jar on the class path, preempting StackOverflowError. See also http://www.slf4j.org/codes.html#log4jDelegationLoop for more details.
        at org.apache.log4j.Log4jLoggerFactory.<clinit>(Log4jLoggerFactory.java:49) ~[uber-iot-0.0.1-SNAPSHOT.jar:na]
        ... 22 common frames omitted
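
    The `Caused by` line is the actual error: SLF4J detected both `log4j-over-slf4j.jar` and `slf4j-log4j12.jar` inside the uber jar, a combination it refuses to run with because it would create an infinite logging delegation loop (the linked slf4j.org page explains the check). A hedged fix, assuming a Maven build where `slf4j-log4j12` and `log4j` arrive transitively (the Kafka 0.8.x artifact is a common source; `mvn dependency:tree` shows the real culprit), is to exclude them from the offending dependency:

        <dependency>
          <groupId>org.apache.kafka</groupId>
          <artifactId>kafka_2.10</artifactId>
          <version>0.8.2.2</version>
          <exclusions>
            <exclusion>
              <groupId>org.slf4j</groupId>
              <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
            <exclusion>
              <groupId>log4j</groupId>
              <artifactId>log4j</artifactId>
            </exclusion>
          </exclusions>
        </dependency>

    The artifact and version here are illustrative, not taken from the question; the point is that only one log4j binding may remain on the class path.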
    

1 answer:

Answer 0 (score: 0)

@Matthias J. Sax and everyone, thanks for your help. The mistake I made here was that the deployment process I followed was wrong. To deploy the topology build, I had to follow this process:

  1. The jar has to go into the Storm folder on the AWS machine, and then the commands below have to be run so that Storm picks it up.
  2. rm -f *.out

     (nohup bin/storm nimbus > nimbus.out) &

     (nohup bin/storm supervisor > supervisor.out) &

     (nohup bin/storm jar topos/IoT.jar com.bridgera.iot.test.App01 > IoT.out) &

     Here I am telling Storm where it can find my jar and its main class, from which it can find the topology builder...
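
     To the earlier question about viewing system output: with the commands above, the client-side output of the submitted jar is redirected into IoT.out. A sketch of how to inspect things afterwards, assuming a default Storm directory layout (file names are illustrative):

         # follow the output of the submitted jar
         tail -f IoT.out
         # list topologies known to the cluster
         bin/storm list
         # on a real (non-local) submission, spout/bolt stdout ends up in the worker logs
         tail -f logs/worker-*.log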

    Thank you all...