Joining a static dataset with a DStream

Posted: 2015-09-03 14:22:56

Tags: java apache-spark spark-streaming hadoop2

I am trying to write a Spark Streaming application in Java. My Spark application reads a continuous feed from a Hadoop directory using textFileStream() with a 1-minute batch interval. I need to perform a Spark aggregation (group by) operation on the incoming DStream. After the aggregation, I join the aggregated DStream<Key, Value1> with an RDD<Key, Value2> created from a static dataset that is read from the Hadoop directory with textFile().

The problem appears when checkpointing is enabled. With an empty checkpoint directory it runs fine. After running 2-3 batches, I shut it down with ctrl + c and start it again. On the second run it immediately throws the Spark exception "SPARK-5063":

Exception in thread "main" org.apache.spark.SparkException: RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063

Below is the code block of the Spark application:

private void compute(JavaSparkContext sc, JavaStreamingContext ssc) {

   JavaRDD<String> distFile = sc.textFile(MasterFile);      
   JavaDStream<String> file = ssc.textFileStream(inputDir);             

   // Read Master file
   JavaRDD<MasterParseLog> masterLogLines = distFile.flatMap(EXTRACT_MASTER_LOGLINES);
   final JavaPairRDD<String, String> masterRDD = masterLogLines.mapToPair(MASTER_KEY_VALUE_MAPPER);

   // Continuous Streaming file
   JavaDStream<ParseLog> logLines = file.flatMap(EXTRACT_CKT_LOGLINES);

   // calculate the sum of required field and generate group sum RDD
   JavaPairDStream<String, Summary> sumRDD = logLines.mapToPair(CKT_GRP_MAPPER);
   JavaPairDStream<String, Summary> grpSumRDD = sumRDD.reduceByKey(CKT_GRP_SUM);

   //GROUP BY Operation
   JavaPairDStream<String, Summary> grpAvgRDD = grpSumRDD.mapToPair(CKT_GRP_AVG);

   // Join Master RDD with the DStream  //This is the block causing error (without it code is working fine)
   JavaPairDStream<String, Tuple2<String, String>> joinedStream = grpAvgRDD.transformToPair(

       new Function2<JavaPairRDD<String, String>, Time, JavaPairRDD<String, Tuple2<String, String>>>() {

           private static final long serialVersionUID = 1L;

           public JavaPairRDD<String, Tuple2<String, String>> call(
               JavaPairRDD<String, String> rdd, Time v2) throws Exception {
               return masterRDD.value().join(rdd);
           }
       }
   );
   joinedStream.print(10);
}

public static void main(String[] args) {

   JavaStreamingContextFactory contextFactory = new JavaStreamingContextFactory() {
        public JavaStreamingContext create() {

           // Create the context with a 60 second batch size
           SparkConf sparkConf = new SparkConf();
           final JavaSparkContext sc = new JavaSparkContext(sparkConf);
           JavaStreamingContext ssc1 = new JavaStreamingContext(sc, Durations.seconds(duration));               

           app.compute(sc, ssc1);

           ssc1.checkpoint(checkPointDir);                       
           return ssc1;
        }
   };

   JavaStreamingContext ssc = JavaStreamingContext.getOrCreate(checkPointDir, contextFactory);

   // start the streaming server
   ssc.start();
   logger.info("Streaming server started...");

   // wait for the computations to finish
   ssc.awaitTermination();
   logger.info("Streaming server stopped...");
}

I know that the code block joining the static dataset with the DStream is causing the error (without it the code works fine), but it is taken from the Spark Streaming programming guide on the Apache Spark website (sub-section "stream-dataset joins" under "Join Operations"). Please help me get this working, even if it has to be done in a different way. I need checkpointing enabled in my streaming application.
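For reference, the transform-and-join pattern from that section of the programming guide can be written in Java roughly as in the sketch below. This is only a minimal sketch: the class name, joinWithMaster and masterFilePath are hypothetical, a simple "key,value" line layout is assumed for the master file, and re-creating the static RDD inside the transform function (instead of capturing an RDD built at start-up) is one commonly suggested way to keep the closure checkpoint-friendly, not a verified fix for this error.

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import scala.Tuple2;

public class StreamDatasetJoinSketch {

    // Join each batch of the stream with a static dataset, following the
    // transform-and-join pattern from the "Join Operations" section of the
    // Spark Streaming programming guide.
    public static JavaPairDStream<String, Tuple2<String, String>> joinWithMaster(
            JavaPairDStream<String, String> stream, final String masterFilePath) {

        return stream.transformToPair(
            new Function<JavaPairRDD<String, String>, JavaPairRDD<String, Tuple2<String, String>>>() {

                private static final long serialVersionUID = 1L;

                @Override
                public JavaPairRDD<String, Tuple2<String, String>> call(
                        JavaPairRDD<String, String> batchRdd) throws Exception {

                    // Rebuild the static RDD from the context of the batch RDD so the
                    // closure does not capture an RDD created before checkpoint recovery.
                    JavaSparkContext jsc =
                            JavaSparkContext.fromSparkContext(batchRdd.context());

                    JavaPairRDD<String, String> masterRdd = jsc
                            .textFile(masterFilePath)
                            .mapToPair(new PairFunction<String, String, String>() {
                                private static final long serialVersionUID = 1L;

                                @Override
                                public Tuple2<String, String> call(String line) {
                                    // Assumed "key,value" layout of the master file.
                                    String[] parts = line.split(",", 2);
                                    return new Tuple2<String, String>(parts[0], parts[1]);
                                }
                            });

                    return batchRdd.join(masterRdd);
                }
            });
    }
}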

Environment details:

  • CentOS 6.5: 2-node cluster
  • Java: 1.8
  • Spark: 1.4.1
  • Hadoop: 2.7.1

0 Answers:

There are no answers yet.