Joining DataFrames and performing operations

Date: 2017-10-19 16:41:10

Tags: scala apache-spark

Hello everyone. I have a DataFrame that is up to date as of each date; every day I need to add the new qte and the new ca to the old values and update the date. So I need to update the rows that already exist and append the new ones. Here is an example of what I want to end up with.

    // Needs import spark.implicits._ and org.apache.spark.sql.types._ for the casts below
    val histocaisse = spark.read
      .format("csv")
      .option("header", "true") // reading the headers
      .load("C:/Users/MHT/Desktop/histocaisse_dte1.csv")

    val hist = histocaisse
      .withColumn("pos_id", 'pos_id.cast(LongType))
      .withColumn("article_id", 'article_id.cast(LongType)) // was 'pos_id, a copy-paste slip
      .withColumn("date", 'date.cast(DateType))
      .withColumn("qte", 'qte.cast(DoubleType))
      .withColumn("ca", 'ca.cast(DoubleType))



    val histocaisse2 = spark.read
      .format("csv")
      .option("header", "true") // reading the headers
      .load("C:/Users/MHT/Desktop/histocaisse_dte2.csv")

    val hist2 = histocaisse2
      .withColumn("pos_id", 'pos_id.cast(LongType))
      .withColumn("article_id", 'article_id.cast(LongType)) // was 'pos_id, a copy-paste slip
      .withColumn("date", 'date.cast(DateType))
      .withColumn("qte", 'qte.cast(DoubleType))
      .withColumn("ca", 'ca.cast(DoubleType))

    hist.show(false)
    hist2.show(false)

+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-07|2.5 |3.5 |
|2     |2         |2000-01-07|14.7|12.0|
|3     |3         |2000-01-07|3.5 |1.2 |
+------+----------+----------+----+----+

+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-08|2.5 |3.5 |
|2     |2         |2000-01-08|14.7|12.0|
|3     |3         |2000-01-08|3.5 |1.2 |
|4     |4         |2000-01-08|3.5 |1.2 |
|5     |5         |2000-01-08|14.5|1.2 |
|6     |6         |2000-01-08|2.0 |1.25|
+------+----------+----------+----+----+

And here is the result I want (existing rows updated with summed qte and ca and the new date, new rows appended):

+------+----------+----------+----+----+
|pos_id|article_id|date      |qte |ca  |
+------+----------+----------+----+----+
|1     |1         |2000-01-08|5.0 |7.0 |
|2     |2         |2000-01-08|39.4|24.0|
|3     |3         |2000-01-08|7.0 |2.4 |
|4     |4         |2000-01-08|3.5 |1.2 |
|5     |5         |2000-01-08|14.5|1.2 |
|6     |6         |2000-01-08|2.0 |1.25|
+------+----------+----------+----+----+

Here is what I did:

    val histoCombinaison2 = hist2.join(hist, Seq("article_id", "pos_id"), "left")
      .groupBy("article_id", "pos_id")
      .agg((hist2("qte") + hist("qte")).as("qte"),
           (hist2("ca") + hist("ca")).as("ca"),
           hist2("date"))

    histoCombinaison2.show()

and I got the following exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: expression '`qte`' is neither present in the group by, nor is it an aggregate function. Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:40)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:58)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.org$apache$spark$sql$catalyst$analysis$CheckAnalysis$class$$anonfun$$checkValidAggregateExpression$1(CheckAnalysis.scala:218)
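The exception is raised because `hist2("date")` is passed to `agg` without being wrapped in an aggregate function: every expression in `agg` must either appear in `groupBy` or be an aggregate. An untested sketch of the minimal change (assuming any value per group is acceptable, which holds here since the join keys make each group a single row):

```scala
// Untested sketch: wrap each non-grouped column in an aggregate function,
// e.g. first(), as the error message itself suggests.
import org.apache.spark.sql.functions.first

val histoCombinaison2 = hist2.join(hist, Seq("article_id", "pos_id"), "left")
  .groupBy("article_id", "pos_id")
  .agg(
    first(hist2("qte") + hist("qte")).as("qte"),
    first(hist2("ca") + hist("ca")).as("ca"),
    first(hist2("date")).as("date"))
```

Note this only removes the analysis error: rows that exist only in hist2 still produce null sums, because the left join leaves hist's columns null for them; handling those nulls is the real fix.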

2 answers:

Answer 0 (score: 0)

location

Thanks.

Answer 1 (score: 0)

As I already mentioned in the comments, you should define a schema and use it when reading the csv into a dataframe:

import sqlContext.implicits._

import org.apache.spark.sql.types._
val schema = StructType(Seq(
  StructField("pos_id", LongType, true),
  StructField("article_id", LongType, true),
  StructField("date", DateType, true),
  StructField("qte", DoubleType, true), // qte holds decimals such as 14.7, so DoubleType, not LongType
  StructField("ca", DoubleType, true)
))

val hist1 = sqlContext.read
  .format("csv")
  .option("header", "true")
  .schema(schema)
  .load("C:/Users/MHT/Desktop/histocaisse_dte1.csv")

hist1.show

val hist2 = sqlContext.read
  .format("csv")
  .option("header", "true") //reading the headers
  .schema(schema)
  .load("C:/Users/MHT/Desktop/histocaisse_dte2.csv")

hist2.show

Then you should use the when function to define the logic you need to implement:

val df = hist2.join(hist1, Seq("article_id", "pos_id"), "left")
  .select($"pos_id", $"article_id",
    when(hist2("date").isNotNull, hist2("date")).otherwise(when(hist1("date").isNotNull, hist1("date")).otherwise(lit(null))).alias("date"),
    (when(hist2("qte").isNotNull, hist2("qte")).otherwise(lit(0)) + when(hist1("qte").isNotNull, hist1("qte")).otherwise(lit(0))).alias("qte"),
    (when(hist2("ca").isNotNull, hist2("ca")).otherwise(lit(0)) + when(hist1("ca").isNotNull, hist1("ca")).otherwise(lit(0))).alias("ca"))

I hope the answer is helpful.
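To make the intended semantics concrete, here is a small plain-Scala sketch of the same merge outside Spark (names such as `Row` and `mergeSnapshots` are hypothetical): for each (pos_id, article_id) key, sum qte and ca across the old and new snapshots and keep the newer row's date.

```scala
// Hypothetical in-memory model of the two daily snapshots.
case class Row(posId: Long, articleId: Long, date: String, qte: Double, ca: Double)

// For every row of the new snapshot, accumulate qte/ca from the old one
// when the key already exists; otherwise keep the new row unchanged.
def mergeSnapshots(old: Seq[Row], fresh: Seq[Row]): Seq[Row] = {
  val oldByKey = old.map(r => (r.posId, r.articleId) -> r).toMap
  fresh.map { n =>
    oldByKey.get((n.posId, n.articleId)) match {
      case Some(o) => n.copy(qte = n.qte + o.qte, ca = n.ca + o.ca) // existing key: accumulate
      case None    => n                                             // new key: keep as-is
    }
  }
}

val hist1 = Seq(Row(1, 1, "2000-01-07", 2.5, 3.5))
val hist2 = Seq(Row(1, 1, "2000-01-08", 2.5, 3.5), Row(4, 4, "2000-01-08", 3.5, 1.2))
val merged = mergeSnapshots(hist1, hist2)
// merged: Row(1,1,2000-01-08,5.0,7.0), Row(4,4,2000-01-08,3.5,1.2)
```

In Spark itself, the `when(col.isNotNull, col).otherwise(lit(0))` chains above can be written more concisely as `coalesce(col, lit(0))` from `org.apache.spark.sql.functions`; the semantics are the same.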