Explicit cast when reading a .csv into a case class with Spark 2.1.0

Asked: 2017-04-02 14:23:13

Tags: scala csv apache-spark

I have the following case class:

case class OrderDetails(OrderID : String, ProductID : String, UnitPrice : Double,
                    Qty : Int, Discount : Double)

I am trying to read this CSV: https://github.com/xsankar/fdps-v3/blob/master/data/NW-Order-Details.csv

This is my code:

val spark = SparkSession.builder.master(sparkMaster).appName(sparkAppName).getOrCreate()
import spark.implicits._
val orderDetails = spark.read.option("header","true").csv( inputFiles + "NW-Order-Details.csv").as[OrderDetails]

The error is:

 Exception in thread "main" org.apache.spark.sql.AnalysisException: 
 Cannot up cast `UnitPrice` from string to double as it may truncate
 The type path of the target object is:
  - field (class: "scala.Double", name: "UnitPrice")
  - root class: "es.own3dh2so4.OrderDetails"
 You can either add an explicit cast to the input data or choose a higher precision type of the field in the target object;

If all the fields are "Double" values, why can't the cast be performed? What am I missing?

Spark version 2.1.0, Scala version 2.11.7

2 Answers:

Answer 0 (score: 10)

By default spark.read.csv reads every column as a string, so you just need to cast the field to Double explicitly:

import org.apache.spark.sql.types.DoubleType

val orderDetails = spark.read
   .option("header","true")
   .csv( inputFiles + "NW-Order-Details.csv")
   .withColumn("unitPrice", 'UnitPrice.cast(DoubleType))
   .as[OrderDetails]
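
Note that Qty is an Int and Discount a Double in the case class, so those fields will likely trip the same up-cast error once UnitPrice is fixed. A minimal sketch extending the same pattern to all three numeric columns (assuming the CSV's column names match the case class fields):

import org.apache.spark.sql.types.{DoubleType, IntegerType}

val orderDetails = spark.read
   .option("header","true")
   .csv( inputFiles + "NW-Order-Details.csv")
   // every CSV column arrives as a string, so cast each numeric field explicitly
   .withColumn("UnitPrice", 'UnitPrice.cast(DoubleType))
   .withColumn("Qty", 'Qty.cast(IntegerType))
   .withColumn("Discount", 'Discount.cast(DoubleType))
   .as[OrderDetails]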

Also, by Scala (and Java) convention, your case class constructor parameters should be lower camelCase:

case class OrderDetails(orderID: String, 
                        productID: String, 
                        unitPrice: Double,
                        qty: Int, 
                        discount: Double)
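
(Spark's column resolution is case-insensitive by default, so the renamed lower camelCase fields should still match the OrderID-style headers in the CSV.)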

Answer 1 (score: 0)

If you need to change the data type of several columns, chaining withColumn calls gets ugly. A better way to apply a schema to the data is the following (see the combined sketch after the list):

  1. Get the case class schema with an encoder, like this: val caseClassSchema = Encoders.product[CaseClass].schema

  2. Apply this schema when reading the data: val data = spark.read.schema(caseClassSchema)
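
Putting the two steps together, a minimal runnable sketch, assuming the OrderDetails case class from the question (the master URL, app name, and file path are placeholders):

import org.apache.spark.sql.{Encoders, SparkSession}

val spark = SparkSession.builder.master("local[*]").appName("orders").getOrCreate()
import spark.implicits._

// 1. derive the column names and types from the case class
val caseClassSchema = Encoders.product[OrderDetails].schema

// 2. apply the schema at read time, so no per-column casts are needed
val data = spark.read
   .schema(caseClassSchema)
   .option("header", "true")
   .csv("NW-Order-Details.csv")
   .as[OrderDetails]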