Spark Scala: CSV column names to lowercase

Posted: 2017-10-06 21:39:45

Tags: scala csv apache-spark apache-spark-sql

Please find the code below and let me know how to change the column names to lowercase. I tried withColumnRenamed, but I would have to call it for each column and type out every column name. I just want to apply this across all columns, so I don't want to list each name individually, as there are too many of them.

Scala version: 2.11, Spark: 2.2
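
For context, the per-column withColumnRenamed approach mentioned above looks roughly like this (a sketch only; the mixed-case source names are illustrative, not taken from the actual CSV):

val renamed = df
  .withColumnRenamed("StateAbbr", "stateabbr")
  .withColumnRenamed("StateDesc", "statedesc")
  .withColumnRenamed("CityName", "cityname")
  // ...one call per column, which quickly becomes unmanageable with many columns

The question is how to do the same for every column without spelling each name out.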


Required output:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession
import com.datastax.spark.connector._

object dataframeset {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setAppName("Sample1").setMaster("local[*]")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val rdd1 = sc.cassandraTable("tdata", "map3")
    Logger.getLogger("org").setLevel(Level.ERROR)
    Logger.getLogger("akka").setLevel(Level.ERROR)
    val spark1 = SparkSession.builder().master("local")
      .config("spark.cassandra.connection.host", "127.0.0.1")
      .appName("Spark SQL basic example").getOrCreate()

    // Read the CSV with a header row and let Spark infer the schema
    val df = spark1.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("/Users/Desktop/del2.csv")
    import spark1.implicits._
    println("\nTop Records are:")
    df.show(1)

    val dfprev1 = df.select("sno", "year", "StateAbbr")

    dfprev1.show(1)
  }
}

Actual output:

|sno|year|stateabbr|    statedesc|cityname|geographiclevel

All the column names should be in lower case.

2 Answers:

Answer 0 (score: 3):

Just use toDF:

df.toDF(df.columns map(_.toLowerCase): _*)
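
For illustration, a minimal sketch of this on a throwaway DataFrame (the data and column names here are made up, and spark1 is the session from the question):

import spark1.implicits._
val sample = Seq((1, 2015, "CA")).toDF("Sno", "Year", "StateAbbr")
val lowered = sample.toDF(sample.columns.map(_.toLowerCase): _*)
lowered.printSchema()   // columns are now sno, year, stateabbr

Since toDF takes the full list of new names as varargs, every column is renamed in a single call, with no need to list the names by hand.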

Answer 1 (score: 0):

Another way to achieve this is with the foldLeft method.

val myDFcolNames = myDF.columns.toList
val rdoDenormDF = myDFcolNames.foldLeft(myDF)((accDF, colName) =>
    accDF.withColumnRenamed(colName, colName.toLowerCase))
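
Applied to the question's DataFrame, this would look roughly like the following (a sketch, reusing the df read from the CSV above):

val lowered = df.columns.foldLeft(df)((acc, c) => acc.withColumnRenamed(c, c.toLowerCase))
lowered.show(1)

Each withColumnRenamed call returns a new DataFrame with one column renamed, so foldLeft simply threads the DataFrame through all the names. The toDF approach from the first answer performs the same rename in a single call, which is usually the simpler choice for very wide tables.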