Spark serialization error with LocalDateTime

Asked: 2016-03-09 07:16:44

Tags: scala, apache-spark

Code:

val rdd = sc.textFile("/tmp/abc.csv")
rdd.first.split(",").zipWithIndex
val rows = rdd.filter(x => !x.contains("ID") && !x.contains("Case Number"))
val split1 = rows.map(x => x.split(","))
split1.take(3)

import java.time._
import java.time.format._

val format = DateTimeFormatter.ofPattern("MM/dd/yyyy h:m:s a")
val dates = split1.map(x => LocalDateTime.parse(x(2), format))

Error:


org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
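The underlying cause is that `java.time.format.DateTimeFormatter` does not implement `java.io.Serializable`, so the closure passed to `map`, which captures the `format` val, cannot be serialized and shipped to executors. This can be checked locally with plain Scala, no Spark required:

```scala
import java.time.format.DateTimeFormatter

val format = DateTimeFormatter.ofPattern("MM/dd/yyyy h:m:s a")

// DateTimeFormatter is immutable and thread-safe, but it does not
// implement java.io.Serializable, which is why Spark's ClosureCleaner
// rejects any task closure that captures it.
println(format.isInstanceOf[java.io.Serializable])  // prints false
```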

1 Answer:

Answer 0 (score: 1)

A fairly ugly way to handle this is to push the formatter initialization inside the anonymous function:

split1.map(x => 
  LocalDateTime.parse(x(2), DateTimeFormatter.ofPattern("MM/dd/yyyy h:m:s a")))
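That version rebuilds the formatter for every record. A common refinement, not part of the original answer, is to build the formatter once per partition via `mapPartitions`; nothing non-serializable is captured, because the formatter is constructed inside the function. The helper name `parsePartition` and the sample date strings below are illustrative; the same body would be passed to `split1.mapPartitions` on the RDD:

```scala
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// Per-partition parsing: the formatter is created once per partition
// (once per call), not once per record, and the closure captures
// nothing that would have to be serialized.
def parsePartition(rows: Iterator[String]): Iterator[LocalDateTime] = {
  val format = DateTimeFormatter.ofPattern("MM/dd/yyyy h:m:s a")
  rows.map(s => LocalDateTime.parse(s, format))
}

// Hypothetical sample values standing in for x(2) of each CSV row.
val sample = Iterator("03/09/2016 7:16:44 AM", "03/09/2016 8:05:01 PM")
println(parsePartition(sample).toList)
// List(2016-03-09T07:16:44, 2016-03-09T20:05:01)
```

On the RDD this would look like `split1.mapPartitions(it => { val f = DateTimeFormatter.ofPattern("MM/dd/yyyy h:m:s a"); it.map(x => LocalDateTime.parse(x(2), f)) })`.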