java.lang.IllegalArgumentException: requirement failed: Columns not found in Double

Time: 2018-01-29 13:28:20

Tags: scala csv cassandra rdd spark-cassandra-connector

I'm using Spark and I have many CSV files containing rows; a row looks like this:

2017,16,16,51,1,1,4,-79.6,-101.90,-98.900

It can contain more or fewer fields, depending on the CSV file.

Each file corresponds to one Cassandra table, and I need to insert all the rows the file contains. So what I basically do is take the rows, split their elements, and put them in a List[Double]:

sc.stop
import com.datastax.spark.connector._, org.apache.spark.SparkContext, org.apache.spark.SparkContext._, org.apache.spark.SparkConf


val conf = new SparkConf(true).set("spark.cassandra.connection.host", "localhost")
val sc = new SparkContext(conf)
val nameTable = "artport"
val ligne = "20171,16,165481,51,1,1,4,-79.6000,-101.7000,-98.9000"
val linetoinsert : List[String] = ligne.split(",").toList
var ainserer : Array[Double] = new Array[Double](linetoinsert.length)
for (l <- 0 until linetoinsert.length) { ainserer(l) = linetoinsert(l).toDouble }
val liste = ainserer.toList
val rdd = sc.parallelize(liste)
rdd.saveToCassandra("db", nameTable) //db is the name of my keyspace in cassandra

When I run my code, I get this error:

java.lang.IllegalArgumentException: requirement failed: Columns not found in Double: [collecttime, sbnid, enodebid, rackid, shelfid, slotid, channelid, c373910000, c373910001, c373910002]
  at scala.Predef$.require(Predef.scala:224)
  at com.datastax.spark.connector.mapper.DefaultColumnMapper.columnMapForWriting(DefaultColumnMapper.scala:108)
  at com.datastax.spark.connector.writer.MappedToGettableDataConverter$$anon$1.<init>(MappedToGettableDataConverter.scala:37)
  at com.datastax.spark.connector.writer.MappedToGettableDataConverter$.apply(MappedToGettableDataConverter.scala:28)
  at com.datastax.spark.connector.writer.DefaultRowWriter.<init>(DefaultRowWriter.scala:17)
  at com.datastax.spark.connector.writer.DefaultRowWriter$$anon$1.rowWriter(DefaultRowWriter.scala:31)
  at com.datastax.spark.connector.writer.DefaultRowWriter$$anon$1.rowWriter(DefaultRowWriter.scala:29)
  at com.datastax.spark.connector.writer.TableWriter$.apply(TableWriter.scala:382)
  at com.datastax.spark.connector.RDDFunctions.saveToCassandra(RDDFunctions.scala:35)
  ... 60 elided

I've found that the insert works if my RDD has the type:

rdd: org.apache.spark.rdd.RDD[(Double, Double, Double, Double, Double, Double, Double, Double, Double, Double)]
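
For reference, a minimal sketch of that working tuple-based save (reusing sc and the imports from the snippet above; the column names are taken from the error message above):

val tupleRdd = sc.parallelize(Seq(
  (20171.0, 16.0, 165481.0, 51.0, 1.0, 1.0, 4.0, -79.6, -101.7, -98.9)))
// SomeColumns tells the connector which table column each tuple element maps to
tupleRdd.saveToCassandra("db", "artport",
  SomeColumns("collecttime", "sbnid", "enodebid", "rackid", "shelfid",
    "slotid", "channelid", "c373910000", "c373910001", "c373910002"))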

But the RDD my code produces is an org.apache.spark.rdd.RDD[Double].

I can't use a Scala Tuple9, for example, because I don't know how many elements my list will contain before execution. And that approach wouldn't fit my problem anyway, because some of my CSVs have more than 100 columns and tuples stop at Tuple22.

Thanks for your help.

1 answer:

Answer 0 (score: 0)

As @SergGr mentioned, a Cassandra table has a schema with known columns, so you need to map your Array onto that schema before saving it to the Cassandra database. You can use a case class for this. Try the code below; I'm assuming every column in your Cassandra table is of type Double.
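
A minimal sketch of that approach, assuming the ten columns from the error message, all of type Double (the class and helper names are illustrative):

import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// One field per table column, named after the columns listed in the error; all assumed Double
case class ArtportRow(collecttime: Double, sbnid: Double, enodebid: Double,
                      rackid: Double, shelfid: Double, slotid: Double,
                      channelid: Double, c373910000: Double,
                      c373910001: Double, c373910002: Double)

// Parse one CSV line into the case class
def parseLine(s: String): ArtportRow = {
  val d = s.split(",").map(_.toDouble)
  ArtportRow(d(0), d(1), d(2), d(3), d(4), d(5), d(6), d(7), d(8), d(9))
}

val conf = new SparkConf(true).set("spark.cassandra.connection.host", "localhost")
val sc = new SparkContext(conf)
val ligne = "20171,16,165481,51,1,1,4,-79.6000,-101.7000,-98.9000"

// An RDD[ArtportRow] carries field names, so the connector can map each field to its column
val rdd = sc.parallelize(Seq(parseLine(ligne)))
rdd.saveToCassandra("db", "artport")

Unlike an RDD[Double], each element here is a whole row whose field names match the table's columns, which is exactly what the connector's column mapper needs.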