NPE when calling sqlContext.createDataFrame()

Asked: 2019-07-17 05:26:02

Tags: java apache-spark

I have the following code, which uses Spark to save a list of records into a MySQL database:

DataFrame df = sqlContext.createDataFrame(airlineList, Airline.class);
sparkHelper.saveAirlinesToMysql(df);
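
(The job dies inside createDataFrame, before the save helper ever runs, but for completeness: saveAirlinesToMysql is essentially a plain JDBC append, along these lines. This is a sketch assuming the Spark 1.4+ DataFrameWriter API; the URL, table name, and credentials are placeholders.)

import java.util.Properties;

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SaveMode;

public void saveAirlinesToMysql(DataFrame df) {
    // Placeholder connection settings -- substitute real values.
    Properties props = new Properties();
    props.setProperty("user", "dbuser");
    props.setProperty("password", "dbpass");
    // Append the DataFrame rows to the existing MySQL table.
    df.write()
      .mode(SaveMode.Append)
      .jdbc("jdbc:mysql://localhost:3306/mydb", "airline", props);
}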

The Airline class is defined as follows:

import java.io.Serializable;

import lombok.AccessLevel;
import lombok.Getter;
import lombok.Setter;

@Getter @Setter
public class Airline implements Serializable {
    private Long id;
    private String name;
    private String referenceId;
    // No Lombok-generated accessors for this field; a lazy getter is
    // written by hand below.
    @Getter(AccessLevel.NONE)
    @Setter(AccessLevel.NONE)
    private Object[] parsedReferenceId;

    public Object[] getParsedReferenceId() {
        if (parsedReferenceId == null) {
            parsedReferenceId = ReferenceHelper.parseReferenceId(this.referenceId);
        }
        return parsedReferenceId;
    }
}
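
Note that even with @Getter(AccessLevel.NONE), the hand-written public getParsedReferenceId() still makes parsedReferenceId a JavaBean property, and Spark's JavaTypeInference discovers columns through JavaBean introspection. A quick way to see what Spark sees (BeanCheck is just a throwaway class for this experiment):

import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class BeanCheck {
    public static void main(String[] args) throws Exception {
        // List the bean properties that introspection reports for Airline;
        // schema inference walks these getters to build the DataFrame schema.
        for (PropertyDescriptor pd :
                Introspector.getBeanInfo(Airline.class).getPropertyDescriptors()) {
            System.out.println(pd.getName() + " -> " + pd.getPropertyType());
        }
        // parsedReferenceId is expected to appear here with type Object[],
        // despite the @Getter(AccessLevel.NONE) annotation.
    }
}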

When I run the Spark job to save to the database, it fails with:

ERROR [2019-07-17 05:06:49,505] com.api.command.SparkDriverCommand: Spark run failed due to NullPointerException at org.spark-project.guava.reflect.TypeToken.method(TypeToken.java:465)
java.lang.NullPointerException
        at org.spark-project.guava.reflect.TypeToken.method(TypeToken.java:465)
        at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:110)
        at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:109)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
        at org.apache.spark.sql.catalyst.JavaTypeInference$.org$apache$spark$sql$catalyst$JavaTypeInference$$inferDataType(JavaTypeInference.scala:109)
        at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:111)
        at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:109)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
        at org.apache.spark.sql.catalyst.JavaTypeInference$.org$apache$spark$sql$catalyst$JavaTypeInference$$inferDataType(JavaTypeInference.scala:109)
        at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:111)
        at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:109)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
        at org.apache.spark.sql.catalyst.JavaTypeInference$.org$apache$spark$sql$catalyst$JavaTypeInference$$inferDataType(JavaTypeInference.scala:109)
        at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:111)
        at org.apache.spark.sql.catalyst.JavaTypeInference$$anonfun$2.apply(JavaTypeInference.scala:109)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
        at org.apache.spark.sql.catalyst.JavaTypeInference$.org$apache$spark$sql$catalyst$JavaTypeInference$$inferDataType(JavaTypeInference.scala:109)
        at org.apache.spark.sql.catalyst.JavaTypeInference$.inferDataType(JavaTypeInference.scala:54)
        at org.apache.spark.sql.SQLContext.getSchema(SQLContext.scala:941)
        at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:603)
        at com.api.command.SparkDriverCommand.saveAirlines(SparkDriverCommand.java:485)
        at com.api.command.SparkDriverCommand.run(SparkDriverCommand.java:465)
        at com.api.command.SparkDriverCommand.run(SparkDriverCommand.java:107)
        at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:85)
        at com.api.command.SparkDriverCommand.run(SparkDriverCommand.java:120)

I wonder whether this is because the parsedReferenceId field has no Lombok-generated setter and getter. Please help; thanks in advance.
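
If that is indeed the cause, one workaround I can think of (unverified) is to rename the lazy accessor so it no longer follows the JavaBean getter naming convention, which should keep schema inference away from the Object[] field entirely:

// Unverified workaround sketch: a method not named getXxx() is invisible
// to JavaBean introspection, so schema inference would ignore this field.
public Object[] parsedReferenceId() {
    if (parsedReferenceId == null) {
        parsedReferenceId = ReferenceHelper.parseReferenceId(this.referenceId);
    }
    return parsedReferenceId;
}

With that change, introspection would report only id, name, and referenceId as columns.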

0 Answers
